Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k) |
---|---|---|
8,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Experiment
Step1: Load and check data
Step2: ## Analysis
Experiment Details
Step3: Plot accuracy over epochs | Python Code:
%load_ext autoreload
%autoreload 2
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import glob
import tabulate
import pprint
import click
import numpy as np
import pandas as pd
from ray.tune.commands import *
from nupic.research.frameworks.dynamic_sparse.common.browser import *
import matplotlib.pyplot as plt
from matplotlib import rcParams
from scipy.ndimage.filters import gaussian_filter1d
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set(style="whitegrid")
sns.set_palette("colorblind")
Explanation: Experiment:
Evaluate pruning by magnitude weighted by coactivations (more thorough evaluation), compare it to baseline (SET), in GSC. Applied only to linear layers
Motivation.
Check if results are consistently above baseline.
Conclusion
End of explanation
exps = ['comparison_pruning_2' , 'comparison_iterative_pruning_2', 'comparison_set_2']
paths = [os.path.expanduser("~/nta/results/{}".format(e)) for e in exps]
df = load_many(paths)
df.head(5)
df.shape
df.columns
df['model'].unique()
# calculate density for each model
df.loc[df['model'] == 'PruningModel', 'density'] = df.loc[df['model'] == 'PruningModel', 'target_final_density']
df.loc[df['model'] == 'IterativePruningModel', 'density'] = df.loc[df['model'] == 'IterativePruningModel', 'target_final_density']
df.loc[df['model'] == 'SET', 'density'] = df.loc[df['model'] == 'SET', 'on_perc']
Explanation: Load and check data
End of explanation
# Did any trials fail?
num_epochs = 200
df[df["epochs"]<num_epochs]["epochs"].count()
# Removing failed or incomplete trials
df_origin = df.copy()
df = df_origin[df_origin["epochs"]>=30]
df.shape
# helper functions
def mean_and_std(s):
return "{:.3f} ± {:.3f}".format(s.mean(), s.std())
def round_mean(s):
return "{:.0f}".format(round(s.mean()))
stats = ['min', 'max', 'mean', 'std']
def agg(columns, filter=None, round=3):
if filter is None:
return (df.groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
else:
return (df[filter].groupby(columns)
.agg({'val_acc_max_epoch': round_mean,
'val_acc_max': stats,
'model': ['count']})).round(round)
agg(['density', 'model'])
Explanation: ## Analysis
Experiment Details
End of explanation
# plot max validation accuracy vs. density, by model
rcParams['figure.figsize'] = 16, 8
sns.scatterplot(data=df, x='density', y='val_acc_max', hue='model')
sns.lineplot(data=df, x='density', y='val_acc_max', hue='model', legend=False);
Explanation: Plot accuracy over epochs
End of explanation |
8,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
FireEye Health Insurance 2016 Analysis
Individual Plans (Employee only)
Assumptions and Notes
Step1: Helper functions
Step2: Plan cost functions
Step3: Sanity Tests
Zero costs
Step4: Cost greater than HSA and deductible
Step5: Individual Cost | Python Code:
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
class Plan: pass
# Plan 1 = Cigna HDHP/HSA
p1 = Plan()
p1.family_deductible = 2000.00 # Same deductible for both family and individual for the HDHP
p1.individual_deductible = 2000.00
p1.family_oopmax = 3000.00 # Same out-of-pocket max for family and individual for the HDHP
p1.individual_oopmax = 3000.00
p1.premium_monthly = 0.0
p1.hsa_contribution = 1200.00
p1.coinsurance_rate = 0.1
# Plan 2 = Cigna PPO $1000
p2 = Plan()
p2.family_deductible = 1000.00 #N/A for individual simulation
p2.individual_deductible = 1000.00
p2.family_oopmax = 4000.00 # N/A for individual simulation
p2.individual_oopmax = 4000.00
p2.premium_monthly = 0
p2.hsa_contribution = 0.0
p2.coinsurance_rate = 0.2
# Plan 3 = Cigna PPO $500
p3 = Plan()
p3.family_deductible = 500.00 # N/A for individual simulation
p3.individual_deductible = 500.00
p3.family_oopmax = 3500.00 # N/A for individual simulation
p3.individual_oopmax = 3500.00
p3.premium_monthly = 21*2 # price/pay period * 2 pay periods/month
p3.hsa_contribution = 0.0
p3.coinsurance_rate = 0.1
Explanation: FireEye Health Insurance 2016 Analysis
Individual Plans (Employee only)
Assumptions and Notes:
In-network procedures
All medical bills are paid pre-tax (via either HSAs or FSAs).
These cost calculations do NOT take into account prescription drugs (which is something the PPO plans tend to be superior in).
Plan Details
End of explanation
# For the purposes of this estimation, we are assuming the deductible
# is always larger than the HSA contribution amount
def apply_deductible_and_hsa(cost, deductible, hsa):
cost_to_you = 0
cost_remaining = 0
# Apply HSA
deductible_minus_hsa = deductible - hsa
if cost <= hsa:
cost_to_you = 0
cost_remaining = 0
elif cost <= deductible:
cost_to_you = cost - hsa
cost_remaining = 0
elif cost > deductible:
cost_to_you = deductible_minus_hsa
cost_remaining = cost - deductible
return (cost_to_you, cost_remaining)
def apply_coinsurance(cost, coinsurance_rate):
return cost * coinsurance_rate
def apply_oopmax(cost, oopmax):
if cost >= oopmax:
return oopmax
else:
return cost
def setup_graph(title='', x_label='', y_label='', fig_size=None):
fig = plt.figure()
if fig_size != None:
fig.set_size_inches(fig_size[0], fig_size[1])
ax = fig.add_subplot(111)
ax.set_title(title)
ax.set_xlabel(x_label)
ax.set_ylabel(y_label)
Explanation: Helper functions
End of explanation
def individual_cost(plan, gross_cost):
(cost_to_you, cost_remaining) = apply_deductible_and_hsa(gross_cost,
plan.individual_deductible,
plan.hsa_contribution)
cost_to_you += apply_coinsurance(cost_remaining, plan.coinsurance_rate)
cost_to_you = apply_oopmax(cost_to_you, plan.individual_oopmax)
# Apply yearly premiums - note that the out-of-pocket max doesn't include
# the premiums; thus, we apply them after applying out-of-pocket max.
cost_to_you += (plan.premium_monthly * 12)
return cost_to_you
def family_cost(plan, gross_cost):
(cost_to_you, cost_remaining) = apply_deductible_and_hsa(gross_cost,
plan.family_deductible,
plan.hsa_contribution)
cost_to_you += apply_coinsurance(cost_remaining, plan.coinsurance_rate)
cost_to_you = apply_oopmax(cost_to_you, plan.family_oopmax)
# Apply yearly premiums - note that the out-of-pocket max doesn't include
# the premiums; thus, we apply them after applying out-of-pocket max.
cost_to_you += (plan.premium_monthly * 12)
return cost_to_you
Explanation: Plan cost functions
End of explanation
# Should be the monthly premium times 12 (to make up the yearly premium).
family_cost(p1, 0)
p1.premium_monthly * 12.0
family_cost(p2, 0)
p2.premium_monthly * 12.0
family_cost(p3, 0)
p3.premium_monthly * 12.0
Explanation: Sanity Tests
Zero costs
End of explanation
(p1.premium_monthly * 12) + \
(p1.family_deductible - p1.hsa_contribution) + \
(6000 - p1.family_deductible) * p1.coinsurance_rate
Explanation: Cost greater than HSA and deductible
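As an extra check (not in the original notebook), the plan cost function can be evaluated at the same gross cost used in the manual calculation above; the two numbers should agree:
```python
family_cost(p1, 6000)  # expected to match the hand-computed value above
```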
End of explanation
# Calculate costs
gross_costs = range(0, 40000)
p1_costs = [individual_cost(p1, cost) for cost in gross_costs]
p2_costs = [individual_cost(p2, cost) for cost in gross_costs]
p3_costs = [individual_cost(p3, cost) for cost in gross_costs]
# Do graph
setup_graph(title='Individual costs', x_label='Gross cost', y_label='Cost to you', fig_size=(12,7))
ax = plt.subplot(1,1,1)
p1_graph, = ax.plot(gross_costs, p1_costs, label="Cigna HDHP/HSA")
p2_graph, = ax.plot(gross_costs, p2_costs, label="Cigna PPO $1000")
p3_graph, = ax.plot(gross_costs, p3_costs, label="Cigna PPO $500")
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles, labels, loc='upper left')
plt.show()
Explanation: Individual Cost
End of explanation |
8,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Heteroscedastic Regression
Updated on 27th November 2015
by Ricardo Andrade
In this IPython Notebook we will look at how to implement a GP regression with different noise terms using GPy.
$\bf N.B.
Step1: As an example we will use the following function, which has a peak around 0.
Step2: We will draw some points $( {\bf x},y)$ from the function above, and add some noise on $y$.
Step3: We will use a combination of an MLP and Bias kernels. Although other kernels can be used as well.
Step4: For the moment, we will assume that we already know the error on each observation.
To call the model we just need to run these lines.
Step5: In the following example we show how the magnitude of the noise of a specific observation modifies the model fit.
Step6: Scroll the bar to see how the fitted GP changes.
Step7: If we set all the noise terms to be equal, then we have just a homoscedastic GP regression model. The code below shows a comparison between the heteroscedastic and the homoscedastic models when the noise terms are fixed to the same value.
Step8: We can also learn the noise for each observation.
In this case I found it useful to set a lower bound on the noise terms of $10^{-6}$ or to add a white noise kernel.
In this case, I found it useful to add a white noise kernel.
Step9: Predictions of $y$ at new points $\bf x$ would need an estimate of the noise term; however, we only have those for the training set. At the moment we don't have a routine to estimate heteroscedastic noise at new points. Estimates for the GP at new points are still available using the following command
Step10: plot density
The new GPy release allows us to plot the density of the GP in a more fine-grained way. This is shown below | Python Code:
import numpy as np
import pylab as pb
import GPy
%pylab inline
Explanation: Heteroscedastic Regression
Updated on 27th November 2015
by Ricardo Andrade
In this IPython Notebook we will look at how to implement a GP regression with different noise terms using GPy.
$\bf N.B.:$ There is currently no implementation to predict the noise for outputs that are not part of the training set.
Usually, a GP regression model assumes that a set of targets $\{ y_1,\ldots,y_n \}$ is related to a set of inputs $\{ {\bf x}_1,\ldots, {\bf x}_n \}$ through the relation:
$$ y_i = f({\bf x}_i) + \epsilon_i, $$
where $f \sim \mathcal{GP}$ and $\epsilon_i \sim \mathcal{N}(0,\sigma^2)$ for all $i$. A heteroscedastic model works in the same way, but allows a different variance for the noise term of each observation, i.e., $\epsilon_i \sim \mathcal{N}(0,\sigma_i^2)$. By adding this assumption, the model gives a different weight to each observation: it will try to fit the observations with smaller noise more closely, and it is free to fit the observations with larger noise less tightly.
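As a minimal illustration of that difference (an added sketch, not part of the original notebook; the per-point noise scales are made-up numbers):
```python
import numpy as np
x = np.linspace(-10, 20, 5)
sigma_i = np.array([0.1, 0.1, 2.0, 0.5, 0.1])     # one (assumed) noise scale per observation
y = 10. + .1 * x + np.random.normal(0, sigma_i)   # heteroscedastic: each y_i has its own variance
```
A homoscedastic model would use a single shared scale in place of `sigma_i`.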
Before using GPy, we need to perform some setup.
End of explanation
def f(X):
return 10. + .1*X + 2*np.sin(X)/X
fig,ax = pb.subplots()
ax.plot(np.linspace(-15,25),f(np.linspace(-15,25)),'r-')
ax.grid()
Explanation: As an example we will use the following function, which has a peak around 0.
End of explanation
X = np.random.uniform(-10,20, 50)
X = X[~np.logical_and(X>-2,X<3)] #Remove points between -2 and 3 (just for illustration)
X = np.hstack([np.random.uniform(-1,1,1),X]) #Prepend a point between -1 and 1 (just for illustration)
error = np.random.normal(0,.2,X.size)
Y = f(X) + error
fig,ax = pb.subplots()
ax.plot(np.linspace(-15,25),f(np.linspace(-15,25)),'r-')
ax.plot(X,Y,'kx',mew=1.5)
ax.grid()
Explanation: We will draw some points $( {\bf x},y)$ from the function above, and add some noise on $y$.
End of explanation
kern = GPy.kern.MLP(1) + GPy.kern.Bias(1)
Explanation: We will use a combination of an MLP and Bias kernels. Although other kernels can be used as well.
End of explanation
m = GPy.models.GPHeteroscedasticRegression(X[:,None],Y[:,None],kern)
m['.*het_Gauss.variance'] = abs(error)[:,None] #Set the noise parameters to the error in Y
m.het_Gauss.variance.fix() #We can fix the noise term, since we already know it
m.optimize()
m.plot_f() #Show the predictive values of the GP.
pb.errorbar(X,Y,yerr=np.array(m.likelihood.flattened_parameters).flatten(),fmt=None,ecolor='r',zorder=1)
pb.grid()
pb.plot(X,Y,'kx',mew=1.5)
Explanation: For the moment, we will assume that we already know the error on each observation.
To call the model we just need to run these lines.
End of explanation
def noise_effect(noise):
m.het_Gauss.variance[:1] = noise
m.het_Gauss.variance.fix()
m.optimize()
m.plot_f()
pb.errorbar(X.flatten(),Y.flatten(),yerr=np.array(m.likelihood.flattened_parameters).flatten(),fmt=None,ecolor='r',zorder=1)
pb.plot(X[1:],Y[1:],'kx',mew=1.5)
pb.plot(X[:1],Y[:1],'ko',mew=.5)
pb.grid()
Explanation: In the following example we show how the magnitude of the noise of a specific observation modifies the model fit.
End of explanation
from IPython.html.widgets import *
interact(noise_effect, noise=(0.1,2.))
Explanation: Scroll the bar to see how the fitted GP changes.
End of explanation
#Heteroscedastic model
m1 = GPy.models.GPHeteroscedasticRegression(X[:,None],Y[:,None],kern)
m1.het_Gauss.variance = .05
m1.het_Gauss.variance.fix()
m1.optimize()
# Homoscedastic model
m2 = GPy.models.GPRegression(X[:,None],Y[:,None],kern)
m2['.*Gaussian_noise'] = .05
m2['.*noise'].fix()
m2.optimize()
m1.plot_f()
pb.title('Heteroscedastic model')
m2.plot_f()
pb.title('Homoscedastic model')
print "Kernel parameters (optimized) in the heteroscedastic model"
print m1.kern
print "\nKernel parameters (optimized) in the homoscedastic model"
print m2.kern
Explanation: If we set all the noise terms to be equal, then we have just a homoscedastic GP regression model. The code below shows a comparison between the heteroscedastic and the homoscedastic models when the noise terms are fixed to the same value.
End of explanation
kern = GPy.kern.MLP(1) + GPy.kern.Bias(1)
m = GPy.models.GPHeteroscedasticRegression(X[:,None],Y[:,None],kern)
m.optimize()
fig, ax = plt.subplots(1,1,figsize=(13,5))
m.plot_f(ax=ax)
m.plot_data(ax=ax)
m.plot_errorbars_trainset(ax=ax, alpha=1)
fig.tight_layout()
pb.grid()
Explanation: We can also learn the noise for each observation.
In this case I found it useful to set a lower bound on the noise terms of $10^{-6}$ or to add a white noise kernel.
In this case, I found it useful to add a white noise kernel.
End of explanation
mu, var = m._raw_predict(m.X)
Explanation: Predictions of $y$ at new points $\bf x$ would need an estimate of the noise term; however, we only have those for the training set. At the moment we don't have a routine to estimate heteroscedastic noise at new points. Estimates for the GP at new points are still available using the following command:
End of explanation
fig, ax = plt.subplots(1,1,figsize=(13,5))
m.plot_f(ax=ax, plot_density=True)
m.plot_data(ax=ax)
m.plot_errorbars_trainset(ax=ax, alpha=1)
fig.tight_layout()
pb.grid()
Explanation: plot density
The new GPy release allows us to plot the density of the GP in a more fine-grained way. This is shown below:
End of explanation |
8,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Linear Mixed-Effect Regression in {TF Probability, R, Stan}
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step3: 2 Hierarchical Linear Model
For our comparison between R, Stan, and TFP, we will fit a Hierarchical Linear Model (HLM) to the Radon dataset made popular in Bayesian Data Analysis by Gelman et al. (page 559, second ed; page 250, third ed.).
We assume the following generative model
Step4: 3.1 Know Thy Data
In this section we explore the radon dataset to get a better sense of why the proposed model might be reasonable.
Step5: Conclusions
Step6: 5 HLM In Stan
In this section we use rstanarm to fit a Stan model using the same formula/syntax as the lme4 model above.
Unlike lme4 and the TF model below, rstanarm is a fully Bayesian model, i.e., all parameters are presumed drawn from a Normal distribution with parameters themselves drawn from a distribution.
NOTE
Step7: Note
Step8: Note
Step9: Retrieve the point estimates and conditional standard deviations for the group random effects from lme4 for visualization later.
Step10: Draw samples for the county weights using the lme4 estimated means and standard deviations.
Step11: We also retrieve the posterior samples of the county weights from the Stan fit.
Step12: This Stan example shows how one would implement LMER in a style closer to TFP, i.e., by directly specifying the probabilistic model.
6 HLM In TF Probability
In this section we will use low-level TensorFlow Probability primitives (Distributions) to specify our Hierarchical Linear Model as well as fit the unknown parameters.
Step13: 6.1 Specify Model
In this section we specify the radon linear mixed-effect model using TFP primitives. To do this, we specify two functions which produce two TFP distributions
Step15: The following function constructs our prior, $p(\beta|\sigma_C)$ where $\beta$ denotes the random-effect weights and $\sigma_C$ the standard deviation.
We use tf.make_template to ensure that the first call to this function instantiates the TF variables it uses and all subsequent calls reuse the variable's current value.
Step16: The following function constructs our likelihood, $p(y|x,\omega,\beta,\sigma_N)$ where $y,x$ denote response and evidence, $\omega,\beta$ denote fixed- and random-effect weights, and $\sigma_N$ the standard deviation.
Here again we use tf.make_template to ensure the TF variables are reused across calls.
Step17: Finally we use the prior and likelihood generators to construct the joint log-density.
Step18: 6.2 Training (Stochastic Approximation of Expectation Maximization)
To fit our linear mixed-effect regression model, we will use a stochastic approximation version of the Expectation Maximization algorithm (SAEM). The basic idea is to use samples from the posterior to approximate the expected joint log-density (E-step). Then we find the parameters which maximize this calculation (M-step). Somewhat more concretely, the fixed-point iteration is given by
Step19: We now complete the E-step setup by creating an HMC transition kernel.
Notes
Step20: We now set-up the M-step. This is essentially the same as an optimization one might do in TF.
Step21: We conclude with some housekeeping tasks. We must tell TF that all variables are initialized. We also create handles to our TF variables so we can print their values at each iteration of the procedure.
Step22: 6.3 Execute
In this section we execute our SAEM TF graph. The main trick here is to feed our last draw from the HMC kernel into the next iteration. This is achieved through our use of feed_dict in the sess.run call.
Step23: Looks like after ~1500 steps, our estimates of the parameters have stabilized.
6.4 Results
Now that we've fit the parameters, let's generate a large number of posterior samples and study the results.
Step24: We now construct a box and whisker diagram of the $\beta_c \log(\text{UraniumPPM}_c)$ random-effect. We'll order the random-effects by decreasing county frequency.
Step25: From this box and whisker diagram, we observe that the variance of the county-level $\log(\text{UraniumPPM})$ random-effect increases as the county is less represented in the dataset. Intutively this makes sense--we should be less certain about the impact of a certain county if we have less evidence for it.
7 Side-by-Side-by-Side Comparison
We now compare the results of all three procedures. To do this, we will compute non-parametric estimates of the posterior samples as generated by Stan and TFP. We will also compare against the parametric (approximate) estimates produced by R's lme4 package.
The following plot depicts the posterior distribution of each weight for each county in Minnesota. We show results for Stan (red), TFP (blue), and R's lme4 (orange). We shade the results from Stan and TFP, so we expect to see purple where the two agree. For simplicity we do not shade results from R. Each subplot represents a single county; subplots are ordered by descending county frequency in raster scan order (i.e., from left-to-right then top-to-bottom).
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
%matplotlib inline
import os
from six.moves import urllib
import numpy as np
import pandas as pd
import warnings
from matplotlib import pyplot as plt
import seaborn as sns
from IPython.core.pylabtools import figsize
figsize(11, 9)
import tensorflow.compat.v1 as tf
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
Explanation: Linear Mixed-Effect Regression in {TF Probability, R, Stan}
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/HLM_TFP_R_Stan"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/HLM_TFP_R_Stan.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
1 Introduction
In this colab we will fit a linear mixed-effect regression model to a popular, toy dataset. We will make this fit thrice, using R's lme4, Stan's mixed-effects package, and TensorFlow Probability (TFP) primitives. We conclude by showing all three give roughly the same fitted parameters and posterior distributions.
Our main conclusion is that TFP has the general pieces necessary to fit HLM-like models and that it produces results which are consistent with other software packages, i.e., lme4 and rstanarm. This colab is not an accurate reflection of the computational efficiency of any of the packages compared.
End of explanation
def load_and_preprocess_radon_dataset(state='MN'):
Preprocess Radon dataset as done in "Bayesian Data Analysis" book.
We filter to Minnesota data (919 examples) and preprocess to obtain the
following features:
- `log_uranium_ppm`: Log of soil uranium measurements.
- `county`: Name of county in which the measurement was taken.
- `floor`: Floor of house (0 for basement, 1 for first floor) on which the
measurement was taken.
The target variable is `log_radon`, the log of the Radon measurement in the
house.
ds = tfds.load('radon', split='train')
radon_data = tfds.as_dataframe(ds)
radon_data.rename(lambda s: s[9:] if s.startswith('feat') else s, axis=1, inplace=True)
df = radon_data[radon_data.state==state.encode()].copy()
# For any missing or invalid activity readings, we'll use a value of `0.1`.
df['radon'] = df.activity.apply(lambda x: x if x > 0. else 0.1)
# Make county names look nice.
df['county'] = df.county.apply(lambda s: s.decode()).str.strip().str.title()
# Remap categories to start from 0 and end at max(category).
county_name = sorted(df.county.unique())
df['county'] = df.county.astype(
pd.api.types.CategoricalDtype(categories=county_name)).cat.codes
county_name = list(map(str.strip, county_name))
df['log_radon'] = df['radon'].apply(np.log)
df['log_uranium_ppm'] = df['Uppm'].apply(np.log)
df = df[['idnum', 'log_radon', 'floor', 'county', 'log_uranium_ppm']]
return df, county_name
radon, county_name = load_and_preprocess_radon_dataset()
# We'll use the following directory to store our preprocessed dataset.
CACHE_DIR = os.path.join(os.sep, 'tmp', 'radon')
# Save processed data. (So we can later read it in R.)
if not tf.gfile.Exists(CACHE_DIR):
tf.gfile.MakeDirs(CACHE_DIR)
with tf.gfile.Open(os.path.join(CACHE_DIR, 'radon.csv'), 'w') as f:
radon.to_csv(f, index=False)
Explanation: 2 Hierarchical Linear Model
For our comparison between R, Stan, and TFP, we will fit a Hierarchical Linear Model (HLM) to the Radon dataset made popular in Bayesian Data Analysis by Gelman et al. (page 559, second ed; page 250, third ed.).
We assume the following generative model:
$$\begin{align}
\text{for } & c=1\ldots \text{NumCounties}:\\
& \beta_c \sim \text{Normal}\left(\text{loc}=0, \text{scale}=\sigma_C \right) \\
\text{for } & i=1\ldots \text{NumSamples}:\\
&\eta_i = \underbrace{\omega_0 + \omega_1 \text{Floor}_i}_{\text{fixed effects}} + \underbrace{\beta_{\text{County}_i} \log( \text{UraniumPPM}_{\text{County}_i})}_{\text{random effects}} \\
&\log(\text{Radon}_i) \sim \text{Normal}(\text{loc}=\eta_i , \text{scale}=\sigma_N)
\end{align}$$
In R's lme4 "tilde notation", this model is equivalent to:
log_radon ~ 1 + floor + (0 + log_uranium_ppm | county)
We will find the MLE for $\omega, \sigma_C, \sigma_N$ using the posterior distribution (conditioned on evidence) of $\{\beta_c\}_{c=1}^{\text{NumCounties}}$.
For essentially the same model but with a random intercept, see Appendix A.
For a more general specification of HLMs, see Appendix B.
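To make the generative story concrete, here is a small NumPy simulation of the process above; the county count, sample count, weights, and scales are made-up illustration values, not quantities estimated from the radon data:
```python
import numpy as np

num_counties, num_samples = 5, 20
sigma_c, sigma_n = 1.0, 0.7            # assumed prior and likelihood scales
omega = np.array([1.5, -0.7])          # assumed fixed-effect weights: [intercept, floor]
log_uranium = np.random.normal(size=num_counties)

beta = np.random.normal(0., sigma_c, size=num_counties)   # county random-effect weights
county = np.random.randint(num_counties, size=num_samples)
floor = np.random.randint(2, size=num_samples)
eta = omega[0] + omega[1] * floor + beta[county] * log_uranium[county]
log_radon = np.random.normal(eta, sigma_n)                 # simulated responses
```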
3 Data Munging
In this section we obtain the radon dataset and do some minimal preprocessing to make it comply with our assumed model.
End of explanation
radon.head()
fig, ax = plt.subplots(figsize=(22, 5));
county_freq = radon['county'].value_counts()
county_freq.plot(kind='bar', color='#436bad');
plt.xlabel('County index')
plt.ylabel('Number of radon readings')
plt.title('Number of radon readings per county', fontsize=16)
county_freq = np.array(list(zip(county_freq.index, county_freq.values))) # We'll use this later.
fig, ax = plt.subplots(ncols=2, figsize=[10, 4]);
radon['log_radon'].plot(kind='density', ax=ax[0]);
ax[0].set_xlabel('log(radon)')
radon['floor'].value_counts().plot(kind='bar', ax=ax[1]);
ax[1].set_xlabel('Floor');
ax[1].set_ylabel('Count');
fig.subplots_adjust(wspace=0.25)
Explanation: 3.1 Know Thy Data
In this section we explore the radon dataset to get a better sense of why the proposed model might be reasonable.
End of explanation
suppressMessages({
library('bayesplot')
library('data.table')
library('dplyr')
library('gfile')
library('ggplot2')
library('lattice')
library('lme4')
library('plyr')
library('rstanarm')
library('tidyverse')
RequireInitGoogle()
})
data = read_csv(gfile::GFile('/tmp/radon/radon.csv'))
head(data)
# https://github.com/stan-dev/example-models/wiki/ARM-Models-Sorted-by-Chapter
radon.model <- lmer(log_radon ~ 1 + floor + (0 + log_uranium_ppm | county), data = data)
summary(radon.model)
qqmath(ranef(radon.model, condVar=TRUE))
write.csv(as.data.frame(ranef(radon.model, condVar = TRUE)), '/tmp/radon/lme4_fit.csv')
Explanation: Conclusions:
- There's a long tail of 85 counties. (A common occurrence in GLMMs.)
- Indeed $\log(\text{Radon})$ is unconstrained. (So linear regression might make sense.)
- Readings are most made on the $0$-th floor; no reading was made above floor $1$. (So our fixed effects will only have two weights.)
4 HLM In R
In this section we use R's lme4 package to fit the probabilistic model described above.
NOTE: To execute this section, you must switch to an R colab runtime.
End of explanation
fit <- stan_lmer(log_radon ~ 1 + floor + (0 + log_uranium_ppm | county), data = data)
Explanation: 5 HLM In Stan
In this section we use rstanarm to fit a Stan model using the same formula/syntax as the lme4 model above.
Unlike lme4 and the TF model below, rstanarm is a fully Bayesian model, i.e., all parameters are presumed drawn from a Normal distribution with parameters themselves drawn from a distribution.
NOTE: To execute this section, you must switch to an R colab runtime.
End of explanation
fit
color_scheme_set("red")
ppc_dens_overlay(y = fit$y,
yrep = posterior_predict(fit, draws = 50))
color_scheme_set("brightblue")
ppc_intervals(
y = data$log_radon,
yrep = posterior_predict(fit),
x = data$county,
prob = 0.8
) +
labs(
x = "County",
y = "log radon",
title = "80% posterior predictive intervals \nvs observed log radon",
subtitle = "by county"
) +
panel_bg(fill = "gray95", color = NA) +
grid_lines(color = "white")
# Write the posterior samples (4000 for each variable) to a CSV.
write.csv(tidy(as.matrix(fit)), "/tmp/radon/stan_fit.csv")
Explanation: Note: The runtimes are from a single CPU core. (This colab is not intended to be a faithful representation of Stan or TFP runtime.)
End of explanation
with tf.gfile.Open('/tmp/radon/lme4_fit.csv', 'r') as f:
lme4_fit = pd.read_csv(f, index_col=0)
lme4_fit.head()
Explanation: Note: Switch back to the Python TF kernel runtime.
End of explanation
posterior_random_weights_lme4 = np.array(lme4_fit.condval, dtype=np.float32)
lme4_prior_scale = np.array(lme4_fit.condsd, dtype=np.float32)
print(posterior_random_weights_lme4.shape, lme4_prior_scale.shape)
Explanation: Retrieve the point estimates and conditional standard deviations for the group random effects from lme4 for visualization later.
End of explanation
with tf.Session() as sess:
lme4_dist = tfp.distributions.Independent(
tfp.distributions.Normal(
loc=posterior_random_weights_lme4,
scale=lme4_prior_scale),
reinterpreted_batch_ndims=1)
posterior_random_weights_lme4_final_ = sess.run(lme4_dist.sample(4000))
posterior_random_weights_lme4_final_.shape
Explanation: Draw samples for the county weights using the lme4 estimated means and standard deviations.
End of explanation
with tf.gfile.Open('/tmp/radon/stan_fit.csv', 'r') as f:
samples = pd.read_csv(f, index_col=0)
samples.head()
posterior_random_weights_cols = [
col for col in samples.columns if 'b.log_uranium_ppm.county' in col
]
posterior_random_weights_final_stan = samples[
posterior_random_weights_cols].values
print(posterior_random_weights_final_stan.shape)
Explanation: We also retrieve the posterior samples of the county weights from the Stan fit.
End of explanation
# Handy snippet to reset the global graph and global session.
with warnings.catch_warnings():
warnings.simplefilter('ignore')
tf.reset_default_graph()
try:
sess.close()
except:
pass
sess = tf.InteractiveSession()
Explanation: This Stan example shows how one would implement LMER in a style closer to TFP, i.e., by directly specifying the probabilistic model.
6 HLM In TF Probability
In this section we will use low-level TensorFlow Probability primitives (Distributions) to specify our Hierarchical Linear Model as well as fit the unknown parameters.
End of explanation
inv_scale_transform = lambda y: np.log(y) # Not using TF here.
fwd_scale_transform = tf.exp
Explanation: 6.1 Specify Model
In this section we specify the radon linear mixed-effect model using TFP primitives. To do this, we specify two functions which produce two TFP distributions:
- make_weights_prior: A multivariate Normal prior for the random weights (which are multiplied by $\log(\text{UraniumPPM}_{c_i})$ to compute the linear predictor).
- make_log_radon_likelihood: A batch of Normal distributions over each observed $\log(\text{Radon}_i)$ dependent variable.
Since we will be fitting the parameters of each of these distributions we must use TF variables (i.e., tf.get_variable). However, since we wish to use unconstrained optimization we must find a way to constrain real values to achieve the necessary semantics, e.g., positives which represent standard deviations.
End of explanation
def _make_weights_prior(num_counties, dtype):
Returns a `len(log_uranium_ppm)` batch of univariate Normal.
raw_prior_scale = tf.get_variable(
name='raw_prior_scale',
initializer=np.array(inv_scale_transform(1.), dtype=dtype))
return tfp.distributions.Independent(
tfp.distributions.Normal(
loc=tf.zeros(num_counties, dtype=dtype),
scale=fwd_scale_transform(raw_prior_scale)),
reinterpreted_batch_ndims=1)
make_weights_prior = tf.make_template(
name_='make_weights_prior', func_=_make_weights_prior)
Explanation: The following function constructs our prior, $p(\beta|\sigma_C)$ where $\beta$ denotes the random-effect weights and $\sigma_C$ the standard deviation.
We use tf.make_template to ensure that the first call to this function instantiates the TF variables it uses and all subsequent calls reuse the variable's current value.
End of explanation
def _make_log_radon_likelihood(random_effect_weights, floor, county,
log_county_uranium_ppm, init_log_radon_stddev):
raw_likelihood_scale = tf.get_variable(
name='raw_likelihood_scale',
initializer=np.array(
inv_scale_transform(init_log_radon_stddev), dtype=dtype))
fixed_effect_weights = tf.get_variable(
name='fixed_effect_weights', initializer=np.array([0., 1.], dtype=dtype))
fixed_effects = fixed_effect_weights[0] + fixed_effect_weights[1] * floor
random_effects = tf.gather(
random_effect_weights * log_county_uranium_ppm,
indices=tf.to_int32(county),
axis=-1)
linear_predictor = fixed_effects + random_effects
return tfp.distributions.Normal(
loc=linear_predictor, scale=fwd_scale_transform(raw_likelihood_scale))
make_log_radon_likelihood = tf.make_template(
name_='make_log_radon_likelihood', func_=_make_log_radon_likelihood)
Explanation: The following function constructs our likelihood, $p(y|x,\omega,\beta,\sigma_N)$ where $y,x$ denote response and evidence, $\omega,\beta$ denote fixed- and random-effect weights, and $\sigma_N$ the standard deviation.
Here again we use tf.make_template to ensure the TF variables are reused across calls.
End of explanation
def joint_log_prob(random_effect_weights, log_radon, floor, county,
log_county_uranium_ppm, dtype):
num_counties = len(log_county_uranium_ppm)
rv_weights = make_weights_prior(num_counties, dtype)
rv_radon = make_log_radon_likelihood(
random_effect_weights,
floor,
county,
log_county_uranium_ppm,
init_log_radon_stddev=radon.log_radon.values.std())
return (rv_weights.log_prob(random_effect_weights)
+ tf.reduce_sum(rv_radon.log_prob(log_radon), axis=-1))
Explanation: Finally we use the prior and likelihood generators to construct the joint log-density.
End of explanation
# Specify unnormalized posterior.
dtype = np.float32
log_county_uranium_ppm = radon[
['county', 'log_uranium_ppm']].drop_duplicates().values[:, 1]
log_county_uranium_ppm = log_county_uranium_ppm.astype(dtype)
def unnormalized_posterior_log_prob(random_effect_weights):
return joint_log_prob(
random_effect_weights=random_effect_weights,
log_radon=dtype(radon.log_radon.values),
floor=dtype(radon.floor.values),
county=np.int32(radon.county.values),
log_county_uranium_ppm=log_county_uranium_ppm,
dtype=dtype)
Explanation: 6.2 Training (Stochastic Approximation of Expectation Maximization)
To fit our linear mixed-effect regression model, we will use a stochastic approximation version of the Expectation Maximization algorithm (SAEM). The basic idea is to use samples from the posterior to approximate the expected joint log-density (E-step). Then we find the parameters which maximize this calculation (M-step). Somewhat more concretely, the fixed-point iteration is given by:
$$\begin{align}
\text{E}[ \log p(x, Z | \theta) | \theta_0]
&\approx \frac{1}{M} \sum_{m=1}^M \log p(x, z_m | \theta), \quad Z_m\sim p(Z | x, \theta_0) && \text{E-step}\\
&=: Q_M(\theta, \theta_0) \\
\theta_0 &= \theta_0 - \eta \left.\nabla_\theta Q_M(\theta, \theta_0)\right|_{\theta=\theta_0} && \text{M-step}
\end{align}$$
where $x$ denotes evidence, $Z$ some latent variable which needs to be marginalized out, and $\theta,\theta_0$ possible parameterizations.
For a more thorough explanation, see Convergence of a stochastic approximation version of the EM algorithm by Bernard Delyon, Marc Lavielle, and Eric Moulines (Ann. Statist., 1999).
To compute the E-step, we need to sample from the posterior. Since our posterior is not easy to sample from, we use Hamiltonian Monte Carlo (HMC). HMC is a Markov chain Monte Carlo (MCMC) procedure which uses gradients (with respect to the state, not the parameters) of the unnormalized posterior log-density to propose new samples.
Specifying the unnormalized posterior log-density is simple--it is merely the joint log-density "pinned" at whatever we wish to condition on.
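To illustrate the fixed-point idea on something tiny, here is a self-contained toy that uses exact (conjugate) posterior draws for the E-step and the closed-form maximizer for the M-step; the actual implementation below replaces these with HMC draws and an Adam gradient step on the same kind of Monte Carlo objective. The model and all numbers are made up for illustration only:
```python
import numpy as np

# Toy stand-in for SAEM:  beta ~ Normal(0, sigma_c),  y_j ~ Normal(beta, 1).
# We alternate posterior draws of the latent beta with an update of sigma_c.
y = np.random.normal(2.0, 1.0, size=50)
sigma_c = 1.0                                    # initial parameter guess
for _ in range(200):
    # E-step: sample beta | y, sigma_c (exact conjugate posterior here, HMC in the notebook).
    post_var = 1.0 / (len(y) + 1.0 / sigma_c**2)
    post_mean = post_var * y.sum()
    beta_samples = np.random.normal(post_mean, np.sqrt(post_var), size=10)
    # M-step: maximize the Monte Carlo estimate of E[log p(y, beta | sigma_c)] over sigma_c
    # (closed form for a zero-mean Normal prior; a gradient step in the notebook).
    sigma_c = np.sqrt(np.mean(beta_samples**2))
```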
End of explanation
# Set-up E-step.
step_size = tf.get_variable(
'step_size',
initializer=np.array(0.2, dtype=dtype),
trainable=False)
hmc = tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
num_leapfrog_steps=2,
step_size=step_size,
step_size_update_fn=tfp.mcmc.make_simple_step_size_update_policy(
num_adaptation_steps=None),
state_gradients_are_stopped=True)
init_random_weights = tf.placeholder(dtype, shape=[len(log_county_uranium_ppm)])
posterior_random_weights, kernel_results = tfp.mcmc.sample_chain(
num_results=3,
num_burnin_steps=0,
num_steps_between_results=0,
current_state=init_random_weights,
kernel=hmc)
Explanation: We now complete the E-step setup by creating an HMC transition kernel.
Notes:
We use state_stop_gradient=Trueto prevent the M-step from backpropping through draws from the MCMC. (Recall, we needn't backprop through because our E-step is intentionally parameterized at the previous best known estimators.)
We use tf.placeholder so that when we eventually execute our TF graph, we can feed the previous iteration's random MCMC sample as the the next iteration's chain's value.
We use TFP's adaptive step_size heuristic, tfp.mcmc.hmc_step_size_update_fn.
End of explanation
# Set-up M-step.
loss = -tf.reduce_mean(kernel_results.accepted_results.target_log_prob)
global_step = tf.train.get_or_create_global_step()
learning_rate = tf.train.exponential_decay(
learning_rate=0.1,
global_step=global_step,
decay_steps=2,
decay_rate=0.99)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss, global_step=global_step)
Explanation: We now set-up the M-step. This is essentially the same as an optimization one might do in TF.
End of explanation
# Initialize all variables.
init_op = tf.initialize_all_variables()
# Grab variable handles for diagnostic purposes.
with tf.variable_scope('make_weights_prior', reuse=True):
prior_scale = fwd_scale_transform(tf.get_variable(
name='raw_prior_scale', dtype=dtype))
with tf.variable_scope('make_log_radon_likelihood', reuse=True):
likelihood_scale = fwd_scale_transform(tf.get_variable(
name='raw_likelihood_scale', dtype=dtype))
fixed_effect_weights = tf.get_variable(
name='fixed_effect_weights', dtype=dtype)
Explanation: We conclude with some housekeeping tasks. We must tell TF that all variables are initialized. We also create handles to our TF variables so we can print their values at each iteration of the procedure.
End of explanation
init_op.run()
w_ = np.zeros([len(log_county_uranium_ppm)], dtype=dtype)
%%time
maxiter = int(1500)
num_accepted = 0
num_drawn = 0
for i in range(maxiter):
[
_,
global_step_,
loss_,
posterior_random_weights_,
kernel_results_,
step_size_,
prior_scale_,
likelihood_scale_,
fixed_effect_weights_,
] = sess.run([
train_op,
global_step,
loss,
posterior_random_weights,
kernel_results,
step_size,
prior_scale,
likelihood_scale,
fixed_effect_weights,
], feed_dict={init_random_weights: w_})
w_ = posterior_random_weights_[-1, :]
num_accepted += kernel_results_.is_accepted.sum()
num_drawn += kernel_results_.is_accepted.size
acceptance_rate = num_accepted / num_drawn
if i % 100 == 0 or i == maxiter - 1:
print('global_step:{:>4} loss:{: 9.3f} acceptance:{:.4f} '
'step_size:{:.4f} prior_scale:{:.4f} likelihood_scale:{:.4f} '
'fixed_effect_weights:{}'.format(
global_step_, loss_.mean(), acceptance_rate, step_size_,
prior_scale_, likelihood_scale_, fixed_effect_weights_))
Explanation: 6.3 Execute
In this section we execute our SAEM TF graph. The main trick here is to feed our last draw from the HMC kernel into the next iteration. This is achieved through our use of feed_dict in the sess.run call.
End of explanation
%%time
posterior_random_weights_final, kernel_results_final = tfp.mcmc.sample_chain(
num_results=int(15e3),
num_burnin_steps=int(1e3),
current_state=init_random_weights,
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=unnormalized_posterior_log_prob,
num_leapfrog_steps=2,
step_size=step_size))
[
posterior_random_weights_final_,
kernel_results_final_,
] = sess.run([
posterior_random_weights_final,
kernel_results_final,
], feed_dict={init_random_weights: w_})
print('prior_scale: ', prior_scale_)
print('likelihood_scale: ', likelihood_scale_)
print('fixed_effect_weights: ', fixed_effect_weights_)
print('acceptance rate final: ', kernel_results_final_.is_accepted.mean())
Explanation: Looks like after ~1500 steps, our estimates of the parameters have stabilized.
6.4 Results
Now that we've fit the parameters, let's generate a large number of posterior samples and study the results.
End of explanation
x = posterior_random_weights_final_ * log_county_uranium_ppm
I = county_freq[:, 0]
x = x[:, I]
cols = np.array(county_name)[I]
pw = pd.DataFrame(x)
pw.columns = cols
fig, ax = plt.subplots(figsize=(25, 4))
ax = pw.boxplot(rot=80, vert=True);
Explanation: We now construct a box and whisker diagram of the $\beta_c \log(\text{UraniumPPM}_c)$ random-effect. We'll order the random-effects by decreasing county frequency.
End of explanation
nrows = 17
ncols = 5
fig, ax = plt.subplots(nrows, ncols, figsize=(18, 21), sharey=True, sharex=True)
with warnings.catch_warnings():
warnings.simplefilter('ignore')
ii = -1
for r in range(nrows):
for c in range(ncols):
ii += 1
idx = county_freq[ii, 0]
sns.kdeplot(
posterior_random_weights_final_[:, idx] * log_county_uranium_ppm[idx],
color='blue',
alpha=.3,
shade=True,
label='TFP',
ax=ax[r][c])
sns.kdeplot(
posterior_random_weights_final_stan[:, idx] *
log_county_uranium_ppm[idx],
color='red',
alpha=.3,
shade=True,
label='Stan/rstanarm',
ax=ax[r][c])
sns.kdeplot(
posterior_random_weights_lme4_final_[:, idx] *
log_county_uranium_ppm[idx],
color='#F4B400',
alpha=.7,
shade=False,
label='R/lme4',
ax=ax[r][c])
ax[r][c].vlines(
posterior_random_weights_lme4[idx] * log_county_uranium_ppm[idx],
0,
5,
color='#F4B400',
linestyle='--')
ax[r][c].set_title(county_name[idx] + ' ({})'.format(idx), y=.7)
ax[r][c].set_ylim(0, 5)
ax[r][c].set_xlim(-1., 1.)
ax[r][c].get_yaxis().set_visible(False)
if ii == 2:
ax[r][c].legend(bbox_to_anchor=(1.4, 1.7), fontsize=20, ncol=3)
else:
ax[r][c].legend_.remove()
fig.subplots_adjust(wspace=0.03, hspace=0.1)
Explanation: From this box and whisker diagram, we observe that the variance of the county-level $\log(\text{UraniumPPM})$ random-effect increases as the county is less represented in the dataset. Intuitively this makes sense--we should be less certain about the impact of a certain county if we have less evidence for it.
7 Side-by-Side-by-Side Comparison
We now compare the results of all three procedures. To do this, we will compute non-parametric estimates of the posterior samples as generated by Stan and TFP. We will also compare against the parametric (approximate) estimates produced by R's lme4 package.
The following plot depicts the posterior distribution of each weight for each county in Minnesota. We show results for Stan (red), TFP (blue), and R's lme4 (orange). We shade the results from Stan and TFP, so we expect to see purple where the two agree. For simplicity we do not shade results from R. Each subplot represents a single county; subplots are ordered by descending county frequency in raster scan order (i.e., from left-to-right then top-to-bottom).
End of explanation |
8,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Deep Learning activation functions examined below
Step2: 1. ReLU
A great default choice for hidden layers. It is frequently used in industry and is almost always adequate to solve a problem.
Although this graph is not differentiable at z=0, it is not usually a problem in practice since an exact value of 0 is rare. The derivative at z=0 can usually be set to 0 or 1 without a problem.
Step3: 2. Leaky ReLU
Can be better than ReLU, but it is used less often in practice.
It provides a differentiable point at 0 to address the concern mentioned above.
Step4: 3. sigmoid
Almost never used except in the output layer when dealing with binary classification. Its most useful feature is that it guarantees an output between 0 and 1.
However, when z is very small or very large, the derivative of the sigmoid function is very small which can slow down gradient descent.
Step5: 4. tanh
This is essentially a shifted version of the sigmoid function which is usually strictly better. The mean of activations is closer to 0 which makes training on centered data easier. tanh is also a great default choice for hidden layers. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
#Create array of possible z values
z = np.linspace(-5,5,num=1000)
def draw_activation_plot(a,quadrants=2,y_ticks=[0],two_quad_y_lim=[0,5], four_quad_y_lim=[-1,1]):
Draws plot of activation function
Parameters
----------
a : Output of activation function over domain z.
quadrants: The number of quadrants in the plot (options: 2 or 4)
y_ticks: Ticks to show on the y-axis.
two_quad_y_lim: The limit of the y axis for 2 quadrant plots.
four_quad_y_lim: The limit of the y axis for 4 quadrant plots.
#Create figure and axis
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
#Move left axis
ax.spines['left'].set_position('center')
#Remove top and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
#Set x and y labels
plt.xlabel('z')
plt.ylabel('a')
#Set ticks
plt.xticks([])
plt.yticks(y_ticks)
#Set ylim
plt.ylim(two_quad_y_lim)
#4 Quadrant conditions
if quadrants==4:
#Move up bottom axis
ax.spines['bottom'].set_position('center')
#Move x and y labels for readability
ax.yaxis.set_label_coords(.48,.75)
ax.xaxis.set_label_coords(.75,.48)
##Set y_lim for 4 quadrant graphs
plt.ylim(four_quad_y_lim)
#Plot z vs. activation function
plt.plot(z,a);
Explanation: Deep Learning activation functions examined below:
1. ReLU
2. Leaky ReLU
3. sigmoid
4. tanh
Activation plotting preliminaries
End of explanation
relu = np.maximum(z,0)
draw_activation_plot(relu)
Explanation: 1. ReLU
A great default choice for hidden layers. It is frequently used in industry and is almost always adequate to solve a problem.
Although this graph is not differentiable at z=0, it is not usually a problem in practice since an exact value of 0 is rare. The derivative at z=0 can usually be set to 0 or 1 without a problem.
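As a side note (an addition, not part of the original notebook), the subgradient used in practice can be computed directly from the `z` array defined above; choosing 0 at exactly z=0 is the arbitrary-but-harmless choice mentioned here:
```python
relu_grad = (z > 0).astype(float)  # 1 where z > 0, 0 where z <= 0
draw_activation_plot(relu_grad, y_ticks=[0, 1], two_quad_y_lim=[0, 1.1])
```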
End of explanation
leaky_ReLU = np.maximum(0.01*z,z)
draw_activation_plot(leaky_ReLU)
Explanation: 2. Leaky ReLU
Can be better than ReLU, but it is used less often in practice.
It provides a differentiable point at 0 to address the concern mentioned above.
End of explanation
sigmoid = 1/(1+np.exp(-z))
draw_activation_plot(sigmoid,y_ticks=[0,1], two_quad_y_lim=[0,1])
Explanation: 3. sigmoid
Almost never used except in the output layer when dealing with binary classification. Its most useful feature is that it guarantees an output between 0 and 1.
However, when z is very small or very large, the derivative of the sigmoid function is very small which can slow down gradient descent.
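To see this saturation numerically, the derivative has the closed form $\sigma(z)(1-\sigma(z))$; the short check below reuses the `z` and `sigmoid` arrays defined in this notebook (an added illustration, not in the original):
```python
sigmoid_grad = sigmoid * (1 - sigmoid)   # peaks at 0.25 when z = 0, nearly 0 at z = -5 and z = 5
draw_activation_plot(sigmoid_grad, y_ticks=[0, 0.25], two_quad_y_lim=[0, 0.3])
```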
End of explanation
tanh = (np.exp(z)-np.exp(-z))/(np.exp(z)+np.exp(-z))
draw_activation_plot(tanh,y_ticks=[-1,0,1],quadrants=4)
Explanation: 4. tanh
This is essentially a shifted version of the sigmoid function which is usually strictly better. The mean of activations is closer to 0 which makes training on centered data easier. tanh is also a great default choice for hidden layers.
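tanh saturates in the same way as the sigmoid; its derivative $1-\tanh^2(z)$ can be plotted with the same helper (again an added illustration reusing the notebook's `z` and `tanh` arrays):
```python
tanh_grad = 1 - tanh**2   # close to 1 near z = 0, near 0 for large |z|
draw_activation_plot(tanh_grad, y_ticks=[0, 1], two_quad_y_lim=[0, 1.1])
```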
End of explanation |
8,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The code on the left is a hack to make this notebook two-column. I found it here
Step1: Extending built-in types
Note that multiple inheritance is constrained to at most one built-in type; you cannot extend several built-in types at once.
Step2: Slots
Slots can replace the mutable class-dictionary __dict__ by a fixed data structure. __slots__ disallows adding or removing attributes to a class.
Its main purpose is to avoid the need of lots of dictionaries when something simple like int is subclassed.
Basic slot-example | Python Code:
class classA():
pass
a = classA()
print type(a)
class classA(object):
pass
a = classA()
print type(a)
Explanation: The code on the left is a hack to make this notebook two-column. I found it here:
http://stackoverflow.com/questions/23370670/ipython-notebook-put-code-cells-into-columns
Object Oriented Programming (OOP) in Python
Welcome to the OOP-session of our Python course! This notebook introduces Python's OOP-concepts in a two column-style side by side with an equivalent formulation in Java respectively.
We chose Java here, as it is a popular OOP-enabled language that many of you are probably familiar with. Also, we found it helpful to compare Python-style to some other language and Java-OOP is still somewhat easier to read than OOP in C-ish languages.
Basic class with constructor etc
Python
Java
```python
class SpaceShip(SpaceObject):
bgColor = (0, 0, 0, 0)
def __init__(self, color, position):
super(SpaceShip, self).__init__(
position)
self.color = color
def fly(self, moveVector):
self.position += moveVector
@staticmethod
def get_bgColor():
return SpaceShip.bgColor
```
See https://julien.danjou.info/blog/2013/guide-python-static-class-abstract-methods for a guide about the decorators @staticmethod, @classmethod and abstractmethod.
Classmethods have no equivalent in Java. They are like special static methods that get the class as their initial, implicit argument:
```python
@classmethod
def get_bgColor(cls):
return cls.bgColor
```java
public class SpaceShip extends SpaceObject {
public static Color bgColor =
new Color(0, 0, 0, 0);
public Color color;
public SpaceShip(Color col, Vec3D pos) {
super(pos);
color = col;
}
public void fly(Vec3D move) {
position.add(move);
}
public static Color get_bgColor() {
return bgColor;
}
}
```
Abstract classes
```python
from abc import ABCMeta, abstractmethod
class Target():
__metaclass__ = ABCMeta
@abstractmethod
def hit(self, strength):
pass
```
```java
public interface Target {
public void hit(double strength);
}
//or
public abstract class Target {
public abstract void hit(double strength);
}
Multiple inheritance
```python
class SpaceShip(SpaceObject, Target):
def hit(self, strength):
print "Damn I'm hit."
```
```java
public class SpaceShip extends SpaceObject
implements Target {
public void hit(double strength) {
System.out.println("Damn I'm hit.");
}
}
```
```python
class Hitpoints(Target):
def __init__(self):
self.hitpoints = 100
def hit(self, strength):
self.hitpoints -= strength
class SpaceShip(SpaceObject, Hitpoints):
def __init__(self):
Hitpoints.__init__(self)
super(SpaceShip, self).__init__()
```
```java
public class HitpointSpaceShip extends
SpaceShip implements Hitpoints {
double hitpoints = 100.0;
}
public interface Hitpoints extends Target {
//Java 8 introduced default-implementations:
default void hit(double strength) {
((HitpointSpaceShip) this).hitpoints -=
strength;
}
}
```
Overloading operators
```python
class Fraction():
def __init__(self, numerator, denominator):
self.num = numerator
self.den = denominator
def __mul__(self, other):
return Fraction(self.num * other.num,
self.den * other.den)
```
Overview of magic methods:
http://www.rafekettler.com/magicmethods.html
Task:
Implement numerical and logical magic methods. (How many can you get done in the available time?)
Also consider the idea that numerator and denominator are functions, e.g. numpy.polynomial.polynomial. In this case Fraction shall also act as a function. How can you achieve this?
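One possible direction for the task, sketched as a subclass so it stays separate from the exercise (the method bodies below are illustrative assumptions, e.g. results are left unsimplified):
```python
class BetterFraction(Fraction):
    def __add__(self, other):
        return BetterFraction(self.num * other.den + other.num * self.den,
                              self.den * other.den)
    def __eq__(self, other):
        return self.num * other.den == other.num * self.den
    def __call__(self, x):
        # If num and den are callables (e.g. numpy polynomials), the fraction
        # itself becomes callable and evaluates their quotient at x.
        return self.num(x) / self.den(x)
```
Defining `__call__` is the usual way to make an instance act as a function, which is one answer to the polynomial question above.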
New-style classes
Classic class:
Original essay about new-style classes by Guido van Rossum:
https://www.python.org/download/releases/2.2.3/descrintro/
New-style class:
End of explanation
class evenInt(int):
def __init__(self, value):
if value % 2 != 0:
raise ValueError(str(value)+
' is not even')
super(evenInt, self).__init__(value)
a = evenInt(24)
b = 9
a+b
class defaultdict(dict):
def __init__(self, default=None):
dict.__init__(self)
self.default = default
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
return self.default
a = defaultdict(default=0.0)
print a
a['x1'] = 1
print a['x1']
print a
print a['x2']
a.y = '7'
print a.y
print a.__dict__
Explanation: Extending built-in types
Note that multiple inheritance is constrained to at most one built-in type; you cannot extend several built-in types at once.
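For instance, trying to inherit from two built-in types with incompatible instance layouts fails immediately (a quick illustration under standard CPython; the class name is made up):
```python
try:
    class IntAndStr(int, str):
        pass
except TypeError as e:
    print(e)  # "multiple bases have instance lay-out conflict"
```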
End of explanation
class defaultdict(dict):
__slots__ = ['default']
def __init__(self, default=None):
dict.__init__(self)
self.default = default
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
return self.default
a = defaultdict(default=0.0)
print a
a['x1'] = 1
print a['x1']
print a
print a['x2']
#a.y = '7'
#print a.y
#print a.__dict__
print a.__slots__
class defaultdict(dict):
__slots__ = ['default']
def __init__(self, default=None):
dict.__init__(self)
self.default = default
def __getitem__(self, key):
try:
return dict.__getitem__(self, key)
except KeyError:
return self.default
a = defaultdict(default=0.0)
print a
a['x1'] = 1
print a['x1']
print a
print a['x2']
#a.y = '7'
#print a.y
#print a.__dict__
print a.__slots__
a.__slots__.append('y')
print a.__slots__
a.y = '7'
print a.y
Explanation: Slots
Slots can replace the mutable class-dictionary __dict__ by a fixed data structure. __slots__ disallows adding or removing attributes to a class.
Its main purpose is to avoid the need of lots of dictionaries when something simple like int is subclassed.
Basic slot-example:
You cannot modify slots afterwards (well, you can, but it doesn't add the attribute):
End of explanation |
8,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Preprocessing using Dataflow </h1>
This notebook illustrates
Step1: Run the command again if you are getting oauth2client error.
Note
Step2: You may receive a UserWarning about the Apache Beam SDK for Python 3 as not being yet fully supported. Don't worry about this.
Step4: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
In this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed.
Note that after you launch this, the actual processing is happening on the cloud. Go to the Dataflow section of the GCP web console and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output
Step5: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step. | Python Code:
pip install --user apache-beam[gcp]
Explanation: <h1> Preprocessing using Dataflow </h1>
This notebook illustrates:
<ol>
<li> Creating datasets for Machine Learning using Dataflow
</ol>
<p>
While Pandas is fine for experimenting, for operationalization of your workflow, it is better to do preprocessing in Apache Beam. This will also help if you need to preprocess data in flight, since Apache Beam also allows for streaming.
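For orientation, every Beam pipeline has the same read, transform, write shape regardless of whether it runs locally or on Cloud Dataflow. The sketch below is a generic example of that shape, not part of this lab; the file paths and the lower-casing transform are placeholders:
```python
import apache_beam as beam

with beam.Pipeline('DirectRunner') as p:
    (p
     | 'read' >> beam.io.ReadFromText('gs://some-bucket/input.txt')   # placeholder input
     | 'transform' >> beam.Map(lambda line: line.lower())             # element-wise preprocessing
     | 'write' >> beam.io.WriteToText('gs://some-bucket/output'))     # sharded text output
```
The preprocessing job later in this notebook follows the same pattern, with a BigQuery query as the source and the to_csv transform in the middle.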
End of explanation
import apache_beam as beam
print(beam.__version__)
Explanation: Run the command again if you are getting an oauth2client error.
Note: You may ignore the following responses in the cell output above:
ERROR (in Red text) related to: witwidget-gpu, fairing
WARNING (in Yellow text) related to: hdfscli, hdfscli-avro, pbr, fastavro, gen_client
<b>Restart</b> the kernel before proceeding further (On the Notebook menu - <b>Kernel</b> - <b>Restart Kernel</b>).
Make sure the Dataflow API is enabled by going to this link. Ensure that you've installed Beam by importing it and printing the version number.
End of explanation
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
Explanation: You may receive a UserWarning about the Apache Beam SDK for Python 3 not yet being fully supported. Don't worry about this.
End of explanation
import datetime, os
def to_csv(rowdict):
import hashlib
import copy
# TODO #1:
# Pull columns from BQ and create line(s) of CSV input
CSV_COLUMNS = None
# Create synthetic data where we assume that no ultrasound has been performed
# and so we don't know sex of the baby. Let's assume that we can tell the difference
# between single and multiple, but that the errors rates in determining exact number
# is difficult in the absence of an ultrasound.
no_ultrasound = copy.deepcopy(rowdict)
w_ultrasound = copy.deepcopy(rowdict)
no_ultrasound['is_male'] = 'Unknown'
if rowdict['plurality'] > 1:
no_ultrasound['plurality'] = 'Multiple(2+)'
else:
no_ultrasound['plurality'] = 'Single(1)'
# Change the plurality column to strings
w_ultrasound['plurality'] = ['Single(1)', 'Twins(2)', 'Triplets(3)', 'Quadruplets(4)', 'Quintuplets(5)'][rowdict['plurality'] - 1]
# Write out two rows for each input row, one with ultrasound and one without
for result in [no_ultrasound, w_ultrasound]:
data = ','.join([str(result[k]) if k in result else 'None' for k in CSV_COLUMNS])
key = hashlib.sha224(data.encode('utf-8')).hexdigest() # hash the columns to form a key
yield str('{},{}'.format(data, key))
def preprocess(in_test_mode):
import shutil, os, subprocess
job_name = 'preprocess-babyweight-features' + '-' + datetime.datetime.now().strftime('%y%m%d-%H%M%S')
if in_test_mode:
print('Launching local job ... hang on')
OUTPUT_DIR = './preproc'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
os.makedirs(OUTPUT_DIR)
else:
print('Launching Dataflow job {} ... hang on'.format(job_name))
OUTPUT_DIR = 'gs://{0}/babyweight/preproc/'.format(BUCKET)
try:
subprocess.check_call('gsutil -m rm -r {}'.format(OUTPUT_DIR).split())
except:
pass
options = {
'staging_location': os.path.join(OUTPUT_DIR, 'tmp', 'staging'),
'temp_location': os.path.join(OUTPUT_DIR, 'tmp'),
'job_name': job_name,
'project': PROJECT,
'region': REGION,
'teardown_policy': 'TEARDOWN_ALWAYS',
'no_save_main_session': True,
'num_workers': 4,
'max_num_workers': 5
}
opts = beam.pipeline.PipelineOptions(flags = [], **options)
if in_test_mode:
RUNNER = 'DirectRunner'
else:
RUNNER = 'DataflowRunner'
p = beam.Pipeline(RUNNER, options = opts)
query =
SELECT
weight_pounds,
is_male,
mother_age,
plurality,
gestation_weeks,
FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth
FROM
publicdata.samples.natality
WHERE year > 2000
AND weight_pounds > 0
AND mother_age > 0
AND plurality > 0
AND gestation_weeks > 0
AND month > 0
if in_test_mode:
query = query + ' LIMIT 100'
for step in ['train', 'eval']:
if step == 'train':
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) < 3'.format(query)
else:
selquery = 'SELECT * FROM ({}) WHERE ABS(MOD(hashmonth, 4)) = 3'.format(query)
(p
## TODO Task #2: Modify the Apache Beam pipeline such that the first part of the pipe reads the data from BigQuery
| '{}_read'.format(step) >> None
| '{}_csv'.format(step) >> beam.FlatMap(to_csv)
| '{}_out'.format(step) >> beam.io.Write(beam.io.WriteToText(os.path.join(OUTPUT_DIR, '{}.csv'.format(step))))
)
job = p.run()
if in_test_mode:
job.wait_until_finish()
print("Done!")
# TODO Task #3: Once you have verified that the files produced locally are correct, change in_test_mode to False
# to execute this in Cloud Dataflow
preprocess(in_test_mode = True)
Explanation: <h2> Create ML dataset using Dataflow </h2>
Let's use Cloud Dataflow to read in the BigQuery data, do some preprocessing, and write it out as CSV files.
In this case, I want to do some preprocessing, modifying data so that we can simulate what is known if no ultrasound has been performed.
Note that after you launch this, the actual processing is happening on the cloud. Go to the GCP webconsole to the Dataflow section and monitor the running job. It took about 20 minutes for me.
<p>
If you wish to continue without doing this step, you can copy my preprocessed output:
<pre>
gsutil -m cp -r gs://cloud-training-demos/babyweight/preproc gs://your-bucket/
</pre>
But if you do this, you also have to use my TensorFlow model since yours might expect the fields in a different order
End of explanation
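As a hedged reference (illustrative guesses, not necessarily the official lab solution): TODO #1 presumably lists the columns selected by the BigQuery query, and in the older Beam SDK this lab targets, TODO Task #2 is typically expressed as beam.io.Read(beam.io.BigQuerySource(query=selquery, use_standard_sql=True)). A tiny standalone check of what one emitted CSV line would look like with those columns:
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks'.split(',')
row = {'weight_pounds': 7.5, 'is_male': 'Unknown', 'mother_age': 29,
       'plurality': 'Single(1)', 'gestation_weeks': 39}
# Mirrors the join performed inside to_csv above.
print(','.join(str(row[k]) if k in row else 'None' for k in CSV_COLUMNS))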
%%bash
gsutil ls gs://${BUCKET}/babyweight/preproc/*-00000*
Explanation: The above step will take 20+ minutes. Go to the GCP web console, navigate to the Dataflow section and <b>wait for the job to finish</b> before you run the following step.
End of explanation |
8,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.ML101.6
Step1: PCA is performed using linear combinations of the original features
using a truncated Singular Value Decomposition of the matrix X so
as to project the data onto a base of the top singular vectors.
If the number of retained components is 2 or 3, PCA can be used
to visualize the dataset.
Step2: Once fitted, the pca model exposes the singular vectors in the components_ attribute
Step3: Other attributes are available as well
Step4: Let us project the iris dataset along those first two dimensions
Step5: PCA normalizes and whitens the data, which means that the data
is now centered on both components with unit variance
Step6: Furthermore, the sample components no longer carry any linear correlation
Step7: We can visualize the projection using pylab
Step8: Note that this projection was determined without any information about the
labels (represented by the colors)
Step9: This is a 2-dimensional dataset embedded in three dimensions, but it is embedded
in such a way that PCA cannot discover the underlying data orientation
Step10: Manifold learning algorithms, however, available in the sklearn.manifold
submodule, are able to recover the underlying 2-dimensional manifold
Step11: Exercise
Step12: Solution | Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
Explanation: 2A.ML101.6: Unsupervised Learning: Dimensionality Reduction and Visualization
Unsupervised learning is interested in situations in which X is available, but not y: data without labels. A typical use case is to find hiden structure in the data.
Source: Course on machine learning with scikit-learn by Gaël Varoquaux
Dimensionality Reduction: PCA
Dimensionality reduction is the task of deriving a set of new
artificial features that is smaller than the original feature
set while retaining most of the variance of the original data.
Here we'll use a common but powerful dimensionality reduction
technique called Principal Component Analysis (PCA).
We'll perform PCA on the iris dataset that we saw before:
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=2, whiten=True)
pca.fit(X)
Explanation: PCA is performed using linear combinations of the original features
using a truncated Singular Value Decomposition of the matrix X so
as to project the data onto a base of the top singular vectors.
If the number of retained components is 2 or 3, PCA can be used
to visualize the dataset.
End of explanation
pca.components_
Explanation: Once fitted, the pca model exposes the singular vectors in the components_ attribute:
End of explanation
pca.explained_variance_ratio_
pca.explained_variance_ratio_.sum()
Explanation: Other attributes are available as well:
End of explanation
X_pca = pca.transform(X)
Explanation: Let us project the iris dataset along those first two dimensions:
End of explanation
X_pca.mean(axis=0)
X_pca.std(axis=0)
Explanation: PCA normalizes and whitens the data, which means that the data
is now centered on both components with unit variance:
End of explanation
import numpy as np
np.corrcoef(X_pca.T)
Explanation: Furthermore, the sample components no longer carry any linear correlation:
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
target_ids = range(len(iris.target_names))
plt.figure()
for i, c, label in zip(target_ids, 'rgbcmykw', iris.target_names):
plt.scatter(X_pca[y == i, 0], X_pca[y == i, 1],
c=c, label=label)
plt.legend();
Explanation: We can visualize the projection using pylab
End of explanation
from sklearn.datasets import make_s_curve
X, y = make_s_curve(n_samples=1000)
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], X[:, 2], c=y)
ax.view_init(10, -60)
Explanation: Note that this projection was determined without any information about the
labels (represented by the colors): this is the sense in which the learning
is unsupervised. Nevertheless, we see that the projection gives us insight
into the distribution of the different flowers in parameter space: notably,
iris setosa is much more distinct than the other two species.
Note also that the default implementation of PCA computes the
singular value decomposition (SVD) of the full
data matrix, which is not scalable when both n_samples and
n_features are big (more than a few thousand).
If you are interested in a number of components that is much
smaller than both n_samples and n_features, consider using
sklearn.decomposition.RandomizedPCA instead.
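(A hedged aside, not part of the original course material: recent scikit-learn releases fold RandomizedPCA into PCA itself via the svd_solver argument. A quick self-contained illustration:)
import numpy as np
from sklearn.decomposition import PCA
X_big = np.random.RandomState(0).rand(2000, 500)    # synthetic "large-ish" matrix
rpca = PCA(n_components=10, svd_solver='randomized', random_state=0)
print(rpca.fit_transform(X_big).shape)               # (2000, 10)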
Manifold Learning
One weakness of PCA is that it cannot detect non-linear features. A set
of algorithms known as Manifold Learning have been developed to address
this deficiency. A canonical dataset used in Manifold learning is the
S-curve, which we briefly saw in an earlier section:
End of explanation
X_pca = PCA(n_components=2).fit_transform(X)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y);
Explanation: This is a 2-dimensional dataset embedded in three dimensions, but it is embedded
in such a way that PCA cannot discover the underlying data orientation:
End of explanation
from sklearn.manifold import LocallyLinearEmbedding, Isomap
lle = LocallyLinearEmbedding(n_neighbors=15, n_components=2, method='modified')
X_lle = lle.fit_transform(X)
plt.scatter(X_lle[:, 0], X_lle[:, 1], c=y);
iso = Isomap(n_neighbors=15, n_components=2)
X_iso = iso.fit_transform(X)
plt.scatter(X_iso[:, 0], X_iso[:, 1], c=y);
Explanation: Manifold learning algorithms, however, available in the sklearn.manifold
submodule, are able to recover the underlying 2-dimensional manifold:
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
# ...
Explanation: Exercise: Dimension reduction of digits
Apply PCA, LocallyLinearEmbedding, and Isomap to project the data to two dimensions.
Which visualization technique separates the classes most cleanly?
End of explanation
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap, LocallyLinearEmbedding
plt.figure(figsize=(14, 4))
for i, est in enumerate([PCA(n_components=2, whiten=True),
Isomap(n_components=2, n_neighbors=10),
LocallyLinearEmbedding(n_components=2, n_neighbors=10, method='modified')]):
plt.subplot(131 + i)
projection = est.fit_transform(digits.data)
plt.scatter(projection[:, 0], projection[:, 1], c=digits.target)
plt.title(est.__class__.__name__)
Explanation: Solution:
End of explanation |
8,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<center> Introduction to Spark In-memory Computing via Python PySpark </center>
Step1: Airlines Data
Spark SQL
- Spark module for structured data processing
- provide more information about the structure of both the data and the computation being performed for additional optimization
- execute SQL queries written using either a basic SQL syntax or HiveQL
DataFrame
- distributed collection of data organized into named columns
- conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood
- can be constructed from a wide array of sources such as
Step2: You can interact with a DataFrame via SQLContext using SQL statements by registering the DataFrame as a table
Step3: How many unique airlines are there?
Step4: Calculate how many flights were completed by each carrier over time
Step5: How do you display full carrier names?
Step6: What is the averaged departure delay time for each airline? | Python Code:
import sys
import os
sys.path.insert(0, '/usr/hdp/2.6.0.3-8/spark2/python')
sys.path.insert(0, '/usr/hdp/2.6.0.3-8/spark2/python/lib/py4j-0.10.4-src.zip')
os.environ['SPARK_HOME'] = '/usr/hdp/2.6.0.3-8/spark2/'
os.environ['SPARK_CONF_DIR'] = '/etc/hadoop/synced_conf/spark2/'
os.environ['PYSPARK_PYTHON'] = '/software/anaconda3/4.2.0/bin/python'
import pyspark
conf = pyspark.SparkConf()
conf.setMaster("yarn")
conf.set("spark.driver.memory","4g")
conf.set("spark.executor.memory","60g")
conf.set("spark.num.executors","3")
conf.set("spark.executor.cores","12")
sc = pyspark.SparkContext(conf=conf)
Explanation: <center> Introduction to Spark In-memory Computing via Python PySpark </center>
End of explanation
sqlContext = pyspark.SQLContext(sc)
sqlContext
airlines = sqlContext.read.format("com.databricks.spark.csv")\
.option("header", "true")\
.option("inferschema", "true")\
.load("/repository/airlines/data/")\
.cache()
%%time
airlines.count()
%%time
airlines.count()
airlines.printSchema()
Explanation: Airlines Data
Spark SQL
- Spark module for structured data processing
- provide more information about the structure of both the data and the computation being performed for additional optimization
- execute SQL queries written using either a basic SQL syntax or HiveQL
DataFrame
- distributed collection of data organized into named columns
- conceptually equivalent to a table in a relational database or a data frame in R/Python, but with richer optimizations under the hood
- can be constructed from a wide array of sources such as: structured data files, tables in Hive, external databases, or existing RDDs.
End of explanation
airlines.registerTempTable("airlines")
Explanation: You can interact with a DataFrame via SQLContext using SQL statements by registering the DataFrame as a table
End of explanation
uniqueAirline = sqlContext.sql("SELECT DISTINCT UniqueCarrier \
FROM airlines")
uniqueAirline.show()
Explanation: How many unique airlines are there?
End of explanation
%%time
carrierFlightCount = sqlContext.sql("SELECT UniqueCarrier, COUNT(UniqueCarrier) AS FlightCount \
FROM airlines GROUP BY UniqueCarrier")
carrierFlightCount.show()
Explanation: Calculate how many flights were completed by each carrier over time
End of explanation
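A hedged alternative sketch (using the same airlines DataFrame created above): the identical aggregation expressed with the DataFrame API instead of a SQL string.
carrierFlightCount_df = airlines.groupBy("UniqueCarrier").count()
carrierFlightCount_df.show()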
carriers = sqlContext.read.format("com.databricks.spark.csv")\
.option("header", "true")\
.option("inferschema", "true")\
.load("/repository/airlines/metadata/carriers.csv")\
.cache()
carriers.registerTempTable("carriers")
carriers.printSchema()
%%time
carrierFlightCountFullName = sqlContext.sql("SELECT c.Description, a.UniqueCarrier, COUNT(a.UniqueCarrier) AS FlightCount \
FROM airlines AS a \
INNER JOIN carriers AS c \
ON c.Code = a.UniqueCarrier \
GROUP BY a.UniqueCarrier, c.Description \
ORDER BY a.UniqueCarrier")
carrierFlightCountFullName.show()
Explanation: How do you display full carrier names?
End of explanation
%%time
avgDepartureDelay = sqlContext.sql("SELECT FIRST(c.Description), FIRST(a.UniqueCarrier), AVG(a.DepDelay) AS AvgDepDelay \
FROM airlines AS a \
INNER JOIN carriers AS c \
ON c.Code = a.UniqueCarrier \
GROUP BY a.UniqueCarrier \
ORDER BY a.UniqueCarrier")
avgDepartureDelay.show()
airlines.unpersist()
sc.stop()
Explanation: What is the averaged departure delay time for each airline?
End of explanation |
8,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Quantum Approximate Optimization Algorithm for MAX-CUT
2018/6/6
Step1: The cost and driver Hamiltonians corresponding to the barbell graph are stored in QAOA object fields in the form of lists of PauliSums.
Step2: The identity term above is not necessary to the computation since global phase rotations on the wavefunction don't change the expectation value. We include it here purely as a demonstration. The cost function printed above is the negative of the traditional Max Cut operator. This is because QAOA is formulated as the maximization of the cost operator but the VQE algorithm in the pyQuil library performs a minimization.
QAOA requires the construction of a state parameterized by β and γ rotation angles
Step3: The above printout is a Quil program that can be executed on a QVM. QAOA has 2 methods of operation
Step4: get_angles() returns optimal β and γ angles. To view the probs of the state, you can call QAOA.probabilities(t) where t is a concatenation of β and γ, in that order. probabilities(t) takes β & γ, reconstructs the wave function, and returns their coefficients. A modified version can be used to print off the probabilities
Step5: As expected the bipartitioning of a graph with a single edge connecting 2 nodes corresponds to the state ${ \rvert 01 \rangle, \rvert 10 \rangle }$
oh... cool it actually does. Great, so far so good.
In this trivial example the QAOA finds angles that construct a distribution peaked around the 2 degenerate solutions.
MAXCUT on larger graphs and alternative optimizers
Larger graph instances and different classical optimizers can be used with the QAOA. Here we consider a 6-node ring of disagrees (eh?). For an even number ring graph, the ring of disagrees corresponds to the antiferromagnet ground state –– ie
Step6: This graph could be passed to the maxcut_qaoa method, and a QAOA instance with the correct driver & cost Hamiltonian could be generated as before. In order to demonstrate the more general approach, along with some VQE options, we'll construct the cost and driver Hamiltonians directly with PauliSum and PauliTerm objects. To do this we parse the edges and nodes of the graph to construct the relevant operators
Step7: We'll also construct the initial state and pass this to the QAOA object. By default, QAOA uses the $\rvert + \rangle$ tensor product state. In other notebooks we'll demonstrate that you can use the driver_ref optional argument to pass a different starting state for QAOA.
Step8: We're now ready to instantiate the QAOA object! 🎉
Step9: We're interested in the bit strings returned from the QAOA algorithm. The get_angles() routine calls the VQE algorithm to find the best angles. We can then manually query the bit strings by rerunning the program and sampling many outputs.
Step10: We can see that the first 2 most frequently sampled strings are the alternating solutions to the ring graph (well damn, they are). Since we have access to the wave function, we can go one step further and view the probability distribution over the bit strings produced by our $p = 1$ circuit.
Step11: For larger graphs the probability of sampling the correct string could be significantly smaller, though still peaked around the solution. Therefore we'd want to increase the probability of sampling the solution relative to any other string. To do this we simply increase the number of steps $p$ in the algorithm. We might want to bootstrap the algorithm with angles from a lower number of steps. We can pass initial angles to the solver as optional arguments
Step12: We could also change the optimizer passed down to VQE via the QAOA interface. Let's say we want to use BFGS or another optimizer that can be wrapped in Python. Simply pass it to QAOA via the minimizer, minimizer_args, and minimizer_kwargs keywords
import numpy as np
from grove.pyqaoa.maxcut_qaoa import maxcut_qaoa
from functools import reduce
barbell = [(0,1)] # graph is defined by a list of edges. Edge weights are assumed to be 1.0
steps = 1 # evolution path length ebtween the ref and cost hamiltonians
inst = maxcut_qaoa(barbell, steps=steps) # initializing problem instance
Explanation: The Quantum Approximate Optimization Algorithm for MAX-CUT
2018/6/6:7 –– WNixalo. Code along of QAOA_overview_maxcut.ipynb
I have no idea what I'm doing
The following is a step-by-step guide to running QAOA on the MaxCut problem. In the debut paper on QAOA (arXiv: 1411.4028), Farhi, Goldstone, and Gutmann demonstrate that the lowest order approximation of the algorithm produced an approximation ratio of 0.6946 for the MaxCut problem on 3-regular graphs. You can use this notebook to set up an arbitrary graph for MaxCut and solve it using the QAOA algorithm via the Rigetti Forest service.
pyQAOA is a python library that implements the QAOA. It uses the PauliTerm and PauliSum objects from the pyQuil library for expressing the cost and driver Hamiltonians. These operators are used to create a parametric pyQuil program and passed to the variational quantum eigensolver (VQE) in Grove. VQE calls the Rigetti Forest QVM to execute the Quil program that prepares the angle-parameterized state. There are multiple ways to construct the MAX-CUT problem for the QAOA library. We include a method that accepts a graph and returns a QAOA instance where the cost and driver Hamiltonians have been constructed. The graph is either an undirected NetworkX graph or a list of tuples where each tuple represents an edge between a pair of nodes.
We start by demonstrating the QAOA algorithm with the simplest instance of MAXX-CUT –– partitioning the nodes on a barbell graph. The barbell graph corresponds to a single edge connecting 2 nodes. The solution is a partitioning of the nodes into different sets ${0, 1}$.
End of explanation
cost_list, ref_list = inst.cost_ham, inst.ref_ham
cost_ham = reduce(lambda x,y: x + y, cost_list)
ref_ham = reduce(lambda x,y: x + y, ref_list)
print(cost_ham)
print(ref_ham)
Explanation: The cost and driver Hamiltonians corresponding to the barbell graph are stored in QAOA object fields in the form of lists of PauliSums.
End of explanation
param_prog = inst.get_parameterized_program()
prog = param_prog([1.2, 4.2])
print(prog)
Explanation: The identity term above is not necessary to the computation since global phase rotations on the wavefunction don't change the expectation value. We include it here purely as a demonstration. The cost function printed above is the negative of the traditional Max Cut operator. This is because QAOA is formulated as the maximization of the cost operator but the VQE algorithm in the pyQuil library performs a minimization.
QAOA requires the construction of a state parameterized by β and γ rotation angles:
<img src="https://render.githubusercontent.com/render/math?math=%5Cbegin%7Balign%7D%0A%5Cmid%20%5Cbeta%2C%20%5Cgamma%20%5Crangle%20%3D%20%5Cprod_%7Bp%3D0%7D%5E%7B%5Cmathrm%7Bsteps%7D%7D%5Cleft%28%20U%28%5Chat%7BH%7D_%7B%5Cmathrm%7Bdrive%7D%7D%2C%20%5Cbeta_%7Bp%7D%29U%28%5Chat%7BH%7D_%7B%5Cmathrm%7BMAXCUT%7D%7D%2C%20%5Cgamma_%7Bp%7D%29%20%5Cright%29%5E%7B%5Cmathrm%7Bsteps%7D%7D%20%28%5Cmid%20%2B%5Crangle_%7BN-1%7D%5Cotimes%5Cmid%20%2B%20%5Crangle_%7BN-2%7D...%5Cotimes%5Cmid%20%2B%20%5Crangle_%7B0%7D%29.%0A%5Cend%7Balign%7D&mode=display">
The unitaries $U(\hat{H}_{\mathrm{drive}}, \beta_{p})$ and $U(\hat{H}_{\mathrm{MAXCUT}}, \gamma_{p})$ are exponentiations of the driver and cost Hamiltonians, respectively.
<img src="https://render.githubusercontent.com/render/math?math=%5Cbegin%7Balign%7D%0AU%28%5Chat%7BH%7D_%7B%5Cmathrm%7Bref%7D%7D%2C%20%5Cbeta_%7Bp%7D%29%20%3D%20e%5E%7B-i%20%5Cbeta_%7Bp%7D%20%5Chat%7BH%7D_%7Bdrive%7D%7D%20%5C%5C%0AU%28%5Chat%7BH%7D_%7B%5Cmathrm%7BMAXCUT%7D%7D%2C%20%5Cgamma_%7Bp%7D%29%20%3D%20e%5E%7B-i%20%5Cgamma_%7Bp%7D%20%5Chat%7BH%7D_%7B%5Cmathrm%7BMAXCUT%7D%7D%7D%0A%5Cend%7Balign%7D&mode=display">
The QAOA algorithm relies on many constructions of a wavefunction via parameterized Quil and measurements on all qubits to evaluate an expectation value. In order to avoid needless classical computation, QAOA constructs this parametric program once at the beginning of the calculation and then uses this same program object throughout the computation. This is accomplished using the ParametricProgram object in pyQuil, which allows us to slot in a symbolic value for a parameterized gate.
The parameterized program object can be accessed through the QAOA method get_parameterized_program(). Calling this on an instantiated QAOA object returns a closure with a precomputed set of Quil Programs (wtf does that mean). Calling this closure with the parameters β and γ returns the circuit that has parameterized rotations (what).
End of explanation
betas, gammas = inst.get_angles()
print(betas, gammas)
Explanation: The above printout is a Quil program that can be executed on a QVM. QAOA has 2 methods of operation:
1. pre-computing the angles of rotation classically and using the quantum computer to measure expectation values through repeated experiments and,
2. installing a classical optimization loop on top of step 1 to optimally determine the angles.
Mode 2 is known as the Variational Quantum Eigensolver Algorithm. The QAOA object wraps the instantiation of the VQE algorithm with a get_angles() call.
End of explanation
param_prog = inst.get_parameterized_program()
t = np.hstack((betas, gammas))
prog = param_prog(t)
wf = inst.qvm.wavefunction(prog)
wf = wf.amplitudes
for i in range(2**inst.n_qubits):
print(inst.states[i], np.conj(wf[i])*wf[i])
Explanation: get_angles() returns optimal β and γ angles. To view the probs of the state, you can call QAOA.probabilities(t) where t is a concatenation of β and γ, in that order. probabilities(t) takes β & γ, reconstructs the wave function, and returns their coefficients. A modified version can be used to print off the probabilities:
End of explanation
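A hedged shortcut (using the same inst, betas, and gammas as above): the QAOA.probabilities() helper mentioned in the text returns the same squared amplitudes without reconstructing the wavefunction by hand.
t = np.hstack((betas, gammas))
print(inst.probabilities(t))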
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
from grove.pyqaoa.qaoa import QAOA
import pyquil.quil as pq
from pyquil.paulis import PauliSum, PauliTerm
from pyquil.gates import H
from pyquil.api import QVMConnection
# wonder why they call it "CXN". pyQuil docs called it "quantum_simulator"
CXN = QVMConnection() # heh, CXN --> "connection"?
# define 6-qubit ring
ring_size = 6
graph = nx.Graph()
for i in range(ring_size):
graph.add_edge(i, (i + 1) % ring_size)
nx.draw_circular(graph, node_color="#6CAFB7")
Explanation: As expected the bipartitioning of a graph with a single edge connecting 2 nodes corresponds to the state ${ \rvert 01 \rangle, \rvert 10 \rangle }$
oh... cool it actually does. Great, so far so good.
In this trivial example the QAOA finds angles that construct a distribution peaked around the 2 degenerate solutions.
MAXCUT on larger graphs and alternative optimizers
Larger graph instances and different classical optimizers can be used with the QAOA. Here we consider a 6-node ring of disagrees (eh?). For an even number ring graph, the ring of disagrees corresponds to the antiferromagnet ground state –– ie: alternating spin-up spin-down.
do we have to analogize everything to a physical QM phenom or is that just narrative-momentum?
End of explanation
cost_operators = []
driver_operators = []
for i,j in graph.edges():
cost_operators.append(PauliTerm("Z", i, 0.5) *
PauliTerm("Z", j) +
PauliTerm("I", 0, -0.5))
for i in graph.nodes():
driver_operators.append(PauliSum([PauliTerm("X", i, 1.0)]))
Explanation: This graph could be passed to the maxcut_qaoa method, and a QAOA instance with the correct driver & cost Hamiltonian could be generated as before. In order to demonstrate the more general approach, along with some VQE options, we'll construct the cost and driver Hamiltonians directly with PauliSum and PauliTerm objects. To do this we parse the edges and nodes of the graph to construct the relevant operators:
<img src="https://render.githubusercontent.com/render/math?math=%5Cbegin%7Balign%7D%0A%5Chat%7BH%7D_%7B%5Cmathrm%7Bcost%7D%7D%20%3D%20%5Csum_%7B%5Clangle%20i%2C%20j%5Crangle%20%5Cin%20E%7D%5Cfrac%7B%5Csigma_%7Bi%7D%5E%7Bz%7D%5Csigma_%7Bj%7D%5E%7Bz%7D%20-%201%7D%7B2%7D%20%5C%5C%0A%5Chat%7BH%7D_%7B%5Cmathrm%7Bdrive%7D%7D%20%3D%20%5Csum_%7Bi%7D%5E%7Bn%7D-%5Csigma_%7Bi%7D%5E%7Bx%7D%0A%5Cend%7Balign%7D&mode=display">
where $\langle i, j \rangle \in E$ refers to the pairs of nodes that form the edges of the graph.
End of explanation
prog = pq.Program()
for i in graph.nodes():
prog.inst(H(i))
Explanation: We'll also construct the initial state and pass this to the QAOA object. By default, QAOA uses the $\rvert + \rangle$ tensor product state. In other notebooks we'll demonstrate that you can use the driver_ref optional argument to pass a different starting state for QAOA.
End of explanation
ring_cut_inst = QAOA(CXN, len(graph.nodes()), steps=1, ref_hamiltonian=driver_operators,
cost_ham=cost_operators, driver_ref=prog, store_basis=True,
rand_seed=42)
betas, gammas = ring_cut_inst.get_angles()
Explanation: We're now ready to instantiate the QAOA object! 🎉
End of explanation
from collections import Counter
# get the parameterized program
param_prog = ring_cut_inst.get_parameterized_program()
sampling_prog = param_prog(np.hstack((betas, gammas)))
# use the run_and)measure QVM API to prepare a circuit and then measure on the qubits
bitstring_samples = CXN.run_and_measure(quil_program=sampling_prog, qubits=range(len(graph.nodes())), trials=1000)
bitstring_tuples = map(tuple, bitstring_samples)
# aggregate the statistics
freq = Counter(bitstring_tuples)
most_frequent_bit_string = max(freq, key=lambda x: freq[x])
print(freq) ##for f in freq.items(): (print(f"{f[0]}, {f[1]}"))
print(f"The most frequently sampled string is {most_frequent_bit_string}")
Explanation: We're interested in the bit strings returned from the QAOA algorithm. The get_angles() routine calls the VQE algorithm to find the best angles. We can then manually query the bit strings by rerunning the program and sampling many outputs.
End of explanation
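A hedged sanity check (not in the original code-along): score the most frequent sample by how many graph edges it cuts; for the 6-node ring the optimal cut value is 6.
cut_value = sum(1 for i, j in graph.edges()
                if most_frequent_bit_string[i] != most_frequent_bit_string[j])
print("Edges cut by the most frequent sample:", cut_value)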
# plot strings!
n_qubits = len(graph.nodes())
def plot(inst, probs):
probs = probs.real
states = inst.states
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_xlabel("state",fontsize=20)
ax.set_ylabel("Probability",fontsize=20)
ax.set_xlim([0, 2**n_qubits])
rec = ax.bar(range(2**n_qubits), probs[:,0],)
num_states = [0,
int("".join(str(x) for x in [0,1] * (n_qubits//2)), 2),
int("".join(str(x) for x in [1,0] * (n_qubits//2)), 2),
2**n_qubits - 1]
ax.set_xticks(num_states)
ax.set_xticklabels(map(lambda x: inst.states[x], num_states), rotation=90)
plt.grid(True)
plt.tight_layout()
plt.show()
t = np.hstack((betas, gammas))
probs = ring_cut_inst.probabilities(t)
plot(ring_cut_inst, probs)
Explanation: We can see that the first 2 most frequently sampled strings are the alternating solutions to the ring graph (well damn, they are). Since we have access to the wave function, we can go one step further and view the probability distribution over the bit strings produced by our $p = 1$ circuit.
End of explanation
# get the angles from the last run
beta = ring_cut_inst.betas
gamma = ring_cut_inst.gammas
# form new beta/gamma angles from the old angles
betas = np.hstack((beta[0]/3, beta[0]*2/3))
gammas = np.hstack((gamma[0]/3, gamma[0]*2/3))
# set up a new QAOA instance
ring_cut_inst_2 = QAOA(CXN, len(graph.nodes()), steps=2,
ref_hamiltonian=driver_operators, cost_ham=cost_operators,
driver_ref=prog, store_basis=True,
init_betas=betas, init_gammas=gammas)
# run VQE to determine the optimal angles
betas, gammas = ring_cut_inst_2.get_angles()
t = np.hstack((betas, gammas))
probs = ring_cut_inst_2.probabilities(t)
plot(ring_cut_inst_2, probs)
Explanation: For larger graphs the probability of sampling the correct string could be significantly smaller, though still peaked around the solution. Therefore we'd want to increase the probability of sampling the solution relative to any other string. To do this we simply increase the number of steps $p$ in the algorithm. We might want to bootstrap the algorithm with angles from a lower number of steps. We can pass initial angles to the solver as optional arguments:
End of explanation
from scipy.optimize import fmin_bfgs
ring_cut_inst_3 = QAOA(CXN, len(graph.nodes()), steps=3,
ref_hamiltonian=driver_operators, cost_ham=cost_operators,
driver_ref=prog, store_basis=True,
minimizer=fmin_bfgs, minimizer_kwargs={'gtol':1.0e-3},
rand_seed=42)
betas,gammas = ring_cut_inst_3.get_angles()
t = np.hstack((betas, gammas))
probs = ring_cut_inst_3.probabilities(t)
plot(ring_cut_inst_3, probs)
Explanation: We could also change the optimizer passed down to VQE via the QAOA interface. Let's say we want to use BFGS or another optimizer that can be wrapped in Python. Simply pass it to QAOA via the minimizer, minimizer_args, and minimizer_kwargs keywords:
End of explanation |
8,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LEARNING APPLICATIONS
In this notebook we will take a look at some indicative applications of machine learning techniques. We will cover content from learning.py, for chapter 18 from Stuart Russell's and Peter Norvig's book Artificial Intelligence
Step1: CONTENTS
MNIST Handwritten Digits
Loading and Visualising
Testing
MNIST Fashion
MNIST HANDWRITTEN DIGITS CLASSIFICATION
The MNIST Digits database, available from this page, is a large database of handwritten digits that is commonly used for training and testing/validating in Machine learning.
The dataset has 60,000 training images each of size 28x28 pixels with labels and 10,000 testing images of size 28x28 pixels with labels.
In this section, we will use this database to compare performances of different learning algorithms.
It is estimated that humans have an error rate of about 0.2% on this problem. Let's see how our algorithms perform!
NOTE
Step2: Check the shape of these NumPy arrays to make sure we have loaded the database correctly.
Each 28x28 pixel image is flattened to a 784x1 array and we should have 60,000 of them in training data. Similarly, we should have 10,000 of those 784x1 arrays in testing data.
Step3: Visualizing Data
To get a better understanding of the dataset, let's visualize some random images for each class from training and testing datasets.
Step4: Let's have a look at the average of all the images of training and testing data.
Step5: Testing
Now, let us convert this raw data into DataSet.examples to run our algorithms defined in learning.py. Every image is represented by 784 numbers (28x28 pixels) and we append them with its label or class to make them work with our implementations in learning module.
Step6: Now, we will initialize a DataSet with our training examples, so we can use it in our algorithms.
Step7: Moving forward we can use MNIST_DataSet to test our algorithms.
Plurality Learner
The Plurality Learner always returns the class with the most training samples. In this case, 1.
Step8: It is obvious that this Learner is not very efficient. In fact, it will guess correctly in only 1135/10000 of the samples, roughly 10%. It is very fast though, so it might have its use as a quick first guess.
Naive-Bayes
The Naive-Bayes classifier is an improvement over the Plurality Learner. It is much more accurate, but a lot slower.
Step9: To make sure that the output we got is correct, let's plot that image along with its label.
Step10: k-Nearest Neighbors
We will now try to classify a random image from the dataset using the kNN classifier.
Step11: To make sure that the output we got is correct, let's plot that image along with its label.
Step12: Hurray! We've got it correct. Don't worry if our algorithm predicted a wrong class. With this technique we have only ~97% accuracy on this dataset.
MNIST FASHION
Another dataset in the same format is MNIST Fashion. This dataset, instead of digits contains types of apparel (t-shirts, trousers and others). As with the Digits dataset, it is split into training and testing images, with labels from 0 to 9 for each of the ten types of apparel present in the dataset. The below table shows what each label means
Step13: Visualizing Data
Let's visualize some random images for each class, both for the training and testing sections
Step14: Let's now see how many times each class appears in the training and testing data
Step15: Unlike Digits, in Fashion all items appear the same number of times.
Testing
We will now begin testing our algorithms on Fashion.
First, we need to convert the dataset into the learning-compatible Dataset class
Step16: Plurality Learner
The Plurality Learner always returns the class with the most training samples. In this case, 9.
Step17: Naive-Bayes
The Naive-Bayes classifier is an improvement over the Plurality Learner. It is much more accurate, but a lot slower.
Step18: Let's check if we got the right output.
Step19: K-Nearest Neighbors
With the dataset in hand, we will first test how the kNN algorithm performs
Step20: The output is 1, which means the item at index 211 is a trouser. Let's see if the prediction is correct | Python Code:
from learning import *
from notebook import *
Explanation: LEARNING APPLICATIONS
In this notebook we will take a look at some indicative applications of machine learning techniques. We will cover content from learning.py, for chapter 18 from Stuart Russell's and Peter Norvig's book Artificial Intelligence: A Modern Approach. Execute the cell below to get started:
End of explanation
train_img, train_lbl, test_img, test_lbl = load_MNIST()
Explanation: CONTENTS
MNIST Handwritten Digits
Loading and Visualising
Testing
MNIST Fashion
MNIST HANDWRITTEN DIGITS CLASSIFICATION
The MNIST Digits database, available from this page, is a large database of handwritten digits that is commonly used for training and testing/validating in Machine learning.
The dataset has 60,000 training images each of size 28x28 pixels with labels and 10,000 testing images of size 28x28 pixels with labels.
In this section, we will use this database to compare performances of different learning algorithms.
It is estimated that humans have an error rate of about 0.2% on this problem. Let's see how our algorithms perform!
NOTE: We will be using external libraries to load and visualize the dataset smoothly (numpy for loading and matplotlib for visualization). You do not need previous experience with these libraries to follow along.
Loading MNIST Digits Data
Let's start by loading MNIST data into numpy arrays.
The function load_MNIST() loads MNIST data from files saved in aima-data/MNIST. It returns four numpy arrays that we are going to use to train and classify hand-written digits in various learning approaches.
End of explanation
print("Training images size:", train_img.shape)
print("Training labels size:", train_lbl.shape)
print("Testing images size:", test_img.shape)
print("Testing labels size:", test_lbl.shape)
Explanation: Check the shape of these NumPy arrays to make sure we have loaded the database correctly.
Each 28x28 pixel image is flattened to a 784x1 array and we should have 60,000 of them in training data. Similarly, we should have 10,000 of those 784x1 arrays in testing data.
End of explanation
# takes 5-10 seconds to execute this
show_MNIST(train_lbl, train_img)
# takes 5-10 seconds to execute this
show_MNIST(test_lbl, test_img)
Explanation: Visualizing Data
To get a better understanding of the dataset, let's visualize some random images for each class from training and testing datasets.
End of explanation
print("Average of all images in training dataset.")
show_ave_MNIST(train_lbl, train_img)
print("Average of all images in testing dataset.")
show_ave_MNIST(test_lbl, test_img)
Explanation: Let's have a look at the average of all the images of training and testing data.
End of explanation
print(train_img.shape, train_lbl.shape)
temp_train_lbl = train_lbl.reshape((60000,1))
training_examples = np.hstack((train_img, temp_train_lbl))
print(training_examples.shape)
Explanation: Testing
Now, let us convert this raw data into DataSet.examples to run our algorithms defined in learning.py. Every image is represented by 784 numbers (28x28 pixels) and we append them with its label or class to make them work with our implementations in learning module.
End of explanation
# takes ~10 seconds to execute this
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
Explanation: Now, we will initialize a DataSet with our training examples, so we can use it in our algorithms.
End of explanation
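A quick hedged sanity check (not in the original notebook): each example should hold the 784 pixel values plus the appended class label, i.e. 785 numbers.
print(len(MNIST_DataSet.examples), len(MNIST_DataSet.examples[0]))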
pL = PluralityLearner(MNIST_DataSet)
print(pL(177))
%matplotlib inline
print("Actual class of test image:", test_lbl[177])
plt.imshow(test_img[177].reshape((28,28)))
Explanation: Moving forward we can use MNIST_DataSet to test our algorithms.
Plurality Learner
The Plurality Learner always returns the class with the most training samples. In this case, 1.
End of explanation
# takes ~45 Secs. to execute this
nBD = NaiveBayesLearner(MNIST_DataSet, continuous = False)
print(nBD(test_img[0]))
Explanation: It is obvious that this Learner is not very efficient. In fact, it will guess correctly in only 1135/10000 of the samples, roughly 10%. It is very fast though, so it might have its use as a quick first guess.
Naive-Bayes
The Naive-Bayes classifier is an improvement over the Plurality Learner. It is much more accurate, but a lot slower.
End of explanation
%matplotlib inline
print("Actual class of test image:", test_lbl[0])
plt.imshow(test_img[0].reshape((28,28)))
Explanation: To make sure that the output we got is correct, let's plot that image along with its label.
End of explanation
# takes ~20 Secs. to execute this
kNN = NearestNeighborLearner(MNIST_DataSet, k=3)
print(kNN(test_img[211]))
Explanation: k-Nearest Neighbors
We will now try to classify a random image from the dataset using the kNN classifier.
End of explanation
%matplotlib inline
print("Actual class of test image:", test_lbl[211])
plt.imshow(test_img[211].reshape((28,28)))
Explanation: To make sure that the output we got is correct, let's plot that image along with its label.
End of explanation
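A hedged spot check (not from the original notebook) before moving on: estimate kNN accuracy on a handful of random test images; the next section quotes ~97%, and even a few predictions take a while with this pure-Python learner.
idx = np.random.RandomState(0).choice(len(test_img), size=10, replace=False)
correct = sum(kNN(test_img[i]) == test_lbl[i] for i in idx)
print("kNN accuracy on 10 random test images:", correct / 10)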
train_img, train_lbl, test_img, test_lbl = load_MNIST(fashion=True)
Explanation: Hurray! We've got it correct. Don't worry if our algorithm predicted a wrong class. With this technique we have only ~97% accuracy on this dataset.
MNIST FASHION
Another dataset in the same format is MNIST Fashion. This dataset, instead of digits contains types of apparel (t-shirts, trousers and others). As with the Digits dataset, it is split into training and testing images, with labels from 0 to 9 for each of the ten types of apparel present in the dataset. The below table shows what each label means:
| Label | Description |
| ----- | ----------- |
| 0 | T-shirt/top |
| 1 | Trouser |
| 2 | Pullover |
| 3 | Dress |
| 4 | Coat |
| 5 | Sandal |
| 6 | Shirt |
| 7 | Sneaker |
| 8 | Bag |
| 9 | Ankle boot |
Since both the MNIST datasets follow the same format, the code we wrote for loading and visualizing the Digits dataset will work for Fashion too! The only difference is that we have to let the functions know which dataset we're using, with the fashion argument. Let's start by loading the training and testing images:
End of explanation
# takes 5-10 seconds to execute this
show_MNIST(train_lbl, train_img, fashion=True)
# takes 5-10 seconds to execute this
show_MNIST(test_lbl, test_img, fashion=True)
Explanation: Visualizing Data
Let's visualize some random images for each class, both for the training and testing sections:
End of explanation
print("Average of all images in training dataset.")
show_ave_MNIST(train_lbl, train_img, fashion=True)
print("Average of all images in testing dataset.")
show_ave_MNIST(test_lbl, test_img, fashion=True)
Explanation: Let's now see how many times each class appears in the training and testing data:
End of explanation
temp_train_lbl = train_lbl.reshape((60000,1))
training_examples = np.hstack((train_img, temp_train_lbl))
# takes ~10 seconds to execute this
MNIST_DataSet = DataSet(examples=training_examples, distance=manhattan_distance)
Explanation: Unlike Digits, in Fashion all items appear the same number of times.
Testing
We will now begin testing our algorithms on Fashion.
First, we need to convert the dataset into the learning-compatible Dataset class:
End of explanation
pL = PluralityLearner(MNIST_DataSet)
print(pL(177))
%matplotlib inline
print("Actual class of test image:", test_lbl[177])
plt.imshow(test_img[177].reshape((28,28)))
Explanation: Plurality Learner
The Plurality Learner always returns the class with the most training samples. In this case, 9.
End of explanation
# takes ~45 Secs. to execute this
nBD = NaiveBayesLearner(MNIST_DataSet, continuous = False)
print(nBD(test_img[24]))
Explanation: Naive-Bayes
The Naive-Bayes classifier is an improvement over the Plurality Learner. It is much more accurate, but a lot slower.
End of explanation
%matplotlib inline
print("Actual class of test image:", test_lbl[24])
plt.imshow(test_img[24].reshape((28,28)))
Explanation: Let's check if we got the right output.
End of explanation
# takes ~20 Secs. to execute this
kNN = NearestNeighborLearner(MNIST_DataSet, k=3)
print(kNN(test_img[211]))
Explanation: K-Nearest Neighbors
With the dataset in hand, we will first test how the kNN algorithm performs:
End of explanation
%matplotlib inline
print("Actual class of test image:", test_lbl[211])
plt.imshow(test_img[211].reshape((28,28)))
Explanation: The output is 1, which means the item at index 211 is a trouser. Let's see if the prediction is correct:
End of explanation |
8,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Environment Loading Examples
In this notebook, we walk through a few examples of how to load and interact with the Construction environments, both using discrete relative actions with graph observations, and using continuous absolute actions with image observations.
For further details, see the Documentation.
Step3: Installation
From the root of this repository, run pip install .[demos] to install both dm_construction and extra dependencies needed to run this notebook.
Install ffmpeg
Step4: Supported tasks and wrappers
These are the tasks that can be loaded
Step5: These are the wrappers that can be applied to the tasks
Step6: Discrete Relative Actions and Graph Observations
The discrete_relative wrapper exposes graph-based discrete relative actions and graph observations. Here is an example of loading the Covering task with this wrapper and taking some actions in the environment.
Because the observations are graphs, they are not easy to visualize. Instead, we will grab image observations from the underlying task environment and display those.
Step7: Continuous Absolute Actions and Image Observations
The continuous_absolute wrapper exposes continuous absolute actions and image observations. Here is an example of loading the Covering task with this wrapper, taking some actions in the environment, and displaying the resulting observations.
Step11: Creating Videos
Because physics is simulated for many timesteps in between each action, it can be nice to grab all of those intermediate frames (the observations exposed to the agent are only the final frame of the simulation). To do this, we will enable a special observer camera in the underlying Unity environment and then pull frames from this to create a video. | Python Code:
# Copyright 2020 DeepMind Technologies Limited
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Environment Loading Examples
In this notebook, we walk through a few examples of how to load and interact with the Construction environments, both using discrete relative actions with graph observations, and using continuous absolute actions with image observations.
For further details, see the Documentation.
End of explanation
import base64
import tempfile
import textwrap
import dm_construction
from IPython.display import HTML
from matplotlib import animation
import matplotlib.pyplot as plt
import numpy as np
## Helper Functions
def show_rgb_observation(rgb_observation, size=5):
Plots a RGB observation, as returned from a Unity environment.
Args:
rgb_observation: numpy array of pixels
size: size to set the figure
_, ax = plt.subplots(figsize=(size, size))
ax.imshow(rgb_observation)
ax.set_axis_off()
ax.set_aspect("equal")
def print_status(env_, time_step_):
Prints reward and episode termination information.
status = "r={}, p={}".format(time_step_.reward, time_step_.discount)
if time_step_.discount == 0:
status += " (reason: {})".format(env_.termination_reason)
print(status)
Explanation: Installation
From the root of this repository, run pip install .[demos] to install both dm_construction and extra dependencies needed to run this notebook.
Install ffmpeg:
Cross-platform with Anaconda: conda install ffmpeg
Ubuntu: apt-get install ffmpeg
Mac with Homebrew: brew install ffmpeg
End of explanation
dm_construction.ALL_TASKS
Explanation: Supported tasks and wrappers
These are the tasks that can be loaded:
End of explanation
dm_construction.ALL_WRAPPERS
Explanation: These are the wrappers that can be applied to the tasks:
End of explanation
# Create the environment.
env = dm_construction.get_environment(
"covering", wrapper_type="discrete_relative", difficulty=0)
env.action_spec()
env.observation_spec()
np.random.seed(1234)
time_step = env.reset()
# Get the image observation from the task environment.
show_rgb_observation(env.core_env.last_time_step.observation["RGB"])
# Pick an edge.
obs = time_step.observation
moved_block = 0
base_block = 7
edge_index = list(
zip(obs["senders"], obs["receivers"])).index((moved_block, base_block))
# Construct the action.
action = {
"Index": edge_index,
"sticky": 1, # make it sticky
"x_action": 0, # place it to the left
}
time_step = env.step(action)
print_status(env, time_step)
# Get the image observation from the task environment.
show_rgb_observation(env.core_env.last_time_step.observation["RGB"])
# Pick an edge.
obs = time_step.observation
moved_block = 3
base_block = len(obs["nodes"]) - 1
edge_index = list(
zip(obs["senders"], obs["receivers"])).index((moved_block, base_block))
# Construct the action.
action = {
"Index": edge_index,
"sticky": 0, # make it not sticky
"x_action": 12, # place it to the right
}
time_step = env.step(action)
print_status(env, time_step)
# Get the image observation from the task environment.
show_rgb_observation(env.core_env.last_time_step.observation["RGB"])
# Stop the environment.
env.close()
Explanation: Discrete Relative Actions and Graph Observations
The discrete_relative wrapper exposes graph-based discrete relative actions and graph observations. Here is an example of loading the Covering task with this wrapper and taking some actions in the environment.
Because the observations are graphs, they are not easy to visualize. Instead, we will grab image observations from the underlying task environment and display those.
End of explanation
# Create the environment.
env = dm_construction.get_environment(
"covering", wrapper_type="continuous_absolute", difficulty=0)
env.action_spec()
env.observation_spec()
# Start a new episode.
np.random.seed(1234)
time_step = env.reset()
# This is the same observation that agents will see.
show_rgb_observation(time_step.observation)
# Place a block a bit to the right.
action = {
"Horizontal": 1,
"Vertical": 1,
"Sticky": -1,
"Selector": 0
}
time_step = env.step(action)
show_rgb_observation(time_step.observation)
print_status(env, time_step)
# Place another block in the center.
action = {
"Horizontal": 0,
"Vertical": 2,
"Sticky": 1,
"Selector": 0
}
time_step = env.step(action)
show_rgb_observation(time_step.observation)
print_status(env, time_step)
# Stop the environment.
env.close()
Explanation: Continuous Absolute Actions and Image Observations
The continuous_absolute wrapper exposes continuous absolute actions and image observations. Here is an example of loading the Covering task with this wrapper, taking some actions in the environment, and displaying the resulting observations.
End of explanation
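A hedged extra sketch (this assumes the continuous_absolute action spec is a dict of bounded dm_env specs exposing .minimum and .maximum, which is not stated explicitly above): drive a fresh environment with a few uniformly random actions.
env = dm_construction.get_environment(
    "covering", wrapper_type="continuous_absolute", difficulty=0)
def random_action(spec):
    # Sample each action dimension uniformly within its spec bounds.
    return {name: np.random.uniform(s.minimum, s.maximum)
            for name, s in spec.items()}
time_step = env.reset()
for _ in range(3):
    time_step = env.step(random_action(env.action_spec()))
    print_status(env, time_step)
env.close()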
def get_environment(problem_type, wrapper_type="discrete_relative",
difficulty=0, curriculum_sample=False):
Gets the environment.
This function separately creates the unity environment and then passes it to
the environment factory. We do this so that we can add an observer to the
unity environment to get all frames from which we will create a video.
Args:
problem_type: the name of the task
wrapper_type: the name of the wrapper
difficulty: the difficulty level
curriculum_sample: whether to sample difficulty from [0, difficulty]
Returns:
env_: the environment
# Separately construct the Unity env, so we can enable the observer camera
# and set a higher resolution on it.
unity_env = dm_construction.get_unity_environment(
observer_width=600,
observer_height=600,
include_observer_camera=True,
max_simulation_substeps=50)
# Create the main environment by passing in the already-created Unity env.
env_ = dm_construction.get_environment(
problem_type, unity_env, wrapper_type=wrapper_type,
curriculum_sample=curriculum_sample, difficulty=difficulty)
# Create an observer to grab the frames from the observer camera.
env_.core_env.enable_frame_observer()
return env_
def make_video(frames_):
Creates a video from a given set of frames.
# Create the Matplotlib animation and save it to a temporary file.
with tempfile.NamedTemporaryFile(suffix=".mp4") as fh:
writer = animation.FFMpegWriter(fps=20)
fig = plt.figure(frameon=False, figsize=(10, 10))
ax = fig.add_axes([0, 0, 1, 1])
ax.axis("off")
ax.set_aspect("equal")
im = ax.imshow(np.zeros_like(frames_[0]), interpolation="none")
with writer.saving(fig, fh.name, 50):
for frame in frames_:
im.set_data(frame)
writer.grab_frame()
plt.close(fig)
# Read and encode the video to base64.
mp4 = open(fh.name, "rb").read()
data_url = "data:video/mp4;base64," + base64.b64encode(mp4).decode()
# Display the video in the notebook.
return HTML(textwrap.dedent(
<video controls>
<source src="{}" type="video/mp4">
</video>
.format(data_url).strip()))
# Create the environment.
env = get_environment("covering", wrapper_type="continuous_absolute")
# Reset the episode.
np.random.seed(1234)
time_step = env.reset()
frames = env.core_env.pop_observer_frames()
# Take an action.
action = {
"Horizontal": 0,
"Vertical": 5,
"Sticky": 0,
"Selector": 0
}
time_step = env.step(action)
print_status(env, time_step)
# Get all the intermediate frames.
frames.extend(env.core_env.pop_observer_frames())
# Stop the environment.
env.close()
# Display the results as a video. Here you can see the block falling from a
# large height and eventually colliding with an obstacle.
make_video(frames)
# Create the environment.
env = get_environment("marble_run", wrapper_type="continuous_absolute")
# Reset the episode.
np.random.seed(1234)
time_step = env.reset()
frames = env.core_env.pop_observer_frames()
# Take an action.
action = {
"Horizontal": 0,
"Vertical": 5,
"Sticky": 1,
"Selector": 0
}
time_step = env.step(action)
print_status(env, time_step)
# Get all the intermediate frames.
frames.extend(env.core_env.pop_observer_frames())
# Stop the environment.
env.close()
# Display the results as a video
make_video(frames)
Explanation: Creating Videos
Because physics is simulated for many timesteps in between each action, it can be nice to grab all of those intermediate frames (the observations exposed to the agent are only the final frame of the simulation). To do this, we will enable a special observer camera in the underlying Unity environment and then pull frames from this to create a video.
End of explanation |
8,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Making Python faster
This homework provides practice in making Python code faster. Note that we start with functions that already use idiomatic numpy (which are about two orders of magnitude faster than the pure Python versions).
Functions to optimize
Step3: Data set for classification
Step4: Using gradient descent for classification by logistic regression
Step6: 1. Rewrite the logistic function so it only makes one np.exp call. Compare the time of both versions with the input x given below using the %timeit magic. (10 points)
Step10: 2. (20 points) Use numba to compile the gradient descent function.
Use the @vectorize decorator to create a ufunc version of the logistic function and call this logistic_numba_cpu with function signatures of float64(float64). Create another function called logistic_numba_parallel by giving an extra argument to the decorator of target=parallel (5 points)
For each function, check that the answers are the same as with the original logistic function using np.testing.assert_array_almost_equal. Use %timeit to compare the three logistic functions (5 points)
Now use @jit to create a JIT_compiled version of the logistic and gd functions, calling them logistic_numba and gd_numba. Provide appropriate function signatures to the decorator in each case. (5 points)
Compare the two gradient descent functions gd and gd_numba for correctness and performance. (5 points)
Step14: 3. (30 points) Use cython to compile the gradient descent function.
Cythonize the logistic function as logistic_cython. Use the --annotate argument to the cython magic function to find slow regions. Compare accuracy and performance. The final performance should be comparable to the numba cpu version. (10 points)
Now cythonize the gd function as gd_cython. This function should use of the cythonized logistic_cython as a C function call. Compare accuracy and performance. The final performance should be comparable to the numba cpu version. (20 points)
Hints
Step15: 4. (40 points) Wrapping modules in C++.
Rewrite the logistic and gd functions in C++, using pybind11 to create Python wrappers. Compare accuracy and performance as usual. Replicate the plotted example using the C++ wrapped functions for logistic and gd
Writing a vectorized logistic function callable from both C++ and Python (10 points)
Writing the gd function callable from Python (25 points)
Checking accuracy, benchmarking and creating diagnostic plots (5 points)
Hints | Python Code:
def logistic(x):
"""Logistic function."""
return np.exp(x)/(1 + np.exp(x))
def gd(X, y, beta, alpha, niter):
"""Gradient descent algorithm."""
n, p = X.shape
Xt = X.T
for i in range(niter):
y_pred = logistic(X @ beta)
epsilon = y - y_pred
grad = Xt @ epsilon / n
beta += alpha * grad
return beta
x = np.linspace(-6, 6, 100)
plt.plot(x, logistic(x))
pass
Explanation: Making Python faster
This homework provides practice in making Python code faster. Note that we start with functions that already use idiomatic numpy (which are about two orders of magnitude faster than the pure Python versions).
Functions to optimize
End of explanation
n = 10000
p = 2
X, y = make_blobs(n_samples=n, n_features=p, centers=2, cluster_std=1.05, random_state=23)
X = np.c_[np.ones(len(X)), X]
y = y.astype('float')
Explanation: Data set for classification
End of explanation
# initial parameters
niter = 1000
α = 0.01
β = np.zeros(p+1)
# call gradient descent
β = gd(X, y, β, α, niter)
# assign labels to points based on prediction
y_pred = logistic(X @ β)
labels = y_pred > 0.5
# calculate separating plane
sep = (-β[0] - β[1] * X)/β[2]
plt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter')
plt.plot(X, sep, 'r-')
pass
Explanation: Using gradient descent for classification by logistic regression
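For reference, a brief added sketch of the update rule the gd function above implements, assuming the usual mean log-likelihood objective for logistic regression:
$$
\nabla_\beta \, \ell(\beta) = \frac{1}{n} X^T \left( y - \sigma(X\beta) \right),
\qquad
\beta \leftarrow \beta + \alpha \, \nabla_\beta \, \ell(\beta)
$$
Each iteration therefore moves $\beta$ up the likelihood gradient (equivalently, down the gradient of the negative log-likelihood).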
End of explanation
np.random.seed(123)
n = int(1e7)
x = np.random.normal(0, 1, n)
def logistic2(x):
"""Logistic function."""
return 1/(1 + np.exp(-x))
%timeit logistic(x)
%timeit logistic2(x)
Explanation: 1. Rewrite the logistic function so it only makes one np.exp call. Compare the time of both versions with the input x given below using the %timeit magic. (10 points)
End of explanation
@vectorize([float64(float64)], target='cpu')
def logistic_numba_cpu(x):
"""Logistic function."""
return 1/(1 + math.exp(-x))
@vectorize([float64(float64)], target='parallel')
def logistic_numba_parallel(x):
"""Logistic function."""
return 1/(1 + math.exp(-x))
np.testing.assert_array_almost_equal(logistic(x), logistic_numba_cpu(x))
np.testing.assert_array_almost_equal(logistic(x), logistic_numba_parallel(x))
%timeit logistic(x)
%timeit logistic_numba_cpu(x)
%timeit logistic_numba_parallel(x)
@jit(float64[:](float64[:]), nopython=True)
def logistic_numba(x):
return 1/(1 + np.exp(-x))
@jit(float64[:](float64[:,:], float64[:], float64[:], float64, int64), nopython=True)
def gd_numba(X, y, beta, alpha, niter):
"""Gradient descent algorithm."""
n, p = X.shape
Xt = X.T
for i in range(niter):
y_pred = logistic_numba(X @ beta)
epsilon = y - y_pred
grad = Xt @ epsilon / n
beta += alpha * grad
return beta
beta1 = gd(X, y, β, α, niter)
beta2 = gd_numba(X, y, β, α, niter)
np.testing.assert_almost_equal(beta1, beta2)
%timeit gd(X, y, β, α, niter)
%timeit gd_numba(X, y, β, α, niter)
# initial parameters
niter = 1000
α = 0.01
β = np.zeros(p+1)
# call gradient descent
β = gd_numba(X, y, β, α, niter)
# assign labels to points based on prediction
y_pred = logistic(X @ β)
labels = y_pred > 0.5
# calculate separating plane
sep = (-β[0] - β[1] * X)/β[2]
plt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter')
plt.plot(X, sep, 'r-')
pass
Explanation: 2. (20 points) Use numba to compile the gradient descent function.
Use the @vectorize decorator to create a ufunc version of the logistic function and call this logistic_numba_cpu with function signatures of float64(float64). Create another function called logistic_numba_parallel by giving an extra argument to the decorator of target=parallel (5 points)
For each function, check that the answers are the same as with the original logistic function using np.testing.assert_array_almost_equal. Use %timeit to compare the three logistic functions (5 points)
Now use @jit to create a JIT_compiled version of the logistic and gd functions, calling them logistic_numba and gd_numba. Provide appropriate function signatures to the decorator in each case. (5 points)
Compare the two gradient descent functions gd and gd_numba for correctness and performance. (5 points)
End of explanation
%%cython --annotate
import cython
import numpy as np
cimport numpy as np
from libc.math cimport exp
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
def logistic_cython(double[:] x):
"""Logistic function."""
cdef int i
cdef int n = x.shape[0]
cdef double [:] s = np.empty(n)
for i in range(n):
s[i] = 1.0/(1.0 + exp(-x[i]))
return s
np.testing.assert_array_almost_equal(logistic(x), logistic_cython(x))
%timeit logistic2(x)
%timeit logistic_cython(x)
%%cython --annotate
import cython
import numpy as np
cimport numpy as np
from libc.math cimport exp
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
cdef double[:] logistic_(double[:] x):
"""Logistic function."""
cdef int i
cdef int n = x.shape[0]
cdef double [:] s = np.empty(n)
for i in range(n):
s[i] = 1.0/(1.0 + exp(-x[i]))
return s
@cython.boundscheck(False)
@cython.wraparound(False)
@cython.cdivision(True)
def gd_cython(double[:, ::1] X, double[:] y, double[:] beta, double alpha, int niter):
"""Gradient descent algorithm."""
cdef int n = X.shape[0]
cdef int p = X.shape[1]
cdef double[:] eps = np.empty(n)
cdef double[:] y_pred = np.empty(n)
cdef double[:] grad = np.empty(p)
cdef int i, j, k
cdef double[:, :] Xt = X.T
for i in range(niter):
y_pred = logistic_(np.dot(X, beta))
for j in range(n):
eps[j] = y[j] - y_pred[j]
grad = np.dot(Xt, eps) / n
for k in range(p):
beta[k] += alpha * grad[k]
return beta
niter = 1000
alpha = 0.01
beta = np.random.random(X.shape[1])
beta1 = gd(X, y, β, α, niter)
beta2 = gd_cython(X, y, β, α, niter)
np.testing.assert_almost_equal(beta1, beta2)
%timeit gd(X, y, beta, alpha, niter)
%timeit gd_cython(X, y, beta, alpha, niter)
Explanation: 3. (30 points) Use cython to compile the gradient descent function.
Cythonize the logistic function as logistic_cython. Use the --annotate argument to the cython magic function to find slow regions. Compare accuracy and performance. The final performance should be comparable to the numba cpu version. (10 points)
Now cythonize the gd function as gd_cython. This function should use of the cythonized logistic_cython as a C function call. Compare accuracy and performance. The final performance should be comparable to the numba cpu version. (20 points)
Hints:
Give static types to all variables
Know how to use def, cdef and cpdef
Use Typed MemoryViews
Find out how to transpose a Typed MemoryView to store the transpose of X
Typed MemoryViews are not numpy arrays - you often have to write explicit loops to operate on them
Use the cython boundscheck, wraparound, and cdivision operators
End of explanation
import os
if not os.path.exists('./eigen'):
! git clone https://github.com/RLovelett/eigen.git
%%file wrap.cpp
<%
cfg['compiler_args'] = ['-std=c++11']
cfg['include_dirs'] = ['./eigen']
setup_pybind11(cfg)
%>
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>
#include <pybind11/eigen.h>
namespace py = pybind11;
Eigen::VectorXd logistic(Eigen::VectorXd x) {
return 1.0/(1.0 + exp((-x).array()));
}
Eigen::VectorXd gd(Eigen::MatrixXd X, Eigen::VectorXd y, Eigen::VectorXd beta, double alpha, int niter) {
int n = X.rows();
Eigen::VectorXd y_pred;
Eigen::VectorXd resid;
Eigen::VectorXd grad;
Eigen::MatrixXd Xt = X.transpose();
for (int i=0; i<niter; i++) {
y_pred = logistic(X * beta);
resid = y - y_pred;
grad = Xt * resid / n;
beta = beta + alpha * grad;
}
return beta;
}
PYBIND11_PLUGIN(wrap) {
py::module m("wrap", "pybind11 example plugin");
m.def("gd", &gd, "The gradient descent fucntion.");
m.def("logistic", &logistic, "The logistic fucntion.");
return m.ptr();
}
import cppimport
cppimport.force_rebuild()
funcs = cppimport.imp("wrap")
np.testing.assert_array_almost_equal(logistic(x), funcs.logistic(x))
%timeit logistic(x)
%timeit funcs.logistic(x)
β = np.array([0.0, 0.0, 0.0])
gd(X, y, β, α, niter)
β = np.array([0.0, 0.0, 0.0])
funcs.gd(X, y, β, α, niter)
%timeit gd(X, y, β, α, niter)
%timeit funcs.gd(X, y, β, α, niter)
# initial parameters
niter = 1000
α = 0.01
β = np.zeros(p+1)
# call gradient descent
β = funcs.gd(X, y, β, α, niter)
# assign labels to points based on prediction
y_pred = funcs.logistic(X @ β)
labels = y_pred > 0.5
# calculate separating plane
sep = (-β[0] - β[1] * X)/β[2]
plt.scatter(X[:, 1], X[:, 2], c=labels, cmap='winter')
plt.plot(X, sep, 'r-')
pass
Explanation: 4. (40 points) Wrapping modules in C++.
Rewrite the logistic and gd functions in C++, using pybind11 to create Python wrappers. Compare accuracy and performance as usual. Replicate the plotted example using the C++ wrapped functions for logistic and gd
Writing a vectorized logistic function callable from both C++ and Python (10 points)
Writing the gd function callable from Python (25 points)
Checking accuracy, benchmarking and creating diagnostic plots (5 points)
Hints:
Use the C++ Eigen library to do vector and matrix operations
When calling the exponential function, you have to use exp(m.array()) instead of exp(m) if you use an Eigen dynamic template.
Use cppimport to simplify the wrapping for Python
See pybind11 docs
See my examples for help
End of explanation |
8,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Look at encounter durations to check how long patients are in hospital, for both RRT & non-RRT encounters.
When this was first done, we did not use the checkin time from the checkin table, where appropriate. Fixed here.
Starts by looking at the durations for the data we used in the model, for RRT & non-RRT encounters, then takes a bigger-picture view to show subsets of the data.
Step2: Quick numbers
Step4: For admit_type_cd!='0' & encntr_type_class_cd='391'
Step7: Examining distribution of encounter durations (with loc_facility_cd)
Analyze the durations of the RRT event patients.
Step9: Let's look at durations for inpatients WITH RRTs from the Main Hospital where encounter_admit_type is not zero
Step11: Let's look at durations for inpatients WITHOUT RRTs from the Main Hospital where encounter_admit_type is not zero
Step12: Plot both together to see how encounter duration distributions are different
Step13: Even accounting for the hospital, inpatient status, and some admit_type_cd values, the durations are still quite different between RRT & non-RRT.
Trying some subset visualizations -- these show no difference
Step15: Despite controlling for patient parameters, patients with RRT events stay in the hospital longer than patients without RRT events.
Rerun previous EDA on hospital & patient types
Let's take a step back and look at the encounter table, for all hospitals and patient types [but using corrected time duration].
Step16: The notebook Probe_encounter_types_classes explores admit type, class types & counts
Step17: Group by facility
We want to pull from similar patient populations
Step18: Most of the results come from 633867, or The Main Hospital
Step19: Looks like these three locations (633867, 4382264, 4382273) have about the same distribution.
Appropriate test to verify this
Step20: From scipy documentation
Step21: Let's compare encounter duration histograms for patients with RRT & without RRT events, and see if there is a right subset of data to be selected for modeling
(There is)
Step23: Plot RRT & non-RRT with different codes | Python Code:
import pandas as pd
import numpy as np
from impala.util import as_pandas
# connect to impala
from impala.dbapi import connect
conn = connect(host="mycluster.domain.com", port=my_impala_port_number)
# Make sure we're pulling from the right location
cur = conn.cursor()
cur.execute('use my_db')
import matplotlib.pyplot as plt
%matplotlib notebook
plt.style.use('ggplot')
# Show tables to verify you're actually pulling from sandbox
cur.execute('SHOW TABLES')
cur.fetchall()
Explanation: Look at encounter durations to check how long patients are in hospital, for both RRT & non-RRT encounters.
When this was first done, we did not use the checkin time from the checkin table, where appropriate. Fixed here.
Starts by looking at the durations for the data we used in the model, for RRT & non-RRT encounters, then takes a bigger-picture view to show subsets of the data.
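A minimal added illustration of the corrected duration calculation used in the queries below; the table and column names are taken directly from those queries, and 3600000 converts milliseconds to hours:
duration_sql = """
SELECT enc.encntr_id
     , (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm)) / 3600000 AS diff_hours
FROM encounter enc
LEFT OUTER JOIN (
    SELECT ti.encntr_id, MIN(tc.checkin_dt_tm) AS checkin_dt_tm
    FROM tracking_item ti
    JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id
    GROUP BY ti.encntr_id
) tci ON tci.encntr_id = enc.encntr_id
"""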
End of explanation
query_TotalEncs = """
SELECT count(1)
FROM (
SELECT DISTINCT encntr_id
FROM encounter
WHERE encntr_complete_dt_tm < 4000000000000
AND loc_facility_cd = '633867'
) t;
"""
cur.execute(query_TotalEncs)
cur.fetchall()
Explanation: Quick numbers: # RRT events & total # encounters (for the main hospital)
For all patient & location types
End of explanation
query_TotalEncs = """
SELECT count(1)
FROM (
SELECT DISTINCT encntr_id
FROM encounter
WHERE encntr_complete_dt_tm < 4e12
AND loc_facility_cd = '633867'
AND admit_type_cd!='0'
AND encntr_type_class_cd='391'
) t;
"""
cur.execute(query_TotalEncs)
cur.fetchall()
Explanation: For admit_type_cd!='0' & encntr_type_class_cd='391'
End of explanation
query_count = """
SELECT count(*)
FROM (
SELECT DISTINCT ce.encntr_id
FROM clinical_event ce
INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id
WHERE ce.event_cd = '54411998'
AND ce.result_status_cd NOT IN ('31', '36')
AND ce.valid_until_dt_tm > 4e12
AND ce.event_class_cd not in ('654645')
AND enc.loc_facility_cd = '633867'
AND enc.encntr_complete_dt_tm < 4e12
AND enc.admit_type_cd!='0'
AND enc.encntr_type_class_cd='391'
) AS A ;
"""
cur.execute(query_count)
cur.fetchall()
query_count = """
SELECT count(*)
FROM (
SELECT DISTINCT encntr_id
FROM encounter enc
WHERE enc.loc_facility_cd = '633867'
AND enc.encntr_complete_dt_tm < 4e12
AND enc.admit_type_cd!='0'
AND enc.encntr_type_class_cd='391'
AND encntr_id NOT IN (
SELECT DISTINCT ce.encntr_id
FROM clinical_event ce
INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id
WHERE ce.event_cd = '54411998'
AND ce.result_status_cd NOT IN ('31', '36')
AND ce.valid_until_dt_tm > 4e12
AND ce.event_class_cd not in ('654645')
)
) AS A;
"""
cur.execute(query_count)
cur.fetchall()
Explanation: Examining distribution of encounter durations (with loc_facility_cd)
Analyze the durations of the RRT event patients.
End of explanation
query = """
SELECT
DISTINCT ce.encntr_id
, COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm) AS checkin_dt_tm
, enc.depart_dt_tm as depart_dt_tm
, (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours
, enc.reason_for_visit
, enc.admit_src_cd
, enc.admit_type_cd
FROM clinical_event ce
INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id
LEFT OUTER JOIN (
SELECT
ti.encntr_id AS encntr_id
, MIN(tc.checkin_dt_tm) AS checkin_dt_tm
FROM tracking_item ti
JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id
GROUP BY ti.encntr_id
) tci
ON tci.encntr_id = enc.encntr_id
WHERE enc.loc_facility_cd = '633867'
AND enc.encntr_complete_dt_tm < 4e12
AND enc.admit_type_cd!='0'
AND enc.encntr_type_class_cd='391'
AND enc.encntr_id IN (
SELECT DISTINCT ce.encntr_id
FROM clinical_event ce
WHERE ce.event_cd = '54411998'
AND ce.result_status_cd NOT IN ('31', '36')
AND ce.valid_until_dt_tm > 4e12
AND ce.event_class_cd not in ('654645')
)
;
"""
cur.execute(query)
df_rrt = as_pandas(cur)
df_rrt.head()
df_rrt.describe().T
# the mean stay is 292 hours (12.1 days).
# The median stay is 184 hours (7.67 days)
# The minimum stay is 8 hours. The longest stay is 3550 hours (~148 days)
plt.figure()
df_rrt.diff_hours.hist(bins = 300)
plt.xlim(0, 600)
# Records with short durations:
df_rrt[df_rrt.diff_hours < 12]
Explanation: Let's look at durations for inpatients WITH RRTs from the Main Hospital where encounter_admit_type is not zero
End of explanation
query = """
SELECT DISTINCT
ce.encntr_id
, COALESCE(tci.checkin_dt_tm
, enc.arrive_dt_tm) AS checkin_dt_tm
, enc.depart_dt_tm as depart_dt_tm
, (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours
, enc.reason_for_visit
, enc.admit_src_cd
, enc.admit_type_cd
FROM clinical_event ce
INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id
LEFT OUTER JOIN (
SELECT
ti.encntr_id AS encntr_id
, MIN(tc.checkin_dt_tm) AS checkin_dt_tm
FROM tracking_item ti
JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id
GROUP BY ti.encntr_id
) tci
ON tci.encntr_id = enc.encntr_id
WHERE enc.loc_facility_cd = '633867'
AND enc.encntr_complete_dt_tm < 4e12
AND enc.admit_type_cd!='0'
AND enc.encntr_type_class_cd='391'
AND enc.encntr_id NOT IN (
SELECT DISTINCT ce.encntr_id
FROM clinical_event ce
WHERE ce.event_cd = '54411998'
AND ce.result_status_cd NOT IN ('31', '36')
AND ce.valid_until_dt_tm > 4e12
AND ce.event_class_cd not in ('654645')
)
;
"""
cur.execute(query)
df_nonrrt = as_pandas(cur)
df_nonrrt.describe().T
# NonRRT: The mean stay is 122 hours (5 days) // RRT: The mean stay is 292 hours (12.1 days).
# NonRRT: The median stay is 77 hours (3.21 days)// RRT: The median stay is 184 hours (7.67 days)
# NonRRT: The minimum stay is 0.08 hours // RRT: The minimum stay is ~8 hours.
plt.figure()
df_nonrrt.diff_hours.hist(bins = 500)
plt.xlim(0, 600)
Explanation: Let's look at durations for inpatients WITHOUT RRTs from the Main Hospital where encounter_admit_type is not zero
End of explanation
plt.figure(figsize = (10,8))
df_rrt.diff_hours.plot.hist(alpha=0.4, bins=400,normed=True)
df_nonrrt.diff_hours.plot.hist(alpha=0.4, bins=800,normed=True)
plt.xlabel('Hospital Stay Durations, hours', fontsize=14)
plt.ylabel('Normalized Frequency', fontsize=14)
plt.legend(['RRT', 'Non RRT'])
plt.tick_params(labelsize=14)
plt.xlim(0, 1000)
Explanation: Plot both together to see how encounter duration distributions are different
End of explanation
print df_nonrrt.admit_type_cd.value_counts()
print
print df_rrt.admit_type_cd.value_counts()
print df_nonrrt.admit_src_cd.value_counts()
print
print df_rrt.admit_src_cd.value_counts()
plt.figure(figsize = (10,8))
df_rrt[df_rrt.admit_type_cd=='309203'].diff_hours.plot.hist(alpha=0.4, bins=300,normed=True)
df_nonrrt[df_nonrrt.admit_type_cd=='309203'].diff_hours.plot.hist(alpha=0.4, bins=600,normed=True)
# plt.xlabel('Hospital Stay Durations, hours', fontsize=14)
# plt.ylabel('Normalized Frequency', fontsize=14)
plt.legend(['RRT', 'Non RRT'])
plt.tick_params(labelsize=14)
plt.xlim(0, 1000)
plt.figure(figsize = (10,8))
df_rrt[df_rrt.admit_src_cd=='309196'].diff_hours.plot.hist(alpha=0.4, bins=300,normed=True)
df_nonrrt[df_nonrrt.admit_src_cd=='309196'].diff_hours.plot.hist(alpha=0.4, bins=600,normed=True)
# plt.xlabel('Hospital Stay Durations, days', fontsize=14)
# plt.ylabel('Normalized Frequency', fontsize=14)
plt.legend(['RRT', 'Non RRT'])
plt.tick_params(labelsize=14)
plt.xlim(0, 1000)
Explanation: Even accounting for the hospital, inpatient status, and some admit_type_cd values, the durations are still quite different between RRT & non-RRT.
Trying some subset visualizations -- these show no difference
End of explanation
# For encounters with RRT events
query = """
SELECT DISTINCT
ce.encntr_id
, COALESCE(tci.checkin_dt_tm
, enc.arrive_dt_tm) AS checkin_dt_tm
, enc.depart_dt_tm as depart_dt_tm
, (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours
, enc.reason_for_visit
, enc.admit_type_cd, cv_admit_type.description as admit_type_desc
, enc.encntr_type_cd
, cv_enc_type.description as enc_type_desc
, enc.encntr_type_class_cd
, cv_enc_type_class.description as enc_type_class_desc
, enc.admit_src_cd
, cv_admit_src.description as admit_src_desc
, enc.loc_facility_cd
, cv_loc_fac.description as loc_desc
FROM clinical_event ce
INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id
LEFT OUTER JOIN code_value cv_admit_type ON enc.admit_type_cd = cv_admit_type.code_value
LEFT OUTER JOIN code_value cv_enc_type ON enc.encntr_type_cd = cv_enc_type.code_value
LEFT OUTER JOIN code_value cv_enc_type_class ON enc.encntr_type_class_cd = cv_enc_type_class.code_value
LEFT OUTER JOIN code_value cv_admit_src ON enc.admit_src_cd = cv_admit_src.code_value
LEFT OUTER JOIN code_value cv_loc_fac ON enc.loc_facility_cd = cv_loc_fac.code_value
LEFT OUTER JOIN (
SELECT
ti.encntr_id AS encntr_id
, MIN(tc.checkin_dt_tm) AS checkin_dt_tm
FROM tracking_item ti
JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id
GROUP BY ti.encntr_id
) tci
ON tci.encntr_id = enc.encntr_id
WHERE enc.encntr_id IN (
SELECT DISTINCT ce.encntr_id
FROM clinical_event ce
WHERE ce.event_cd = '54411998'
AND ce.result_status_cd NOT IN ('31', '36')
AND ce.valid_until_dt_tm > 4e12
AND ce.event_class_cd not in ('654645')
)
;
"""
cur.execute(query)
df = as_pandas(cur)
df.describe().T
# check nulls
print df[pd.isnull(df.diff_hours)].count()
print
print df[~pd.isnull(df.diff_hours)].count()
df[pd.isnull(df.diff_hours)]
# can't work with the nans in there... delete these rows
print df.shape
df = df[~pd.isnull(df['depart_dt_tm'])]
df = df.reset_index(drop=True)
print df.shape
df.describe().T
# RRT encounters for all patients/hospitals
# All RRT: mean stay: 293.5 hours // NonRRT: The mean stay is 122 hours (5 days) // RRT: The mean stay is 292 hours (12.1 days).
# All RRT: median stay: 190 hours // NonRRT: The median stay is 77 hours (3.21 days)// RRT: The median stay is 184 hours (7.67 days)
# All RRT: min stay: 0 hours // NonRRT: The minimum stay is 0.08 hours // RRT: The minimum stay is ~8 hours.
# Let's be suspicious of short encounters, say, under 6 hours.
# There are two cases where the number of hours = 0; these both have admit_type_cd=0, loc_facility_cd=4382287, and encntr_type_class_cd=393
df[df.diff_hours < 6]
Explanation: Despite controlling for patient parameters, patients with RRT events stay in the hospital longer than patients without RRT events.
Rerun previous EDA on hospital & patient types
Let's take a step back and look at the encounter table, for all hospitals and patient types [but using corrected time duration].
End of explanation
plt.figure()
df['diff_hours'].plot.hist(bins=500)
plt.xlabel("Hospital Stay Duration, days")
plt.title("Range of stays, patients with RRT")
plt.xlim(0, 2000)
Explanation: The notebook Probe_encounter_types_classes explores admit type, class types & counts
End of explanation
df.head()
df.loc_desc.value_counts()
grouped = df.groupby('loc_desc')
grouped.describe()
Explanation: Group by facility
We want to pull from similar patient populations
End of explanation
df.diff_hours.hist(by=df.loc_desc, bins=300)
# Use locations 4382264, 4382273, 633867
plt.figure(figsize=(12, 6))
df[df['loc_facility_cd']=='633867']['diff_hours'].plot.hist(alpha=0.4, bins=300,normed=True)
df[df['loc_facility_cd']=='4382264']['diff_hours'].plot.hist(alpha=0.4, bins=300,normed=True)
df[df['loc_facility_cd']=='4382273']['diff_hours'].plot.hist(alpha=0.4, bins=300,normed=True)
plt.xlabel('Hospital Stay Durations, hours', fontsize=14)
plt.ylabel('Normalized Frequency', fontsize=14)
# plt.legend(['633867', '4382264', '4382273'])
plt.legend(["Main Hospital", "Sattelite Hospital 1", "Sattelite Hospital 2"])
plt.tick_params(labelsize=14)
plt.xlim(0, 1000)
Explanation: Most of the results come from 633867, or The Main Hospital
End of explanation
from scipy.stats import ks_2samp
ks_2samp(df[df['loc_facility_cd']=='633867']['diff_hours'],df[df['loc_facility_cd']=='4382264']['diff_hours'])
# Critical test statistic at alpha = 0.05: 1.36 * sqrt((n1+n2)/(n1*n2)) = 1.36*sqrt((1775+582)/(1775*582)) = 0.065
# 0.074 > 0.065 -> null hypothesis rejected at level 0.05. --> histograms are different
ks_2samp(df[df['loc_facility_cd']=='4382264']['diff_hours'], df[df['loc_facility_cd']=='4382273']['diff_hours'])
# Critical test statistic at alpha = 0.05: 1.36 * sqrt((n1+n2)/(n1*n2)) = 1.36*sqrt((997+582)/(997*582)) = 0.071
# 0.05 !> 0.071 -> fail to reject null hypothesis at level 0.05. --> histograms are similar
ks_2samp(df[df['loc_facility_cd']=='633867']['diff_hours'],df[df['loc_facility_cd']=='4382273']['diff_hours'])
# Critical test statistic at alpha = 0.05: 1.36 * sqrt((n1+n2)/(n1*n2)) = 1.36*sqrt((1775+997)/(1775*997)) = 0.054
# 0.094 > 0.054 -> null hypothesis rejected at level 0.05. --> histograms are different; p-value indicates they're very different
Explanation: Looks like these three locations (633867, 4382264, 4382273) have about the same distribution.
Appropriate test to verify this: 2-sample Kolmogorov-Smirnov, if you're willing to compare pairwise...other tests? Wikipedia has a good article with references: https://en.wikipedia.org/wiki/Kolmogorov–Smirnov_test. Null hypothesis: the samples come from the same distribution. The null hypothesis is rejected if the test statistic is greater than the critical value (see wiki article)
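As a small added helper (a sketch; 1.36 is the usual coefficient for the alpha = 0.05 approximation used in the comments above), the critical statistic can be computed as:
import numpy as np

def ks_critical_value(n1, n2, coeff=1.36):
    # Approximate two-sample KS critical statistic at alpha = 0.05
    return coeff * np.sqrt((n1 + n2) / float(n1 * n2))

# e.g. ks_critical_value(1775, 582) -> ~0.065, matching the comment above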
End of explanation
plt.figure(figsize=(10,8))
df[df['loc_facility_cd']=='633867']['diff_hours'].plot.hist(alpha=0.4, bins=500,normed=True)
df[df['loc_facility_cd']=='4382273']['diff_hours'].plot.hist(alpha=0.4, bins=700,normed=True)
plt.xlabel('Hospital Stay Durations, hours')
plt.legend(['633867', '4382273'])
plt.xlim(0, 1000)
Explanation: From scipy documentation: "If the KS statistic is small or the p-value is high, then we cannot reject the hypothesis that the distributions of the two samples are the same"
Null hypothesis: the distributions are the same.
Looks like samples from 4382273 are different... plot that & 633867
End of explanation
df.columns
df.admit_src_desc.value_counts()
df.enc_type_class_desc.value_counts()
# vast majority are inpatient
df.enc_type_desc.value_counts()
df.admit_type_desc.value_counts()
Explanation: Let's compare encounter duration histograms for patients with RRT & without RRT events, and see if there is a right subset of data to be selected for modeling
(There is)
End of explanation
# For encounters without RRT events, from Main Hospital.
# takes a while to run -- several minutes
query = """
SELECT DISTINCT
ce.encntr_id
, COALESCE(tci.checkin_dt_tm
, enc.arrive_dt_tm) AS checkin_dt_tm
, enc.depart_dt_tm as depart_dt_tm
, (enc.depart_dt_tm - COALESCE(tci.checkin_dt_tm, enc.arrive_dt_tm))/3600000 AS diff_hours
, enc.reason_for_visit
, enc.admit_type_cd
, cv_admit_type.description as admit_type_desc
, enc.encntr_type_cd
, cv_enc_type.description as enc_type_desc
, enc.encntr_type_class_cd
, cv_enc_type_class.description as enc_type_class_desc
, enc.admit_src_cd
, cv_admit_src.description as admit_src_desc
, enc.loc_facility_cd
, cv_loc_fac.description as loc_desc
FROM clinical_event ce
INNER JOIN encounter enc ON enc.encntr_id = ce.encntr_id
LEFT OUTER JOIN code_value cv_admit_type ON enc.admit_type_cd = cv_admit_type.code_value
LEFT OUTER JOIN code_value cv_enc_type ON enc.encntr_type_cd = cv_enc_type.code_value
LEFT OUTER JOIN code_value cv_enc_type_class ON enc.encntr_type_class_cd = cv_enc_type_class.code_value
LEFT OUTER JOIN code_value cv_admit_src ON enc.admit_src_cd = cv_admit_src.code_value
LEFT OUTER JOIN code_value cv_loc_fac ON enc.loc_facility_cd = cv_loc_fac.code_value
LEFT OUTER JOIN (
SELECT
ti.encntr_id AS encntr_id
, MIN(tc.checkin_dt_tm) AS checkin_dt_tm
FROM tracking_item ti
JOIN tracking_checkin tc ON ti.tracking_id = tc.tracking_id
GROUP BY ti.encntr_id
) tci
ON tci.encntr_id = enc.encntr_id
WHERE enc.loc_facility_cd='633867'
AND enc.encntr_id NOT IN (
SELECT DISTINCT ce.encntr_id
FROM clinical_event ce
WHERE ce.event_cd = '54411998'
AND ce.result_status_cd NOT IN ('31', '36')
AND ce.valid_until_dt_tm > 4e12
AND ce.event_class_cd not in ('654645')
)
;
"""
cur.execute(query)
df_nrrt = as_pandas(cur)
df_nrrt.describe()
df_nrrt[~pd.isnull(df_nrrt['depart_dt_tm'])].count()
# can't work with the nans in there... delete these rows
print df_nrrt.shape
df_nrrt = df_nrrt[~pd.isnull(df_nrrt['depart_dt_tm'])]
df_nrrt = df_nrrt.reset_index(drop=True)
print df_nrrt.shape
plt.figure(figsize=(10,8))
df[df['loc_facility_cd']=='633867']['diff_hours'].plot.hist(alpha=0.5, bins=500,normed=True)
df_nrrt['diff_hours'].plot.hist(alpha=0.5, bins=900,normed=True)
plt.xlabel('Stay Durations at Main Hospital [hours]')
plt.legend(['RRT patients', 'Non-RRT patients'])
plt.title('For all non-RRT patients')
plt.xlim(0, 800)
plt.figure(figsize=(10,8))
df[df['loc_facility_cd']=='633867']['diff_hours'][df.admit_type_cd != '0'].plot.hist(alpha=0.5, bins=500,normed=True)
df_nrrt['diff_hours'][df_nrrt.admit_type_cd != '0'].plot.hist(alpha=0.5, bins=900,normed=True)
plt.xlabel('Stay Durations at Main Hospital [hours]')
plt.legend(['RRT patients', 'Non-RRT patients'])
plt.title('For patients with admit_type_cd !=0')
plt.xlim(0, 800)
plt.figure(figsize=(10,8))
df[df['loc_facility_cd']=='633867']['diff_hours'][df.encntr_type_class_cd=='391'].plot.hist(alpha=0.5, bins=500,normed=True)
df_nrrt['diff_hours'][df_nrrt.encntr_type_class_cd=='391'].plot.hist(alpha=0.5, bins=900,normed=True)
plt.xlabel('Stay Durations at Main Hospital [hours]')
plt.legend(['RRT patients', 'Non-RRT patients'])
plt.title('For patients with encntr_type_class_cd=="391"')
plt.xlim(0, 800)
plt.figure(figsize=(10,8))
df[df['loc_facility_cd']=='633867']['diff_hours'][(df.encntr_type_class_cd=='391') & (df.admit_type_cd != '0')].plot.hist(alpha=0.5, bins=500,normed=True)
df_nrrt['diff_hours'][(df_nrrt.encntr_type_class_cd=='391') & (df_nrrt.admit_type_cd != '0')].plot.hist(alpha=0.5, bins=1000,normed=True)
plt.xlabel('Stay Durations at Main Hospital [hours]')
plt.legend(['RRT patients', 'Non-RRT patients'])
plt.title('For patients with encntr_type_class_cd=="391" & df.admit_type_cd != "0" ')
plt.xlim(0, 800)
df_nrrt.describe()
# There are values of diff_hours that are negative.
df_nrrt[df_nrrt.diff_hours<0].count()
# But, there are no such values after we correct for encounter type class & admit type
df_nrrt[(df_nrrt.encntr_type_class_cd=='391') & (df_nrrt.admit_type_cd != '0')][df_nrrt.diff_hours<0].count()
Explanation: Plot RRT & non-RRT with different codes
End of explanation |
8,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis of the collected data
Use of IPython to analyze and display the data collected during production. The diameter regulation is done through the filawinder control. The data analyzed are from June 16, 2015
The experiment data
Step1: We plot both diameters and the puller speed on the same figure
Step2: Comparison of Diameter X against Diameter Y to see the filament ratio
Step3: Data filtering
We assume samples with $d_x < 0.9$ or $d_y < 0.9$ are sensor errors, so we filter them out of the data set.
Step4: X/Y plot
Step5: We analyze the ratio data
Step6: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$ | Python Code:
# Import the libraries we use
import numpy as np
import pandas as pd
import seaborn as sns
# Show the version of each library used
print ("Numpy v{}".format(np.__version__))
print ("Pandas v{}".format(pd.__version__))
print ("Seaborn v{}".format(sns.__version__))
# Open the csv file with the sample data
datos = pd.read_csv('ensayo1.CSV')
%pylab inline
# Store in a list the columns of the file we will work with
columns = ['Diametro X','Diametro Y']
# Show a summary of the collected data
datos[columns].describe()
#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]
Explanation: Analysis of the collected data
Use of IPython to analyze and display the data collected during production. The diameter regulation is done through the filawinder control. The data analyzed are from June 16, 2015
The experiment data:
* Start time: 11:50
* End time: 12:20
* $T: 150ºC$
End of explanation
datos.ix[:, "Diametro X":"Diametro Y"].plot(figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: We plot both diameters and the puller speed on the same figure
End of explanation
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
Explanation: Comparison of Diameter X against Diameter Y to see the filament ratio
End of explanation
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
Explanation: Data filtering
We assume samples with $d_x < 0.9$ or $d_y < 0.9$ are sensor errors, so we filter them out of the data set.
End of explanation
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
Explanation: X/Y plot
End of explanation
ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']
ratio.describe()
rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
Explanation: We analyze the ratio data
End of explanation
Th_u = 1.85
Th_d = 1.65
data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
(datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
Explanation: Quality limits
We count the number of times the quality limits are exceeded.
$Th^+ = 1.85$ and $Th^- = 1.65$
End of explanation |
8,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cloud SQL basics
Cloud SQL is a fully-managed database service that makes it easy to set up, maintain, manage, and administer your relational databases on Google Cloud Platform.
Install and Import the dependencies
!pip install pymysql \
import pymysql
Create Cloud SQL Instance
Unlike BigQuery and GCS, Cloud SQL instances are not provisioned automatically; they must be created by the user, who is billed only for the resources used.
Follow the steps provided here to create a new Cloud SQL instance in your project.
Once the Cloud SQL instance starts running, do the following from the GCP Console
Step1: Invoke Cloud SQL proxy without using any authentication
Step2: Invoke Cloud SQL proxy using any Service account JSON
Step3: Initialize a connection
To use the Cloud SQL client library, start by initializing a pymysql connection. The connection is used to establish a bridge between your machine running JupyterLab and Cloud SQL instance
Run the following to create a connection
Step4: Create a cursor for this connection to interact with the database.
Step5: Create a new table
Step6: Insert value in the table
Step7: Read value from the table
Step8: Do other SQL operations using cursor
The cursor provides the execute() method to execute any SQL query
Cursor also provides fetchone() and fetchall() methods to display either just one row or all the rows from the result of the query, if any | Python Code:
!wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O cloud_sql_proxy
!chmod +x cloud_sql_proxy
Explanation: Cloud SQL basics
Cloud SQL is a fully-managed database service that makes it easy to set up, maintain, manage, and administer your relational databases on Google Cloud Platform.
Install and Import the dependencies
!pip install pymysql \
import pymysql
Create Cloud SQL Instance
Unlike BigQuery and GCS, Cloud SQL instances are not provisioned automatically; they must be created by the user, who is billed only for the resources used.
Follow the steps provided here to create a new Cloud SQL instance in your project.
Once the Cloud SQL instance starts running, do the following from the GCP Console (an equivalent gcloud sketch follows this list):
Create a new user, remember the username and password. \
Create a new database. \
Save the Instance Connection name from Instance description. \
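As an alternative to the Console, a possible gcloud sketch of the same steps (the instance, user, database and tier names below are placeholders, and flag spellings may vary across gcloud versions):
!gcloud sql instances create my-instance --database-version=MYSQL_5_7 --tier=db-n1-standard-1 --region=us-central1
!gcloud sql users create USERNAME --instance=my-instance --password=PASSWORD
!gcloud sql databases create DB --instance=my-instance
!gcloud sql instances describe my-instance --format='value(connectionName)'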
Download SQL proxy
The Cloud SQL Proxy provides secure access to your Cloud SQL Second Generation instances without having to allowlist IP addresses or configure SSL.
End of explanation
!./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306 &
Explanation: Invoke Cloud SQL proxy without using any authentication
End of explanation
!./cloud_sql_proxy -instances=<INSTANCE_CONNECTION_NAME>=tcp:3306 \
-credential_file=<PATH_TO_KEY_FILE> &
Explanation: Invoke Cloud SQL proxy using any Service account JSON
End of explanation
import pymysql
# TODO(developer):
# Change USERNAME and PASSWORD to the user and password created on Cloud SQL instance
# Set DB to the name of the database to be connected to
connection = pymysql.connect(host='127.0.0.1',
user='USERNAME',
password='PASSWORD',
db='DB')
Explanation: Initialize a connection
To use the Cloud SQL client library, start by initializing a pymysql connection. The connection is used to establish a bridge between your machine running JupyterLab and Cloud SQL instance
Run the following to create a connection:
End of explanation
mycursor = connection.cursor()
Explanation: Create a cursor for this connection to interact with the database.
End of explanation
mycursor.execute("create table EMPLOYEE ( \
EMP_ID bigint not null, \
EMP_NAME varchar(50) not null, \
EMP_NO varchar(20) not null, \
HIRE_DATE date not null, \
IMAGE longblob, \
JOB varchar(30) not null, \
SALARY float not null, \
DEPT_ID integer not null, \
MNG_ID bigint, \
primary key (EMP_ID), \
unique (EMP_NO) \
);")
mycursor.fetchall()
print(mycursor.description)
Explanation: Create a new table
End of explanation
mycursor.execute("insert into EMPLOYEE (EMP_ID, EMP_NAME, EMP_NO, HIRE_DATE, JOB, SALARY, DEPT_ID, MNG_ID) \
values (7839, 'KING', 'E7839', Str_To_Date('17-11-1981', '%d-%m-%Y'), 'PRESIDENT', 5000, 10, null);")
Explanation: Insert value in the table
End of explanation
mycursor.execute("SELECT * FROM EMPLOYEE")
mycursor.fetchall()
Explanation: Read value from the table
End of explanation
#Execute a SQL command
mycursor.execute(SQL_COMMAND)
# Display all the rows from output of the previous execution using fetchall()
mycursor.fetchall()
# Display only one row from output of the previous execution using fetchall()
mycursor.fetchone()
Explanation: Do other SQL operations using cursor
The cursor provides the execute() method to execute any SQL query
Cursor also provides fetchone() and fetchall() methods to display either just one row or all the rows from the result of the query, if any
End of explanation |
8,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 21
Step1: We define our functional. Note that the functional is only a function of the derivative of $y$ rather than $y$ alone.
Step2: You may be tempted to do this
Step3: You wouldn't be wrong to do the above, however things can be made a bit easier. Let us look at each term in the ELE (Euler-Lagrange Equations)
Step4: So - since the first term is zero we can use the fundamental theorem of the calculus to perform the first integral. We add a constant and solve for the derivative.
Step5: This clearly indicates that
Step6: And that is a linear function. To the extent that this is a proof - you've proven that a straight line is the shortest distance between two points.
Using SymPy's Functions
SymPy has an euler_equation function that we can try to use, too
Step7: A bit messy, but correct nonetheless.
DIY
Step8: Attacking the problem this way leads to a second order ODE that we need to integrate. Although this could be done - the lack of an explicit $x$ dependence permits using an identity that makes the problem a bit easier.
An equivalent statement of the ELE is
Step9: Now we solve the differential equation using dsolve.
Step10: DIY
Step11: DIY | Python Code:
%matplotlib notebook
import sympy as sp
sp.init_printing()
f = sp.symbols('f', cls=sp.Function)
x, y = sp.symbols('x, y', real=True)
Explanation: Lecture 21: The Calculus of Variations
What to Learn?
The concept of a "function of functions" and the definition of a functional
The concept of finding a function that makes the functional an extremum
How to practically compute the functional derivative for simple problems
The concept of a conserved and non-conserved order parameter
The definition of the non-classical chemical potential in a heterogeneous system
How to use the functional derivative to solve for the order parameter profile through an interface
What to do?
Recall the arc length formula
Write down a functional for all arc lengths between two points
Find the shortest path length between two points (minimize the functional)
Using the above process, find the shape of the minimum soapfilm between two rings
Using the above process, set up the differential equation for a heterogeneous chemical system
$$
F(y_x,y,x)
$$
A functional is a function of functions. It is necessary to treat $x$, $y$, and $y_x$ as independent (as though they are held constant during partial differentiation.
On Your Own
An (Imperfect but Colorful) Analogy
Using the calculus of variations is like this: you want to travel to Phoenix, AZ but you don't yet know the cheapest and fastest way to get there. So - you imagine a nearly exhaustive list of ways to travel there (including things like walking, giant trebuchets, teleportation, etc.) and work out the costs and time required to each mode of transportation. Once you evaluate all the modes (consider each mode as a different function of cost and time) - you pick the mode (function) that is optimal for cost and time.
In a picture, imagine all the functions that connect the two points "A" and "B". We are searching for the function that minimizes the path between "A" and "B" subject to whatever constraints we place on the path. The calculus of variations is a formal mathematical strategy for FINDING that function from all possible functions.
If calculus describes how numbers behave when mapped through functions, then the calculus of variations describe how functions behave when mapped through functions-of-functions.
Using the mean value theorem you can derive a formula for arc-length that reads:
$$
L(x,y,y_x) = \int_a^b \sqrt{1+ \left( \frac{dy}{dx} \right)^2} dx
$$
You can integrate this expression between two points $a$ and $b$ on a function $y(x)$ to get the length of the line between $a$ and $b$. In the CoV we call $F$ a functional. A functional is a function of functions.
The utility of the CoV is to produce a differential equation that is subsequently solved to produce a function that makes $F$ an extreme value. In this case we are searching for the function $y(x)$ that minimizes $F$ betwen two points. The CoV tells us that the following equation must be true for $y(x)$ to make $F$ an extreme value:
$$
\frac{\delta F}{\delta y} = \frac{\partial F}{\partial y} - \frac{d}{dx} \left( \frac{\partial F}{\partial y_x} \right)= 0
$$
This expression (for one dependent and one independent variable) is the core of the CoV. It is not the only result that can be developed, but for us this is the important one. We can start by writing the above equation "by hand".
We'll start with the usual imports:
End of explanation
f = sp.sqrt(1+(y(x).diff(x))**2)
f
Explanation: We define our functional. Note that the functional is only a function of the derivative of $y$ rather than $y$ alone.
End of explanation
f.diff(y)-(f.diff(y(x).diff(x))).diff(x)
Explanation: You may be tempted to do this:
End of explanation
firstTerm = f.diff(y(x))
firstTerm
secondTerm = f.diff(y(x).diff(x))
secondTerm
Explanation: You wouldn't be wrong to do the above, however things can be made a bit easier. Let us look at each term in the ELE (Euler-Lagrange Equations):
End of explanation
sp.var('C1')
integratedFunctional = sp.Eq(secondTerm,C1)
integratedFunctional
firstSolution = sp.solve(integratedFunctional, y(x).diff(x))
firstSolution
Explanation: So - since the first term is zero we can use the fundamental theorem of the calculus to perform the first integral. We add a constant and solve for the derivative.
End of explanation
functionalExtremizer = sp.dsolve(sp.Eq(y(x).diff(x),firstSolution[0]), y(x))
functionalExtremizer
Explanation: This clearly indicates that:
$$
\frac{dy}{dx} = C
$$
and from this point it should be clear that the function $y(x)$ that makes $F$ an extremum is:
$$
y = mx + b
$$
If you would like to have SymPy finish the calculation, you can write:
End of explanation
L = sp.sqrt(1+(y(x).diff(x))**2)
differentialEquationFromELFunction = sp.euler_equations(L, y(x), x)
differentialEquationFromELFunction
Explanation: And that is a linear function. To the extent that this is a proof - you've proven that a straight line is the shortest distance between two points.
Using SymPy's Functions
SymPy has an euler_equation function that we can try to use, too:
End of explanation
Lsoapfilm = y(x)*sp.sqrt(1+(y(x).diff(x))**2)
(sp.euler_equations(Lsoapfilm,y(x),x)[0].lhs).simplify()
Explanation: A bit messy, but correct nonetheless.
DIY: Find the Euler-Lagrange Equation (ELE)
Find the ELE for the functional, and if you can - solve for $y(x)$:
$$
v(y(x)) = \int_0^{\pi/2} (y_x^2 - y^2)dx
$$
The endpoint conditions are $y(0)=0$ and $y(\pi/2)=1$. For reference, the general solution is:
$$
y(x)=C_1 \sin(x) + C_2 \cos(x)
$$
Don't forget to check the end points of the domain to find the constants.
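A minimal SymPy sketch (added example) for checking the ELE and the quoted general solution; yf below is a local function symbol introduced just for this check:
yf = sp.Function('y')
Fdiy = yf(x).diff(x)**2 - yf(x)**2
sp.euler_equations(Fdiy, yf(x), x)        # gives y'' + y = 0, up to an overall factor
sp.dsolve(sp.Eq(yf(x).diff(x, 2) + yf(x), 0), yf(x))   # y(x) = C1*sin(x) + C2*cos(x)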
The Problem of a Minimum Soapfilm
A classic problem in wetting and capillary science is that of the minimum soapfilm between two rings. The soap film adopts a shape that minimizes its area.
The area of a soap film (found by rotating a curve through $2\pi$ around one axis) is given by:
$$
A = L(x,y,y_x) = \int_{x_1}^{x_2} 2 \pi y (1+y_x^2)^{1/2} dx
$$
Note there is no explicit x dependence.
End of explanation
C2 = sp.symbols('C2', positive=True)
beltramiODE = sp.Eq(Lsoapfilm - y(x).diff(x)*Lsoapfilm.diff(y(x).diff(x)),C2)
beltramiODE.simplify()
Explanation: Attacking the problem this way leads to a second order ODE that we need to integrate. Although this could be done - the lack of an explicit $x$ dependence permits using an identity that makes the problem a bit easier.
An equivalent statement of the ELE is:
$$
\frac{d}{dx} \left(F - y_x \frac{\partial F}{\partial y_x} \right) = \frac{\partial F}{\partial x}
$$
If there is no explicit $x$ dependence, the RHS of the above equation is zero and the first integral can be had for "free". Adding the integration constant we have:
$$
F - y_x \frac{\partial F}{\partial y_x} = C_2
$$
We can therefore write:
End of explanation
sp.dsolve(beltramiODE,y(x))
Explanation: Now we solve the differential equation using dsolve.
End of explanation
# Find the constants if the curve is required to pass through a pair of particular points.
Explanation: DIY: Use the general solution and find the constants.
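A possible sketch (added example) assuming dsolve returns a catenary of the form C2*cosh((x - C1)/C2); adjust the expression to whatever your session actually returns:
# Require the catenoid to pass through two rings of radius 2 at x = -1 and x = 1,
# then solve numerically for the two constants (local symbols avoid clobbering C2 above).
C1_, C2_ = sp.symbols('C1_ C2_', real=True)
catenoid = C2_*sp.cosh((x - C1_)/C2_)
sp.nsolve([catenoid.subs(x, -1) - 2, catenoid.subs(x, 1) - 2], [C1_, C2_], [0, 1])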
End of explanation
# Create an interactive widget to explore the values of the constants.
Explanation: DIY: Create a tool to explore the shape of different soapfilms.
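A possible sketch (added example) using ipywidgets as an assumed extra dependency; the catenary form mirrors the dsolve result above:
from ipywidgets import interact
import numpy as np
import matplotlib.pyplot as plt

def plot_soapfilm(C1=0.0, C2=1.0):
    # Plot one member of the catenary family between the two rings
    xv = np.linspace(-1, 1, 200)
    plt.plot(xv, C2*np.cosh((xv - C1)/C2))
    plt.xlabel('x')
    plt.ylabel('y(x)')
    plt.show()

interact(plot_soapfilm, C1=(-1.0, 1.0, 0.1), C2=(0.2, 3.0, 0.1))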
End of explanation |
8,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Facies classification - Sequential Feature Selection
<a rel="license" href="http
Step1: Make performance scorers
Step2: Sequential Feature Selection with mlxtend
http
Step3: The next cell will take many hours to run, skip it
Step4: Restart from here
Step5: It looks like the score stabilizes after about 6 features, reaches a max at 16, then begins to taper off after about 70 features. We will save the top 45 and the top 75. | Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.metrics import f1_score, accuracy_score, make_scorer
filename = 'engineered_features.csv'
training_data = pd.read_csv(filename)
training_data.describe()
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
y = training_data['Facies'].values
print y[25:40]
print np.shape(y)
X = training_data.drop(['Formation', 'Well Name','Facies'], axis=1)
print np.shape(X)
X.describe(percentiles=[.05, .25, .50, .75, .95])
scaler = preprocessing.StandardScaler().fit(X)
X = scaler.transform(X)
Explanation: Facies classification - Sequential Feature Selection
<a rel="license" href="http://creativecommons.org/licenses/by/4.0/"><img alt="Creative Commons License" style="border-width:0" src="https://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">The code and ideas in this notebook,</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Matteo Niccoli and Mark Dahl,</span> are licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/">Creative Commons Attribution 4.0 International License</a>.
The mlxtend library used for the sequential feature selection is by Sebastian Raschka.
End of explanation
Fscorer = make_scorer(f1_score, average = 'micro')
Explanation: Make performance scorers
End of explanation
from sklearn.ensemble import RandomForestClassifier
Explanation: Sequential Feature Selection with mlxtend
http://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/
End of explanation
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
clf = RandomForestClassifier(random_state=49)
sfs = SFS(clf,
k_features=100,
forward=True,
floating=False,
scoring=Fscorer,
cv = 8,
n_jobs = -1)
sfs = sfs.fit(X, y)
np.save('sfs_RF_metric_dict.npy', sfs.get_metric_dict())
Explanation: The next cell will take many hours to run, skip it
End of explanation
# load previously saved dictionary
read_dictionary = np.load('sfs_RF_metric_dict.npy').item()
# plot results
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
# run this twice
fig = plt.figure()
ax = plot_sfs(read_dictionary, kind='std_err')
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 22
fig_size[1] = 18
plt.title('Sequential Forward Selection (w. StdDev)')
plt.grid()
plt.xticks( rotation='vertical')
locs, labels = plt.xticks()
plt.xticks( locs, labels)
plt.show()
Explanation: Restart from here
End of explanation
# save results to dataframe
selected_summary = pd.DataFrame.from_dict(read_dictionary).T
selected_summary['index'] = selected_summary.index
selected_summary.sort_values(by='avg_score', ascending=0)
# save dataframe
selected_summary.to_csv('SFS_RF_selected_features_summary.csv', sep=',', header=True, index = False)
# re load saved dataframe and sort by score
filename = 'SFS_RF_selected_features_summary.csv'
selected_summary = pd.read_csv(filename)
selected_summary = selected_summary.set_index(['index'])
selected_summary.sort_values(by='avg_score', ascending=0).head()
# feature selection with highest score
selected_summary.iloc[44]['feature_idx']
slct = np.array([257, 3, 4, 6, 7, 8, 10, 12, 16, 273, 146, 19, 26, 27, 284, 285, 30, 34, 163, 1, 42, 179, 155, 181, 184, 58, 315, 190, 320, 193, 194, 203, 290, 80, 210, 35, 84, 90, 97, 18, 241, 372, 119, 120, 126])
slct
# isolate and save selected features
filename = 'engineered_features_validation_set2.csv'
training_data = pd.read_csv(filename)
X = training_data.drop(['Formation', 'Well Name'], axis=1)
Xs = X.iloc[:, slct]
Xs = pd.concat([training_data[['Depth', 'Well Name', 'Formation']], Xs], axis = 1)
print np.shape(Xs), list(Xs)
Xs.to_csv('SFS_top45_selected_engineered_features_validation_set.csv', sep=',', index=False)
# feature selection with highest score
selected_summary.iloc[74]['feature_idx']
slct = np.array([257, 3, 4, 5, 6, 7, 8, 265, 10, 12, 13, 16, 273, 18, 19, 26, 27, 284, 285, 30, 34, 35, 1, 42, 304, 309, 313, 58, 315, 319, 320, 75, 80, 338, 84, 341, 89, 90, 92, 97, 101, 102, 110, 372, 119, 120, 122, 124, 126, 127, 138, 139, 146, 155, 163, 165, 167, 171, 177, 179, 180, 181, 184, 190, 193, 194, 198, 203, 290, 210, 211, 225, 241, 249, 253])
slct
# isolate and save selected features
filename = 'engineered_features_validation_set2.csv'
training_data = pd.read_csv(filename)
X = training_data.drop(['Formation', 'Well Name'], axis=1)
Xs = X.iloc[:, slct]
Xs = pd.concat([training_data[['Depth', 'Well Name', 'Formation']], Xs], axis = 1)
print np.shape(Xs), list(Xs)
Xs.to_csv('SFS_top75_selected_engineered_features_validation_set.csv', sep=',', index=False)
Explanation: It looks like the score stabilizes after about 6 features, reaches a max at 16, then begins to taper off after about 70 features. We will save the top 45 and the top 75.
End of explanation |
8,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lesson 37
Step1: This code produces an incorrect response. The debugger can be used to go through the program line by line and find the errors.
Step2: The problem with the program was that input() stores its responses as 'strings', and therefore concatenates as a string, not an integer. Adding an int() conversion fixes this problem
Step3: Stepping(s) into the next step of the program is more than just going to the next (n) or continuing (c) the program.
It actually goes into the code of the function being called at that line.
Step4: Tracing the function produces the following output
Step5: Stepping is useful, but can be really slow.
One way is to let the program run normally until it reaches a point of interest, i.e. a breakpoint.
Coin Flip Program
Step6: Factorial Program
Create a program that can return the n! of a number (5! = 1 * 2 * 3 * 4 * 5 = 120).
Step7: Using continue(c) runs through every iteration, which means debugging this code would take a lot of work, since we'd have to go through every trial.
A faster option is to step right into the first if statement using the until(un) command, which continues the program until a line beyond the current one is reached, e.g. by finishing the loop.
Once at that point, you can also use break(b) to set a breakpoint there for future jumping. | Python Code:
def simpleAdd():
print('Enter the first number to add:')
first = input()
print('Enter the second number to add:')
second = input()
print('Enter the third number to add:')
third = input()
print('The sum is ' + first + second + third +'.')
simpleAdd()
Explanation: Lesson 37:
Using the Debugger
A debugger allows you to run a program line by line, in order to debug it at various stages.
NB: The ipdb module does not seem to play well in Jupyter, and may occasionally require a kernal restart.
In iPython, the ipdb module holds an interactive debugging tool that acts similarly to the IDLE editor used in the book.
It is started with idpb.set_trace(), and can take various arguments in the interactive prompt to provide information.
ipdb (iPython Debugger) Commands:
* the n(next) command continues program execution to the next line in the code.
* the s(step) command steps to the next line, whether it is in the current code or inside the function being called on that line (e.g. print() or input() here).
* the c(continue) command continues until another breakpoint.
* The l(list) will list a larger portion of the code.
* The p(print) or pp(pretty print) will print an argument.
* The a(arguments) will return the current arguments.
* The j(jump) command jumps to a line of code, skipping any other lines.
* The q(quit) command quits the iPython debugger.
* The ?(help) shows all available commands.
Use Cases:
* pp locals() to pretty print local variables (requires pprint module.)
* pp globals() to pretty print global variables.
* Change variables while the program is running by editing the locals() on the fly; step back and forward through the code with n and j.
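To make these commands concrete, a hypothetical debugging session might look like the transcript below (shown entirely as comments; the function and the line number used with b are made up for illustration):
# import ipdb
#
# def buggy_sum(values):
#     total = 0
#     ipdb.set_trace()      # execution pauses here and drops into the debugger
#     for v in values:
#         total = total + v
#     return total
#
# buggy_sum([1, 2, 3])
#
# ipdb> n          # run the next line
# ipdb> p total    # print the current value of total
# ipdb> b 8        # set a breakpoint at line 8
# ipdb> c          # continue until the breakpoint is hit
# ipdb> q          # quit the debugger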
Simple Adding Program:
End of explanation
import ipdb # Import the iPython debugger
def simpleAdd():
ipdb.set_trace() # Invoke the iPython debugger
print('Enter the first number to add:')
first = input()
print('Enter the second number to add:')
second = input()
print('Enter the third number to add:')
third = input()
print('The sum is ' + first + second + third +'.')
simpleAdd()
Explanation: This code produces an incorrect response. The debugger can be used to go through the program line by line and find the errors.
End of explanation
#import ipdb # Import the iPython debugger
def simpleAdd():
#ipdb.set_trace() # Invoke the iPython debugger
print('Enter the first number to add:')
first = int(input())
print('Enter the second number to add:')
second = int(input())
print('Enter the third number to add:')
third = int(input())
print('The sum is ' + str(first + second + third) +'.')
simpleAdd()
Explanation: The problem with the program was that input() stores its responses as 'strings', and therefore concatenates as a string, not an integer. Casting the responses with int() fixes this problem:
End of explanation
def blah():
print('blah')
print('blah')
print('blah')
moreblah()
print('blah')
print('blah')
print('blah')
evenmoreblah()
def moreblah():
print('more blah')
print('more blah')
print('more blah')
evenmoreblah()
def evenmoreblah():
print('even more blah')
print(blah())
Explanation: Stepping (s) does more than just moving to the next line (n) or continuing (c) the program.
It actually goes into the code of the function being called at that line.
End of explanation
import ipdb # Import iPython Debugger
def blah():
ipdb.set_trace() # Invoke the iPython debugger
print('blah')
print('blah')
print('blah')
moreblah()
print('blah')
print('blah')
print('blah')
evenmoreblah()
def moreblah():
print('more blah')
print('more blah')
print('more blah')
evenmoreblah()
def evenmoreblah():
print('even more blah')
print(blah())
Explanation: Tracing the function produces the following output:
End of explanation
import random
heads = 0
for i in range(1,1001):
if random.randint(0, 1) == 1:
heads = heads + 1
if i == 500:
print('Halfway done!')
print('Heads came up ' + str(heads) + ' times.')
Explanation: Stepping is useful, but can be really slow.
One way around this is to let the program run normally until it reaches a point of interest, i.e. to use breakpoints.
Coin Flip Program:
End of explanation
import ipdb # Import iPython Debugger
import random
heads = 0
for i in range(1,1001):
ipdb.set_trace() # Invoke the iPython debugger
if random.randint(0, 1) == 1:
heads = heads + 1
if i == 500:
print('Halfway done!')
print('Heads came up ' + str(heads) + ' times.')
Explanation: Factorial Program
Create a program that can return the n! of a number (5! = 1 * 2 * 3 * 4 * 5 = 120).
End of explanation
import ipdb # Import iPython Debugger
import random
heads = 0
for i in range(1,1001):
ipdb.set_trace() # Invoke the iPython debugger
if random.randint(0, 1) == 1:
heads = heads + 1
if i == 500:
print('Halfway done!')
print('Heads came up ' + str(heads) + ' times.')
Explanation: Using continue(c) runs through every iteration, which means debugging this code would take a lot of work, since we'd have to go through every trial.
A faster option is merely to step right into the first if statement using the until(un) command.
Once at that point, you can also use the break('b') to set a breakpoint there for future jumping.
This runs the program until a line with a greater number is reached, i.e. until the current pass through the loop finishes:
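A hypothetical transcript of that workflow for the coin flip program above (commands only, output omitted; the breakpoint line number is made up):
# ipdb> until       # run ahead until a line with a greater number is reached
# ipdb> b 6         # set a breakpoint at the line inside the first if statement
# ipdb> c           # continue; execution now stops only at the breakpoint
# ipdb> p heads     # inspect the running count of heads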
End of explanation |
8,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises about Numpy and MLLib Data Types
Notebook version
Step1: 1. Objectives
This notebook reviews some of the Python modules that make it possible to work with data structures in an easy an efficient manner. We will start by reviewing Numpy arrays and matrices, and some of the common operations which are needed when working with these data structures in Machine Learning. The second part of the notebook will present some of the data types inherent to MLlib, and explain the basics of distributing data sets for parallel optimization of models
2. Numpy exercises
2.1. Create numpy arrays of different types
The following code fragment defines variable x as a list of 4 integers, you can check that by printing the type of any element of x. Use python command map() to create a new list with the same elements as x, but where each element of the list is a float.
Step2: Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), as well as random number generators. Alternatively, you can easily generate them from python lists (or lists of lists) containing elements of numeric type.
You can easily check the shape of any numpy vector with the property .shape, and reshape it with the method reshape(). Note the difference between 1-D and N-D numpy arrays (ndarrays). You should also be aware of the existance of another numpy data type
Step3: Some other useful Numpy methods are
Step4: 2.2. Products and powers of numpy arrays and matrices
* and ** when used with Numpy arrays implement elementwise product and exponentiation
* and ** when used with Numpy matrices implement matrix product and exponentiation
Method np.dot() implements matrix multiplication, and can be used both with numpy arrays and matrices.
So you have to be careful about the types you are using for each variable
Step5: 2.3. Numpy methods that can be carried out along different dimensions
Compare the result of the following commands
Step6: Other numpy methods where you can specify the axis along with a certain operation should be carried out are
Step7: 2.5. Slicing
Particular elements of numpy arrays (both unidimensional and multidimensional) can be accessed using standard python slicing. When working with multidimensional arrays, slicing can be carried out along the different dimensions at once
Step8: 2.6 Matrix inversion
Non singular matrices can be inverted with method np.linalg.inv(). Invert square matrices $X\cdot X^\top$ and $X^\top \cdot X$, and see what happens when trying to invert a singular matrix. The rank of a matrix can be studied with method numpy.linalg.matrix_rank().
Step9: 2.7 Exercises
In this section, you will complete three exercises where you will carry out some common operations when working with data structures. For this exercise you will work with the 2-D numpy array X, assuming that it contains the values of two different variables for 8 data patterns. A first column of ones has already been introduced in a previous exercise
Step10: 2.7.1. Non-linear transformations
Create a new matrix Z, where additional features are created by carrying out the following non-linear transformations
Step11: If you did not do that, repeat the previous exercise, this time using the map() method together with function log_transform()
Step12: Repeat the previous exercise once again using a lambda function
Step13: 2.7.2. Polynomial transformations
Similarly to the previous exercise, now we are interested in obtaining another matrix that will be used to evaluate a polynomial model. In order to do so, compute matrix Z_poly as follows
Step14: 2.7.3. Model evaluation
Finally, we can use previous data matrices Z and Z_poly to efficiently compute the output of the corresponding non-linear models over all the patterns in the data set. In this exercise, we consider the two following linear-in-the-parameters models to be evaluated
Step15: 3. MLlib Data types
MLlib is Apache Spark's scalable machine learning library. It implements several machine learning methods that can work over data distributed by means of RDDs. The regression methods that are part of MLlib are
Step16: DenseVectors can be created from lists or from numpy arrays
SparseVector constructor requires three arguments
Step17: 3.2. Labeled point
An associaation of a local vector and a label
The label is a double (also in classification)
Supervised MLlib methods rely on datasets of labeled points
In regression,the label can be any real number
In classification, labels are class indices starting from zero | Python Code:
# Import some libraries that will be necessary for working with data and displaying plots
# To visualize plots in the notebook
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import scipy.io # To read matlab files
import pylab
from test_helper import Test
Explanation: Exercises about Numpy and MLLib Data Types
Notebook version: 1.0 (Mar 15, 2016)
Author: Jerónimo Arenas García ([email protected])
Changes: v.1.0 - First version
Pending changes: *
End of explanation
x = [5, 4, 3, 4]
print type(x[0])
# Create a list of floats containing the same elements as in x
x_f = <FILL IN>
Test.assertTrue(np.all(x == x_f), 'Elements of both lists are not the same')
Test.assertTrue(((type(x[-2])==int) & (type(x_f[-2])==float)),'Type conversion incorrect')
Explanation: 1. Objectives
This notebook reviews some of the Python modules that make it possible to work with data structures in an easy and efficient manner. We will start by reviewing Numpy arrays and matrices, and some of the common operations which are needed when working with these data structures in Machine Learning. The second part of the notebook will present some of the data types inherent to MLlib, and explain the basics of distributing data sets for parallel optimization of models.
2. Numpy exercises
2.1. Create numpy arrays of different types
The following code fragment defines variable x as a list of 4 integers, you can check that by printing the type of any element of x. Use python command map() to create a new list with the same elements as x, but where each element of the list is a float.
End of explanation
# Numpy arrays can be created from numeric lists or using different numpy methods
y = np.arange(8)+1
x = np.array(x_f)
# Check the different data types involved
print 'The type of variable x_f is ', type(x_f)
print 'The type of variable x is ', type(x)
print 'The type of variable y is ', type(y)
# Print the shapes of the numpy arrays
print 'Variable y has dimensions ', y.shape
print 'Variable x has dimensions ', x.shape
#Complete the following exercises
# Convert x into a variable x_matrix, of type `numpy.matrixlib.defmatrix.matrix` using command
# np.matrix(). The resulting matrix should be of dimensions 4x1
x_matrix = <FILL IN>
# Convert x into a variable x_array, of type `ndarray`, and dimensions 4x2
x_array = <FILL IN>
# Reshape array y into a 4x2 matrix using command np.reshape()
y = <FILL IN>
Test.assertEquals(type(x_matrix),np.matrixlib.defmatrix.matrix,'x_matrix is not defined as a matrix')
Test.assertEqualsHashed(x_matrix,'f4239d385605dc62b73c9a6f8945fdc65e12e43b','Incorrect variable x_matrix')
Test.assertEquals(type(x_array),np.ndarray,'x_array is not defined as a numpy ndarray')
Test.assertEqualsHashed(x_array,'f4239d385605dc62b73c9a6f8945fdc65e12e43b','Incorrect variable x_array')
Test.assertEquals(type(y),np.ndarray,'y is not defined as a numpy ndarray')
Test.assertEqualsHashed(y,'66d90401cb8ed9e1b888b76b0f59c23c8776ea42','Incorrect variable y')
Explanation: Numpy arrays can be defined directly using methods such as np.arange(), np.ones(), np.zeros(), as well as random number generators. Alternatively, you can easily generate them from python lists (or lists of lists) containing elements of numeric type.
You can easily check the shape of any numpy vector with the property .shape, and reshape it with the method reshape(). Note the difference between 1-D and N-D numpy arrays (ndarrays). You should also be aware of the existence of another numpy data type: Numpy matrices (http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.matrix.html) are inherently 2-D structures where operators * and ** have the meaning of matrix multiplication and matrix power.
In the code below, you can check the types and shapes of different numpy arrays. Complete also the exercise where you are asked to convert a unidimensional array into a vector of size $4\times2$.
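As a quick illustration of that difference (with throwaway variables, separate from the exercise), compare how the * operator behaves on an ndarray versus the equivalent matrix:
a_demo = np.array([[1, 2], [3, 4]])
m_demo = np.matrix([[1, 2], [3, 4]])
print 'Elementwise product (ndarray):\n', a_demo * a_demo   # [[ 1  4] [ 9 16]]
print 'Matrix product (matrix):\n', m_demo * m_demo         # [[ 7 10] [15 22]]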
End of explanation
print 'Using flatten on matrix x_matrix (of type matrix)'
print 'x_matrix.flatten(): ', x_matrix.flatten()
print 'Its type is: ', type(x_matrix.flatten())
print 'Its dimensions are: ', x_matrix.flatten().shape
print '\nUsing flatten on array y (of type ndarray)'
print 'y.flatten(): ', y.flatten()
print 'Its type is: ', type(y.flatten())
print 'Its dimensions are: ', y.flatten().shape
print '\nUsing tolist on matrix x_matrix (of type matrix) and the (2D) array y (of type ndarray)'
print 'x_matrix.tolist(): ', x_matrix.tolist()
print 'y.tolist(): ', y.tolist()
Explanation: Some other useful Numpy methods are:
np.flatten(): converts a numpy array or matrix into a vector by concatenating the elements in the different dimension. Note that the result of the method keeps the type of the original variable, so the result is a 1-D ndarray when invoked on a numpy array, and a numpy matrix (and necessarily 2-D) when invoked on a matrix.
np.tolist(): converts a numpy array or matrix into a python list.
These uses are illustrated in the code fragment below.
End of explanation
# Try to run the following command on variable x_matrix, and see what happens
print x_array**2
# Try to run the following command on variable x_matrix, and see what happens
print 'Remember that the shape of x_array is ', x_array.shape
print 'Remember that the shape of y is ', y.shape
# Complete the following exercises. You can print the partial results to visualize them
# Multiply the 2-D array `y` by 2
y_by2 = <FILL IN>
# Multiply each of the columns in `y` by the column vector x_array
z_4_2 = <FILL IN>
# Obtain the matrix product of the transpose of x_array and y
x_by_y = <FILL IN>
# Repeat the previous calculation, this time using x_matrix (of type numpy matrix) instead of x_array
# Note that in this case you do not need to use method dot()
x_by_y2 = <FILL IN>
# Multiply vector x_array by its transpose to obtain a 4 x 4 matrix
x_4_4 = <FILL IN>
# Multiply the transpose of vector x_array by vector x_array. The result is the squared-norm of the vector
x_norm2 = <FILL IN>
Test.assertEqualsHashed(y_by2,'120a3a46cdf65dc239cc9b128eb1336886c7c137','Incorrect result for variable y_by2')
Test.assertEqualsHashed(z_4_2,'607730d96899ee27af576ecc7a4f1105d5b2cfed','Incorrect result for variable z_4_2')
Test.assertEqualsHashed(x_by_y,'a3b24f229d1e02fa71e940adc0a4135779864358','Incorrect result for variable x_by_y')
Test.assertEqualsHashed(x_by_y2,'a3b24f229d1e02fa71e940adc0a4135779864358','Incorrect result for variable x_by_y2')
Test.assertEqualsHashed(x_4_4,'fff55c032faa93592e5d27bf13da9bb49c468687','Incorrect result for variable x_4_4')
Test.assertEqualsHashed(x_norm2,'6eacac8f346bae7b5c72bcc3381c7140eaa98b48','Incorrect result for variable x_norm2')
Explanation: 2.2. Products and powers of numpy arrays and matrices
* and ** when used with Numpy arrays implement elementwise product and exponentiation
* and ** when used with Numpy matrices implement matrix product and exponentiation
Method np.dot() implements matrix multiplication, and can be used both with numpy arrays and matrices.
So you have to be careful about the types you are using for each variable
End of explanation
print z_4_2.shape
print np.mean(z_4_2)
print np.mean(z_4_2,axis=0)
print np.mean(z_4_2,axis=1)
Explanation: 2.3. Numpy methods that can be carried out along different dimensions
Compare the result of the following commands:
End of explanation
# Previous check that you are working with the right matrices
Test.assertEqualsHashed(z_4_2,'607730d96899ee27af576ecc7a4f1105d5b2cfed','Wrong value for variable z_4_2')
Test.assertEqualsHashed(x_array,'f4239d385605dc62b73c9a6f8945fdc65e12e43b','Wrong value for variable x_array')
# Vertically stack matrix z_4_2 with itself
ex1_res = <FILL IN>
# Horizontally stack matrix z_4_2 and vector x_array
ex2_res = <FILL IN>
# Horizontally stack a column vector of ones with the result of the first exercise (variable ex1_res)
X = <FILL IN>
Test.assertEqualsHashed(ex1_res,'31e60c0fa3e3accedc7db24339452085975a6659','Wrong value for variable ex1_res')
Test.assertEqualsHashed(ex2_res,'189b90c5b2113d2415767915becb58c6525519b7','Wrong value for variable ex2_res')
Test.assertEqualsHashed(X,'426c2708350ac469bc2fc4b521e781b36194ba23','Wrong value for variable X')
Explanation: Other numpy methods where you can specify the axis along which a certain operation should be carried out are:
np.median()
np.std()
np.var()
np.percentile()
np.sort()
np.argsort()
If the axis argument is not provided, the array is flattened before carrying out the corresponding operation.
2.4. Concatenating matrices and vectors
Provided that the necessary dimensions fit, horizontal and vertical stacking of matrices can be carried out with methods np.hstack() and np.vstack().
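As a small illustration before the exercises (with throwaway matrices, not the ones used below):
A_demo = np.ones((2, 2))
B_demo = np.zeros((2, 2))
print 'vstack shape:', np.vstack((A_demo, B_demo)).shape   # (4, 2): stacked on top of each other
print 'hstack shape:', np.hstack((A_demo, B_demo)).shape   # (2, 4): placed side by side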
Complete the following exercises to practice with matrix concatenation:
End of explanation
# Keep last row of matrix X
X_sub1 = <FILL IN>
# Keep first column of the three first rows of X
X_sub2 = <FILL IN>
# Keep first two columns of the three first rows of X
X_sub3 = <FILL IN>
# Invert the order of the rows of X
X_sub4 = <FILL IN>
Test.assertEqualsHashed(X_sub1,'0bcf8043a3dd569b31245c2e991b26686305b93f','Wrong value for variable X_sub1')
Test.assertEqualsHashed(X_sub2,'7c43c1137480f3bfea7454458fcfa2bc042630ce','Wrong value for variable X_sub2')
Test.assertEqualsHashed(X_sub3,'3cddc950ea2abc256192461728ef19d9e1d59d4c','Wrong value for variable X_sub3')
Test.assertEqualsHashed(X_sub4,'33190dec8f3cbe3ebc9d775349665877d7b892dd','Wrong value for variable X_sub4')
Explanation: 2.5. Slicing
Particular elements of numpy arrays (both unidimensional and multidimensional) can be accessed using standard python slicing. When working with multidimensional arrays, slicing can be carried out along the different dimensions at once
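For instance, with a small throwaway array (the exercise below uses X instead):
M_demo = np.arange(12).reshape(3, 4)
print M_demo[0, :]        # first row -> [0 1 2 3]
print M_demo[:, 1]        # second column -> [1 5 9]
print M_demo[::-1, :2]    # rows in reverse order, first two columns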
End of explanation
print X.shape
print X.dot(X.T)
print X.T.dot(X)
print np.linalg.inv(X.T.dot(X))
#print np.linalg.inv(X.dot(X.T))
Explanation: 2.6 Matrix inversion
Non singular matrices can be inverted with method np.linalg.inv(). Invert square matrices $X\cdot X^\top$ and $X^\top \cdot X$, and see what happens when trying to invert a singular matrix. The rank of a matrix can be studied with method numpy.linalg.matrix_rank().
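For instance, the ranks of the two products used above can be checked as follows (X is 8x3 here, so X.T.dot(X) is expected to have full rank 3, while the 8x8 matrix X.dot(X.T) has rank at most 3 and is therefore singular):
print 'rank of X.T.dot(X):', np.linalg.matrix_rank(X.T.dot(X))
print 'rank of X.dot(X.T):', np.linalg.matrix_rank(X.dot(X.T))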
End of explanation
Test.assertEqualsHashed(X,'426c2708350ac469bc2fc4b521e781b36194ba23','Wrong value for variable X')
Explanation: 2.7 Exercises
In this section, you will complete three exercises where you will carry out some common operations when working with data structures. For this exercise you will work with the 2-D numpy array X, assuming that it contains the values of two different variables for 8 data patterns. A first column of ones has already been introduced in a previous exercise:
$$X = \left[ \begin{array}{ccc} 1 & x_1^{(1)} & x_2^{(1)} \ 1 & x_1^{(2)} & x_2^{(2)} \ \vdots & \vdots & \vdots \ 1 & x_1^{(8)} & x_2^{(8)}\end{array}\right]$$
First of all, let us check that you are working with the right matrix
End of explanation
# Obtain matrix Z
Z = <FILL IN>
Test.assertEqualsHashed(Z,'d68d0394b57b4583ba95fc669c1c12aeec782410','Incorrect matrix Z')
Explanation: 2.7.1. Non-linear transformations
Create a new matrix Z, where additional features are created by carrying out the following non-linear transformations:
$$Z = \left[ \begin{array}{ccc} 1 & x_1^{(1)} & x_2^{(1)} & \log\left(x_1^{(1)}\right) & \log\left(x_2^{(1)}\right)\ 1 & x_1^{(2)} & x_2^{(2)} & \log\left(x_1^{(2)}\right) & \log\left(x_2^{(2)}\right) \ \vdots & \vdots & \vdots \ 1 & x_1^{(8)} & x_2^{(8)} & \log\left(x_1^{(8)}\right) & \log\left(x_2^{(8)}\right)\end{array}\right] = \left[ \begin{array}{ccc} 1 & z_1^{(1)} & z_2^{(1)} & z_3^{(1)} & z_4^{(1)}\ 1 & z_1^{(2)} & z_2^{(2)} & z_3^{(1)} & z_4^{(1)} \ \vdots & \vdots & \vdots \ 1 & z_1^{(8)} & z_2^{(8)} & z_3^{(1)} & z_4^{(1)} \end{array}\right]$$
In other words, we are calculating the logarithmic values of the two original variables. From now on, any function involving linear transformations of the variables in Z will in fact be a non-linear function of the original variables.
End of explanation
def log_transform(x):
return <FILL IN>
Z_map = np.array(map(log_transform,X))
Test.assertEqualsHashed(Z_map,'d68d0394b57b4583ba95fc669c1c12aeec782410','Incorrect matrix Z')
Explanation: If you did not do that, repeat the previous exercise, this time using the map() method together with function log_transform():
End of explanation
Z_lambda = np.array(map(lambda x: <FILL IN>,X))
Test.assertEqualsHashed(Z_lambda,'d68d0394b57b4583ba95fc669c1c12aeec782410','Incorrect matrix Z')
Explanation: Repeat the previous exercise once again using a lambda function:
End of explanation
# Calculate variable Z_poly, using any method that you want
Z_poly = <FILL IN>
Test.assertEqualsHashed(Z_poly,'ba0f38316dffe901b6c7870d13ccceccebd75201','Wrong variable Z_poly')
Explanation: 2.7.2. Polynomial transformations
Similarly to the previous exercise, now we are interested in obtaining another matrix that will be used to evaluate a polynomial model. In order to do so, compute matrix Z_poly as follows:
$$Z_\text{poly} = \left[ \begin{array}{cccc} 1 & x_1^{(1)} & (x_1^{(1)})^2 & (x_1^{(1)})^3 \ 1 & x_1^{(2)} & (x_1^{(2)})^2 & (x_1^{(2)})^3 \ \vdots & \vdots & \vdots \ 1 & x_1^{(8)} & (x_1^{(8)})^2 & (x_1^{(8)})^3 \end{array}\right]$$
Note that, in this case, only the first variable of each pattern is used.
End of explanation
w_log = np.array([3.3, 0.5, -2.4, 3.7, -2.9])
w_poly = np.array([3.2, 4.5, -3.2, 0.7])
f_log = <FILL IN>
f_poly = <FILL IN>
Test.assertEqualsHashed(f_log,'cf81496c5371a0b31931625040f460ed3481fb3d','Incorrect evaluation of the logarithmic model')
Test.assertEqualsHashed(f_poly,'05307e30124daa103c970044828f24ee8b1a0bb9','Incorrect evaluation of the polynomial model')
Explanation: 2.7.3. Model evaluation
Finally, we can use previous data matrices Z and Z_poly to efficiently compute the output of the corresponding non-linear models over all the patterns in the data set. In this exercise, we consider the two following linear-in-the-parameters models to be evaluated:
$$f_\text{log}({\bf x}) = w_0 + w_1 \cdot x_1 + w_2 \cdot x_2 + w_3 \cdot \log(x_1) + w_4 \cdot \log(x_2)$$
$$f_\text{poly}({\bf x}) = w_0 + w_1 \cdot x_1 + w_2 \cdot x_1^2 + w_3 \cdot x_1^3$$
Compute the output of the two models for the particular weights that are defined in the code below. Your output variables f_log and f_poly should contain the outputs of the model for all eight patterns in the data set.
End of explanation
# Import additional libraries for this part
from pyspark.mllib.linalg import DenseVector
from pyspark.mllib.linalg import SparseVector
from pyspark.mllib.regression import LabeledPoint
Explanation: 3. MLlib Data types
MLlib is Apache Spark's scalable machine learning library. It implements several machine learning methods that can work over data distributed by means of RDDs. The regression methods that are part of MLlib are:
linear least squares
Lasso
ridge regression
isotonic regression
random forests
gradient-boosted trees
We will just use the three first methods, and we will also work on an implementation of KNN regression over Spark, using the Data types provided by MLlib.
3.1. Local Vectors
Integer-typed and 0-based indices
Double-typed values
Stored on a single machine
Two kinds of vectors provided:
DenseVector: a double array with the entries values
SparseVector: backed up by two parallel arrays: indices and values
<img src="./figs/vector_representation.jpg" width="80%">
End of explanation
# We create a sparse vector of length 900, with only 25 non-zero values
Z = np.eye(30, k=5).flatten()
print 'The dimension of array Z is ', Z.shape
# Create a DenseVector containing the elements of array Z
dense_V = <FILL IN>
#Create a SparseVector containing the elements of array Z
#Nonzero elements are indexed by the following variable idx_nonzero
idx_nonzero = np.nonzero(Z)[0]
sparse_V = <FILL IN>
#Standard matrix operations can be computed on DenseVectors and SparseVectors
#Calculate the square norm of vector sparse_V, by multiplying sparse_V by the transpose of dense_V
print 'The norm of vector Z is', sparse_V.dot(dense_V)
#print sparse_V
#print dense_V
Test.assertEqualsHashed(dense_V,'b331f43b23fda1ac19f5c29ee2c843fab6e34dfa', 'Incorrect vector dense_V')
Test.assertEqualsHashed(sparse_V,'954fe70f3f9acd720219fc55a30c7c303d02f05d', 'Incorrect vector sparse_V')
Test.assertEquals(type(dense_V),pyspark.mllib.linalg.DenseVector,'Incorrect type for dense_V')
Test.assertEquals(type(sparse_V),pyspark.mllib.linalg.SparseVector,'Incorrect type for sparse_V')
Explanation: DenseVectors can be created from lists or from numpy arrays
SparseVector constructor requires three arguments: the length of the vector, an array with the indices of the non-zero coefficients, and the values of such positions (in the same order)
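For instance (a small illustration, unrelated to the exercise above), the two representations of the same length-5 vector:
dv_demo = DenseVector([0.0, 7.0, 0.0, 2.0, 0.0])
sv_demo = SparseVector(5, [1, 3], [7.0, 2.0])   # length 5, non-zeros at indices 1 and 3
print 'Dot product of the two representations:', sv_demo.dot(dv_demo)   # 53.0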
End of explanation
# Create a labeled point with a positive label and a dense feature vector.
pos = LabeledPoint(1.0, [1.0, 0.0, 3.0])
# Create a labeled point with a negative label and a sparse feature vector.
neg = LabeledPoint(0.0, sparse_V)
# You can now easily access the label and features of the vector:
print 'The label of the first labeled point is', pos.label
print 'The features of the second labeled point are', neg.features
Explanation: 3.2. Labeled point
An association of a local vector and a label
The label is a double (also in classification)
Supervised MLlib methods rely on datasets of labeled points
In regression, the label can be any real number
In classification, labels are class indices starting from zero: 0, 1, 2, ...
Labeled point constructor takes two arguments: the labels, and a numpy array / DenseVector / SparseVector containing the features.
End of explanation |
8,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 2
Step1: Problem 2
Step2: C
Step3: D
Step5: <a id='prob1ans'></a>
E | Python Code:
# To begin, define the prior as the probability of the car being behind door i (i=1,2,3), call this "pi".
# Note that pi is uniformly distributed.
p1 = 1/3.
p2 = 1/3.
p3 = 1/3.
# Next, to define the class conditional, we need three pieces of information. Supposing Monty reveals door 3,
# we must find:
# probability that Monty reveals door 3 given door 3 wins (call this c3)
# probability that Monty reveals door 3 given door 2 wins (call this c2)
# probability that Monty reveals door 3 given door 1 wins (call this c1)
#
# For this, suppose you initially choose door 1.
c3 = 0
c2 = 1.
c1 = 1/2.
#Now we need to find the marginal for the choice of Monty, call this pd3. Hint: use the sum rule of probability and
# your previous calculations.
pd3 = c3*p3 + c2*p2 + c1*p1
# The probability of winning if you stay with door 1 is:
print("Door 1: %(switch1).2f %%" %{"switch1":100*(c1*p1)/pd3})
# Finally, Bayes' rule tells us the probability of winning if you switch to door 2 is:
print("Door 2: %(switch2).2f %%" %{"switch2":100*(c2*p2)/pd3})
# The probability of winning if you switch to door 3 is:
print("Door 3: %(switch3).2f %%" %{"switch3":100*(c3*p3)/pd3})
Explanation: Lecture 2: Naive Bayes
<img src="files/figs/bayes.jpg",width=1201,height=50>
<!---

-->
<a id='prob1'></a>
<img src="https://cdn-images-1.medium.com/max/1600/1*fSv7k4vXkOYp8RN7lVeKyA.jpeg",width=500,height=250>
Problem 1: Bayes' Law and the Monty Hall Problem
Suppose you're on a game show, and you're given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1. The host (Monty), who knows what's behind each door, reveals one that has a goat behind it. He then asks if you'd like to change your choice. Is it to your advantage to switch doors? (Here we implicitly assume you want the car more than a goat)
<img src="https://cdn-images-1.medium.com/max/1600/1*fSv7k4vXkOYp8RN7lVeKyA.jpeg",width=500,height=250>
A: What does your intuition say? Is it in your best interest to switch, or does it matter?
B: Using what we've learned about Bayes' rule, let's calculate the probability of winning if you switch or stay.
End of explanation
# Distribution 1
p1Plus = 7/12.0
p1Minus = 5/12.0
# Distribution 2
p2Plus = 1/12.0
p2Minus = 11/12.0
Explanation: Problem 2: Naive Bayes on Symbols
This problem was adapted from Naive Bayes and Text Classification I: Introduction and Theory by Sebastian Raschka and a script from the CU computer science department.
Consider the following training set of 12 symbols which have been labeled as either + or -:
<br>
<img src="files/figs/shapes.png?raw=true"; width=500>
<!---

-->
Answer the following questions:
A: What are the general features associated with each training example?
Answer: The two general types of features are shape and color. For this particular training set, the observed features are shape $\in$ {square, circle} and color $\in$ {red, blue, green}.
In the next part, we'll use Naive Bayes to classify the following test example:
<img src="files/figs/bluesquare.png"; width=200>
OK, so this symbol actually appears in the training set, but let's pretend that it doesn't.
The decision rule can be defined as
Classify ${\bf x}$ as + if <br>
$p(+ ~|~ {\bf x} = [blue,~ square]) \geq p(- ~|~ {\bf x} = [blue, ~square])$ <br>
else classify sample as -
B: To begin, let's explore the estimate of an appropriate prior for + and -. We'll define two distributions:<br>
For the first, use $$\hat{p}(+)=\frac{\text{# of +}}{\text{# of classified objects}} \text{ and } \hat{p}(-)=\frac{\text{# of -}}{\text{# of classified objects}}$$ <br>
For the second, reader's choice. Take anything such that $$\hat{p}(+)\ge 0\text{, }\hat{p}(-)\ge 0\text{, and }\hat{p}(+)+\hat{p}(-)=1$$
End of explanation
# Class-conditional probabilities
pBplus = 3/7.0
pBminus = 3/5.0
pSplus = 5/7.0
pSminus = 3/5.0
Explanation: C: Assuming the features are conditionally independent of the class, identify and compute estimates of the class-conditional probabilities required to predict the class of ${\bf x} = [blue,~square]$?
Answer: The class-conditional probabilities required to classify ${\bf x} = [blue, ~square]$ are
$$
p(blue ~|~ +), ~~~~~ p(blue ~|~ -), ~~~~~ p(square ~|~ +), ~~~~~ p(square ~|~ -)
$$
From the training set, we have
$$
\hat{p}(blue ~|~ +)= \frac{3}{7}, ~~~~~ \hat{p}(blue ~|~ -) = \frac{3}{5}, ~~~~~ \hat{p}(square ~|~ +)=\frac{5}{7}, ~~~~~ \hat{p}(square ~|~ -) = \frac{3}{5}
$$
End of explanation
#Start a section for the results under prior 1
scores1=[(pBplus*pSplus*p1Plus,'+'),(pBminus*pSminus*p1Minus,'-')]
class1 = list(max(scores1))
#Beginning of results
print('\033[1m'+"Results under prior 1" + '\033[0m')
# Posterior score for + under prior 1
print("Posterior score for + under prior 1 is $ %(postPlus).2f" %{"postPlus":scores1[0][0]})
# Posterior score for - under prior 1
print("Posterior score for - under prior 1 is $ %(postMinus).2f" %{"postMinus":scores1[1][0]})
# Classification under prior 1
print("The object is then of class %s" %class1[1])
#Start a section for the results under prior 2
scores2=[(pBplus*pSplus*p2Plus,'+'),(pBminus*pSminus*p2Minus,'-')]
class2 = list(max(scores2))
#Beginning of results
print('\033[1m'+"Results under prior 2" + '\033[0m')
# Posterior score for + under prior 2
print("Posterior score for + under prior 2 is $ %(postPlus).2f" %{"postPlus":scores2[0][0]})
# Posterior score for - under prior 2
print("Posterior score for - under prior 2 is $ %(postMinus).2f" %{"postMinus":scores2[1][0]})
# Classification under prior 2
print("The object is then of class %s" %class2[1])
Explanation: D: Using the estimates computed above, compute the posterior scores for each label, and find the Naive Bayes prediction of the label for ${\bf x} = [blue,~square]$.
End of explanation
from IPython.core.display import HTML
HTML(
<style>
.MathJax nobr>span.math>span{border-left-width:0 !important};
</style>
)
from IPython.display import Image
Explanation: <a id='prob1ans'></a>
E: If you haven't already, compute the class-conditional probability scores $\hat{p}({\bf x} = [blue,~square] ~|~ +)$ and $\hat{p}({\bf x} = [blue,~square] ~|~ -)$ under the Naive Bayes assumption. How can you reconcile these values with the final prediction that would be made?
Answer: The class-conditional probability scores under the Naive Bayes assumption are
$$
\hat{p}({\bf x} = [blue,~square] ~|~ +) = \hat{p}(blue ~|~ +) \cdot \hat{p}(square ~|~ +) = \frac{3}{7} \cdot \frac{5}{7} = 0.31
$$
$$
\hat{p}({\bf x} = [blue,~square] ~|~ -) = \hat{p}(blue ~|~ -) \cdot \hat{p}(square ~|~ -) = \frac{3}{5} \cdot \frac{3}{5} = 0.36
$$
The - label actually has a higher class-conditional probability for ${\bf x}$ than the + label. We ended up predicting the + label because the prior for + was larger than the prior for -. This example demonstrates how the choice of prior can have a large influence on the prediction.
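A quick numeric check of these two scores, reusing the estimates defined earlier in the notebook:
print("p(x|+) under Naive Bayes = %.2f" % (pBplus * pSplus))    # 3/7 * 5/7 ~= 0.31
print("p(x|-) under Naive Bayes = %.2f" % (pBminus * pSminus))  # 3/5 * 3/5 = 0.36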
Helper Functions
End of explanation |
8,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Project 2
Step1: Now, can you find out the following facts about the dataset?
- Total number of students
- Number of students who passed
- Number of students who failed
- Graduation rate of the class (%)
- Number of features
Use the code block below to compute these values. Instructions/steps are marked using TODOs.
Step2: 3. Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Let's first separate our data into feature and target columns, and see if any features are non-numeric.<br/>
Note
Step3: Preprocess feature columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation.
Step4: Split data into training and test sets
So far, we have converted all categorical features into numeric values. In this next step, we split the data (both features and corresponding labels) into training and test sets.
Step5: 4. Training and Evaluating Models
Choose 3 supervised learning models that are available in scikit-learn, and appropriate for this problem. For each model
Step6: 5. Choosing the Best Model
Based on the experiments you performed earlier, in 1-2 paragraphs explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
In 1-2 paragraphs explain to the board of supervisors in layman's terms how the final model chosen is supposed to work (for example if you chose a Decision Tree or Support Vector Machine, how does it make a prediction).
Fine-tune the model. Use Gridsearch with at least one important parameter tuned and with at least 3 settings. Use the entire training set for this.
What is the model's final F<sub>1</sub> score?
Step7: 6. Training logistic regression model with the whole training set | Python Code:
# Import libraries
import numpy as np
import pandas as pd
# Read student data
student_data = pd.read_csv("student-data.csv")
print "Student data read successfully!"
# Note: The last column 'passed' is the target/label, all other are feature columns
Explanation: Project 2: Supervised Learning
Building a Student Intervention System
1. Classification vs Regression
Your goal is to identify students who might need early intervention - which type of supervised machine learning problem is this, classification or regression? Why?
2. Exploring the Data
Let's go ahead and read in the student dataset first.
To execute a code cell, click inside it and press Shift+Enter.
End of explanation
# TODO: Compute desired values - replace each '?' with an appropriate expression/function call
n_students = student_data.shape[0]
n_features = student_data.shape[1]-1
y_df = student_data['passed']
n_passed = y_df[y_df=='yes'].shape[0]
n_failed = n_students - n_passed
grad_rate = 100.0 * n_passed / n_students
print "Total number of students: {}".format(n_students)
print "Number of students who passed: {}".format(n_passed)
print "Number of students who failed: {}".format(n_failed)
print "Number of features: {}".format(n_features)
print "Graduation rate of the class: {:.2f}%".format(grad_rate)
Explanation: Now, can you find out the following facts about the dataset?
- Total number of students
- Number of students who passed
- Number of students who failed
- Graduation rate of the class (%)
- Number of features
Use the code block below to compute these values. Instructions/steps are marked using TODOs.
End of explanation
# %%capture
# Extract feature (X) and target (y) columns
feature_cols = list(student_data.columns[:-1]) # all columns but last are features
target_col = student_data.columns[-1] # last column is the target/label
print "Feature column(s):-\n{}".format(feature_cols)
print "Target column: {}".format(target_col)
X_all = student_data[feature_cols] # feature values for all students
y_all = student_data[target_col] # corresponding targets/labels
print "\nFeature values:-"
print X_all.head() # print the first 5 rows
Explanation: 3. Preparing the Data
In this section, we will prepare the data for modeling, training and testing.
Identify feature and target columns
It is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.
Let's first separate our data into feature and target columns, and see if any features are non-numeric.<br/>
Note: For this dataset, the last column ('passed') is the target or label we are trying to predict.
End of explanation
# Preprocess feature columns
def preprocess_features(X):
# output dataframe, initially empty
outX = pd.DataFrame(index=X.index)
# Check each column
for col, col_data in X.iteritems():
# If data type is non-numeric, try to replace all yes/no values with 1/0
if col_data.dtype == object:
col_data = col_data.replace(['yes', 'no'], [1, 0])
# Note: This should change the data type for yes/no columns to int
# If still non-numeric, convert to one or more dummy variables
if col_data.dtype == object:
col_data = pd.get_dummies(col_data, prefix=col) # e.g. 'school' => 'school_GP', 'school_MS'
outX = outX.join(col_data) # collect column(s) in output dataframe
return outX
X_all = preprocess_features(X_all)
# X_all = pd.get_dummies(X_all)
print "Processed feature columns ({}):-\n{}".format(len(X_all.columns), list(X_all.columns))
Explanation: Preprocess feature columns
As you can see, there are several non-numeric columns that need to be converted! Many of them are simply yes/no, e.g. internet. These can be reasonably converted into 1/0 (binary) values.
Other columns, like Mjob and Fjob, have more than two values, and are known as categorical variables. The recommended way to handle such a column is to create as many columns as possible values (e.g. Fjob_teacher, Fjob_other, Fjob_services, etc.), and assign a 1 to one of them and 0 to all others.
These generated columns are sometimes called dummy variables, and we will use the pandas.get_dummies() function to perform this transformation.
End of explanation
from sklearn.cross_validation import train_test_split
# First, decide how many training vs test samples you want
num_all = student_data.shape[0] # same as len(student_data)
num_train = 300 # about 75% of the data
num_test = num_all - num_train
# TODO: Then, select features (X) and corresponding labels (y) for the training and test sets
# Note: Shuffle the data or randomly select samples to avoid any bias due to ordering in the dataset
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, train_size=num_train, random_state=11)
# Preserve this train/test split for final evaluation of model F1 score
X_train_initial, X_test_initial, y_train_initial, y_test_initial = X_train, X_test, y_train, y_test
print "Training set: {} samples".format(X_train.shape[0])
print "Test set: {} samples".format(X_test.shape[0])
# Note: If you need a validation set, extract it from within training data
Explanation: Split data into training and test sets
So far, we have converted all categorical features into numeric values. In this next step, we split the data (both features and corresponding labels) into training and test sets.
End of explanation
# Train a model
import time
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import f1_score
def train_classifier(clf, X_train, y_train):
print "\nTraining {}...".format(clf.__class__.__name__)
start = time.time()
clf.fit(X_train, y_train)
end = time.time()
duration = end - start
print "Training time (secs): {:.4f}".format(duration)
return duration
def predict_labels(clf, features, target):
# print "Predicting labels using {}...".format(clf.__class__.__name__)
start = time.time()
y_pred = clf.predict(features)
end = time.time()
print "Prediction time (secs): {:.4f}".format(end - start)
return f1_score(target.values, y_pred, pos_label='yes')
def train_predict(clf, X_train, y_train, X_test, y_test):
print "----------"
print "Training set size: {}".format(len(X_train))
train_classifier(clf, X_train, y_train)
print "Training set:"
train_f1_score = predict_labels(clf, X_train, y_train)
print "Testing set:"
test_f1_score = predict_labels(clf, X_test, y_test)
print "F1 score for training set: {}".format(train_f1_score)
print "F1 score for test set: {}".format(test_f1_score)
return train_f1_score, test_f1_score
# TODO: Choose a model, import it and instantiate an object
# TODO: Run the helper function above for desired subsets of training data
clfs = [DecisionTreeClassifier(random_state=42),
KNeighborsClassifier(),
LogisticRegression(random_state=42)]
for clf in clfs:
print "============================================="
# Fit model to training data
train_classifier(clf, X_train, y_train) # note: using entire training set here
# Predict on training & testing set and compute F1 score
train_f1_score = predict_labels(clf, X_train, y_train)
test_f1_score = predict_labels(clf, X_test, y_test)
print "F1 score for training set: {}".format(train_f1_score)
print "F1 score for test set: {}".format(test_f1_score)
for idx, train_size in enumerate([100, 200, 300]):
X_train_temp = X_train.iloc[:train_size]
y_train_temp = y_train.iloc[:train_size]
train_predict(clf, X_train_temp, y_train_temp, X_test, y_test)
print "============================================="
# %%capture
# test the effect of training sample size on F1 score with a finer interval of 20 instead of 100
# the results are visualized in the next cell
# output in this cell is suppressed
train_f1_scores = []
test_f1_scores = []
for clf in clfs:
print "============================================="
# Fit model to training data
# note: using entire training set here
train_classifier(clf, X_train, y_train)
# Predict on training & testing set and compute F1 score
train_f1_score = predict_labels(clf, X_train, y_train)
test_f1_score = predict_labels(clf, X_test, y_test)
print "F1 score for training set: {}".format(train_f1_score)
print "F1 score for test set: {}".format(test_f1_score)
# Train and predict using different training set sizes
train_sizes = np.arange(20, X_train.shape[0]+1, 20)
train_f1_score = np.zeros(train_sizes.shape)
test_f1_score = np.zeros(train_sizes.shape)
for idx, train_size in enumerate(train_sizes):
X_train_temp = X_train.iloc[:train_size]
y_train_temp = y_train.iloc[:train_size]
train_f1_score[idx], test_f1_score[idx] = train_predict(clf, X_train_temp, y_train_temp, X_test, y_test)
# Collect f1 scores for each classifier
train_f1_scores.append(train_f1_score)
test_f1_scores.append(test_f1_score)
print "============================================="
# visualize F1 score vs training sample size
# seaborn settings from [http://bebi103.caltech.edu/2015/tutorials/t0b_intro_to_jupyter_notebooks.html]
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_formats = {'png', 'retina'}
rc = {'lines.linewidth': 2,
'axes.labelsize': 14,
'axes.titlesize': 14,
'axes.facecolor': 'DFDFE5'}
sns.set_context('notebook', font_scale=1.2, rc=rc)
sns.set_style('darkgrid', rc=rc)
plt.figure(1, figsize=(20, 5), dpi=300)
idx_subplot = 1
for idx, clf in enumerate(clfs):
# each subplot corresponds to a classifier
plt.subplot(1, len(clfs),idx_subplot)
plt.plot(train_sizes, train_f1_scores[idx], marker='o', label='F1 score ( train )')
plt.plot(train_sizes, test_f1_scores[idx], marker='s', label='F1 score ( test )')
if idx_subplot == 1: plt.ylabel('F1 score', fontweight='bold')
plt.xlabel('Training samples', fontweight='bold')
plt.title('%s' % clf.__class__.__name__, fontweight='bold')
plt.xlim(0, X_train.shape[0]+15)
plt.ylim(0.3, 1.05)
plt.yticks(np.arange(0.3, 1.05, 0.1))
plt.legend(loc='lower right')
idx_subplot += 1
plt.savefig('./F1_vs_training_size.pdf')
Explanation: 4. Training and Evaluating Models
Choose 3 supervised learning models that are available in scikit-learn, and appropriate for this problem. For each model:
What are the general applications of this model? What are its strengths and weaknesses?
Given what you know about the data so far, why did you choose this model to apply?
Fit this model to the training data, try to predict labels (for both training and test sets), and measure the F<sub>1</sub> score. Repeat this process with different training set sizes (100, 200, 300), keeping test set constant.
Produce a table showing training time, prediction time, F<sub>1</sub> score on training set and F<sub>1</sub> score on test set, for each training set size.
Note: You need to produce 3 such tables - one for each model.
End of explanation
%%capture
# Takes around 6 mins to run on a 4 Ghz, quad-core machine
# TODO: Fine-tune your model and report the best F1 score
import time
import numpy as np
import pandas as pd
from sklearn.cross_validation import train_test_split
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.grid_search import GridSearchCV
from sklearn.linear_model import SGDClassifier, LogisticRegression
from sklearn.metrics import f1_score, precision_score, recall_score, accuracy_score
from sklearn.preprocessing import LabelEncoder
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
# time the script
start = time.time()
# calc_scores (f1_score, accuracy_score, recall_score, precision_score)
def calc_scores(y, y_pred):
return (f1_score (y, y_pred),
accuracy_score (y, y_pred),
recall_score (y, y_pred),
precision_score(y, y_pred))
# import data
student_data = pd.read_csv("student-data.csv")
# extract feature (X) and target (y) columns
feature_cols = list(student_data.columns[:-1])
target_col = student_data.columns[-1]
le_y = LabelEncoder()
X_all = pd.get_dummies(student_data[feature_cols])
y_all = student_data[target_col]
y_all = le_y.fit_transform(y_all)
# initialize classifiers for evaluations of performance
clfs_set = [AdaBoostClassifier(),
DecisionTreeClassifier(),
KNeighborsClassifier(),
LogisticRegression(),
SVC(),
SGDClassifier(),
RandomForestClassifier()]
clfs_best = []
train_scores = []
test_scores = []
# building param_grids for GridSearchCV
ada_grid = {'algorithm': ['SAMME', 'SAMME.R'],
'n_estimators': np.linspace(1, 6, num=5).astype(int),
'learning_rate': (0.001, 0.01, 0.1, 1, 10)}
dt_grid = {'criterion': ['gini', 'entropy'],
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth': np.linspace(1, 10, num=10),
'min_samples_split': np.linspace(2, 10, 1),
'min_samples_leaf': (1, 2, 3, 4, 5)}
knn_grid = {'n_neighbors': (3, 4, 5, 6, 7, 8, 9),
'algorithm': ['auto', 'ball_tree', 'kd_tree'],
'p': (1, 2, 3, 4),
'leaf_size': (10, 20, 30, 40, 50),
'weights': ['uniform', 'distance']}
lr_grid = {'C': np.linspace(0.01, 0.2, num=200),
'penalty': ['l1', 'l2']}
svc_grid = {'kernel': ['rbf', 'poly'],
'gamma': np.linspace(0.01, 1, num=100)}
sgd_grid = {'loss': ['squared_hinge', 'hinge'],
'penalty': ['l2', 'l1'],
'alpha': np.linspace(0.001, 0.01, num=100)}
rf_grid = {'n_estimators': (10, 11, 12, 13, 14, 15, 16),
'max_features': ['auto'],
'criterion': ['gini', 'entropy'],
'max_depth': (3, 4, 5, 6),
'min_samples_split': (2, 3, 4, 5, 6)}
param_grids = [ada_grid, dt_grid, knn_grid, lr_grid, svc_grid, sgd_grid, rf_grid]
# run GridSearchCV for each classifier (maximizing f1-score)
# increase the train size to 80% sample size
num_runs = 25
num_clfs = len(clfs_set)
num_scores = 4
train_size = 0.80
for num_run in np.arange(num_runs):
# randomize train_split for each run
X_train, X_test, y_train, y_test = train_test_split(X_all, y_all, train_size=train_size)
print('===============================================================================')
print('Run #%d' % (num_run+1))
for clf, param_grid in zip(clfs_set, param_grids):
print("%s" % clf.__class__.__name__)
clf_opt = GridSearchCV(estimator=clf,
param_grid=param_grid,
scoring='f1',
n_jobs=-1)
clf_opt.fit(X_train, y_train)
y_train_pred = clf_opt.predict(X_train)
y_test_pred = clf_opt.predict(X_test)
# collect the bset estimator for each run
clfs_best.append(clf_opt.best_estimator_)
# calculate performance scores
train_scores.append(calc_scores(y_train, y_train_pred))
test_scores.append (calc_scores(y_test, y_test_pred))
print('Training set: F1 score %.3f | Accuracy %.3f | Recall %.3f | Precision %.3f '
% calc_scores(y_train, y_train_pred))
print('Testing set: F1 score %.3f | Accuracy %.3f | Recall %.3f | Precision %.3f\n '
% calc_scores(y_test, y_test_pred))
print('===============================================================================')
train_scores = np.array(train_scores).reshape(num_runs, num_clfs, num_scores)
test_scores = np.array(test_scores ).reshape(num_runs, num_clfs, num_scores)
# time the script
end = time.time()
print('\nTime elapsed: %.3f mins' % ((end-start)/60))
# box plots of ['F1 score', 'Accuracy', 'Recall', 'Precision'] for both training and testing set
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style("whitegrid")
score_labels = ['F1 score', 'Accuracy', 'Recall', 'Precision']
clf_labels = [s.__class__.__name__ for s in clfs_set]
for idx_score, score_label in enumerate(score_labels):
plt.figure(figsize=[14, 4])
plt.subplot(1, 2, 1)
ax = sns.boxplot(data=train_scores [:,:,idx_score], palette="RdBu")
ax.set_ylim(0.5, 1.05)
ax.set_xticklabels(())
ax.set_title(score_label+' ( train )')
plt.xticks(np.arange(num_clfs), clf_labels, rotation='45')
plt.subplot(1, 2, 2)
ax = sns.boxplot(data=test_scores [:,:,idx_score], palette="RdBu")
ax.set_ylim(0.5, 1.05)
ax.set_xticklabels(())
ax.set_title(score_label+' ( test )')
plt.xticks(np.arange(num_clfs), clf_labels, rotation='45')
# print statistics
for idx_score, score_label in enumerate(score_labels):
print('=====================================================================')
print(score_label)
print('')
print('=== training set ===')
print(pd.DataFrame(train_scores[:, :, idx_score], columns=clf_labels).describe().T[['count', 'mean', 'std', 'min', 'max']])
print('')
print('=== testing set ===')
print(pd.DataFrame(test_scores [:, :, idx_score], columns=clf_labels).describe().T[['count', 'mean', 'std', 'min', 'max']])
print('=====================================================================')
print('Best F1 score:\n')
print('=== training set ===')
print(pd.DataFrame(train_scores[:, :, 0], columns=clf_labels).describe().T['max'])
print('')
print('=== testing set ===')
print(pd.DataFrame(test_scores [:, :, 0], columns=clf_labels).describe().T['max'])
Explanation: 5. Choosing the Best Model
Based on the experiments you performed earlier, in 1-2 paragraphs explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?
In 1-2 paragraphs explain to the board of supervisors in layman's terms how the final model chosen is supposed to work (for example if you chose a Decision Tree or Support Vector Machine, how does it make a prediction).
Fine-tune the model. Use Gridsearch with at least one important parameter tuned and with at least 3 settings. Use the entire training set for this.
What is the model's final F<sub>1</sub> score?
End of explanation
# Extract the best logistic regression model from clfs_best
# Since 25 independent runs generate similar optimal parameters for logistic regression,
# the first parameter set is selected.
lr_best = (np.array(clfs_best).reshape(num_runs, num_clfs))[:,3][0]
# fit the model on the whole training set from the initial split
# le_y is the label encoder to transform "yes/no" to "1/0" for the target set
lr_best.fit(X_train_initial, le_y.transform(y_train_initial))
print("The final F1 socre using all data points as training set is %.3f. "
%f1_score(le_y.transform(y_test_initial), lr_best.predict(X_test_initial)))
Explanation: 6. Training logistic regression model with the whole training set
End of explanation |
8,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Word Sense Disambiguation
(C) 2017-2019 by Damir Cavar
Version
Step1: For a word that we want to disambiguate, we need to get all its synsets
Step2: For each synset we need to get its definition and the examples to use them as bags of words for a comparison
Step3: We will need to join a list of lists into one list, that is, we need to flatten a list of lists. To achieve this, we can use the following code
Step4: What we should do is to tokenize and part-of-speech tag the text, that is the descriptions and the examples. We can use NLTK's word_tokenize and pos_tag modules
Step5: Now we can tokenize and PoS-tag the texts
Step6: The first step that we would take with a text that contains the word that we want to disambiguate is to find its position in the token list. | Python Code:
from nltk.corpus import wordnet
Explanation: Python Word Sense Disambiguation
(C) 2017-2019 by Damir Cavar
Version: 1.2, November 2019
License: Creative Commons Attribution-ShareAlike 4.0 International License (CA BY-SA 4.0)
This is a tutorial related to the discussion of a WordSense disambiguation and various machine learning strategies discussed in the textbook Machine Learning: The Art and Science of Algorithms that Make Sense of Data by Peter Flach.
This tutorial was developed as part of my course material for the courses Machine Learning and Advanced Natural Language Processing at Indiana University.
Word Sense Disambiguation
For a simple Bayesian implementation of a Word Sense Disambiguation algorithm we will use the WordNet NLTK module. We import it in the following way:
End of explanation
mySynsets = wordnet.synsets('bank')
print(mySynsets)
Explanation: For a word that we want to disambiguate, we need to get all its synsets:
End of explanation
for s in mySynsets:
print(s.name())
text = " ".join( [s.definition()] + s.examples() )
print(text, "\n", "-" * 20)
Explanation: For each synset we need to get its definition and the examples to use them as bags of words for a comparison:
End of explanation
import itertools
lOfl = [["this"], ["is","a"], ["test"]]
print(list(itertools.chain.from_iterable(lOfl)))
Explanation: We will need to join a list of lists into one list, that is, we need to flatten a list of lists. To achieve this, we can use the following code:
End of explanation
from nltk import word_tokenize, pos_tag
Explanation: What we should do is to tokenize and part-of-speech tag the text, that is the descriptions and the examples. We can use NLTK's word_tokenize and pos_tag modules:
End of explanation
from nltk.corpus import stopwords
stopw = stopwords.words("english")
for s in mySynsets:
print(s.name())
text = pos_tag(word_tokenize(s.definition()))
text += list(itertools.chain.from_iterable([ pos_tag(word_tokenize(x)) for x in s.examples() ]))
text2 = [ x for x in text if x[0] not in stopw ]
print(text2, "\n", "-" * 20)
from nltk.stem import WordNetLemmatizer
wordnet_lemmatizer = WordNetLemmatizer()
wordnet_lemmatizer.lemmatize('dogs')
Explanation: Now we can tokenize and PoS-tag the texts:
End of explanation
example = "John saw the dogs barking at the cats."
keyword = "dog"
tokens = word_tokenize(example)
lemmas = [ wordnet_lemmatizer.lemmatize(x) for x in tokens ]
pos = -1
try:
pos = lemmas.index(keyword)
except ValueError:
pass
print("Position:", pos)
print(lemmas)
posTokens = pos_tag(tokens)
print("Lemma:", lemmas[pos])
print(" PoS:", posTokens[pos])
print(" Tag:", posTokens[pos][1])
print(" MTag:", posTokens[pos][1][0])
category = posTokens[pos][1][0]
print(category)
wType = None
if category == 'N':
wType = wordnet.NOUN
elif category == 'V':
wType = wordnet.VERB
elif category == 'J':
wType = wordnet.ADJ
elif category == 'R':
wType = wordnet.ADV
print("Type:", wType)
wordnet.synsets(keyword, pos=wType)
for s in wordnet.synsets(keyword, pos=wType):
print(s.name())
text = pos_tag(word_tokenize(s.definition()))
text += list(itertools.chain.from_iterable([ pos_tag(word_tokenize(x)) for x in s.examples() ]))
print(text, "\n", "-" * 20)
Explanation: The first step that we would take with a text that contains the word that we want to disambiguate is to find its position in the token list.
End of explanation |
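To turn the bags of words built above into an actual sense choice, the remaining step is to score each candidate synset against the context of the target word. The sketch below is an illustrative Lesk-style overlap count reusing stopw, tokens, keyword and wType from the cells above; the helper name and the plain intersection score are assumptions, not part of the original notebook.
```python
def score_synset(synset, context_tokens):
    # Bag of words from the definition plus the examples
    bag = word_tokenize(synset.definition())
    for ex in synset.examples():
        bag += word_tokenize(ex)
    bag = set(w.lower() for w in bag if w.lower() not in stopw)
    context = set(w.lower() for w in context_tokens if w.lower() not in stopw)
    # Score = number of shared (non-stopword) tokens
    return len(bag & context)

# Pick the sense whose bag overlaps most with the example sentence
scores = [(score_synset(s, tokens), s) for s in wordnet.synsets(keyword, pos=wType)]
best_score, best_sense = max(scores, key=lambda p: p[0])
print(best_sense.name(), "-", best_sense.definition())
```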
8,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reinforcement Learning
This IPy notebook acts as supporting material for Chapter 21 Reinforcement Learning of the book Artificial Intelligence
Step1: CONTENTS
Overview
Passive Reinforcement Learning
Active Reinforcement Learning
OVERVIEW
Before we start playing with the actual implementations let us review a couple of things about RL.
Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.
Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
-- Source
Step2: The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the call method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a policy(pi) and a mdp whose utility of states will be estimated. Let us import a GridMDP object from the mdp module. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting as gamma = 0.9.
Step3: Figure 17.1 (sequential_decision_environment) is a GridMDP object and is similar to the grid shown in Figure 21.1. The rewards in the terminal states are +1 and -1 and -0.04 in rest of the states. <img src="files/images/mdp.png"> Now we define a policy similar to Fig 21.1 in the book.
Step4: Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837.
Step5: The rl module also has a simple implementation to simulate iterations. The function is called run_single_trial. Now we can try our implementation. We can also compare the utility estimates learned by our agent to those obtained via value iteration.
Step6: The values calculated by value iteration
Step7: Now the values estimated by our agent after 200 trials.
Step8: We can also explore how these estimates vary with time by using plots similar to Fig 21.5a. To do so we define a function to help us with the same. We will first enable matplotlib using the inline backend.
Step9: Here is a plot of state (2,2).
Step10: It is also possible to plot multiple states on the same plot.
Step11: ACTIVE REINFORCEMENT LEARNING
Unlike Passive Reinforcement Learning in Active Reinforcement Learning we are not bound by a policy pi and we need to select our actions. In other words the agent needs to learn an optimal policy. The fundamental tradeoff the agent needs to face is that of exploration vs. exploitation.
QLearning Agent
The QLearningAgent class in the rl module implements the Agent Program described in Fig 21.8 of the AIMA Book. In Q-Learning the agent learns an action-value function Q which gives the utility of taking a given action in a particular state. Q-Learning does not required a transition model and hence is a model free method. Let us look into the source before we see some usage examples.
Step12: The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the call method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a mdp similar to the PassiveTDAgent.
Let us use the same GridMDP object we used above. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting as gamma = 0.9. The class also implements an exploration function f which returns fixed Rplus untill agent has visited state, action Ne number of times. This is the same as the one defined on page 842 of the book. The method actions_in_state returns actions possible in given state. It is useful when applying max and argmax operations.
Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837. We use Rplus = 2 and Ne = 5 as defined on page 843. Fig 21.7
Step13: Now to try out the q_agent we make use of the run_single_trial function in rl.py (which was also used above). Let us use 200 iterations.
Step14: Now let us see the Q Values. The keys are state-action pairs. Where differnt actions correspond according to
Step15: The Utility U of each state is related to Q by the following equation.
U (s) = max <sub>a</sub> Q(s, a)
Let us convert the Q Values above into U estimates.
Step16: Let us finally compare these estimates to value_iteration results. | Python Code:
from rl import *
Explanation: Reinforcement Learning
This IPy notebook acts as supporting material for Chapter 21 Reinforcement Learning of the book Artificial Intelligence: A Modern Approach. This notebook makes use of the implementations in rl.py module. We also make use of implementation of MDPs in the mdp.py module to test our agents. It might be helpful if you have already gone through the IPy notebook dealing with Markov decision process. Let us import everything from the rl module. It might be helpful to view the source of some of our implementations. Please refer to the Introductory IPy file for more details.
End of explanation
%psource PassiveTDAgent
Explanation: CONTENTS
Overview
Passive Reinforcement Learning
Active Reinforcement Learning
OVERVIEW
Before we start playing with the actual implementations let us review a couple of things about RL.
Reinforcement Learning is concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward.
Reinforcement learning differs from standard supervised learning in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected. Further, there is a focus on on-line performance, which involves finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
-- Source: Wikipedia
In summary we have a sequence of state action transitions with rewards associated with some states. Our goal is to find the optimal policy (pi) which tells us what action to take in each state.
PASSIVE REINFORCEMENT LEARNING
In passive Reinforcement Learning the agent follows a fixed policy and tries to learn the Reward function and the Transition model (if it is not aware of that).
Passive Temporal Difference Agent
The PassiveTDAgent class in the rl module implements the Agent Program (notice the usage of word Program) described in Fig 21.4 of the AIMA Book. PassiveTDAgent uses temporal differences to learn utility estimates. In simple terms we learn the difference between the states and backup the values to previous states while following a fixed policy. Let us look into the source before we see some usage examples.
End of explanation
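The temporal-difference rule that PassiveTDAgent applies after each observed transition can be written compactly. The snippet below is only an illustrative sketch of the update from Fig 21.4 (with generic names), not the module's actual source:
```python
# TD update for one observed transition s -> s1 with reward r,
# while following the fixed policy pi:
#     U[s] <- U[s] + alpha * (r + gamma * U[s1] - U[s])
def td_update(U, s, s1, r, alpha, gamma):
    U[s] = U[s] + alpha * (r + gamma * U[s1] - U[s])
    return U
```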
from mdp import sequential_decision_environment
Explanation: The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the call method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a policy(pi) and a mdp whose utility of states will be estimated. Let us import a GridMDP object from the mdp module. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting as gamma = 0.9.
End of explanation
# Action Directions
north = (0, 1)
south = (0,-1)
west = (-1, 0)
east = (1, 0)
policy = {
(0, 2): east, (1, 2): east, (2, 2): east, (3, 2): None,
(0, 1): north, (2, 1): north, (3, 1): None,
(0, 0): north, (1, 0): west, (2, 0): west, (3, 0): west,
}
Explanation: Figure 17.1 (sequential_decision_environment) is a GridMDP object and is similar to the grid shown in Figure 21.1. The rewards in the terminal states are +1 and -1 and -0.04 in rest of the states. <img src="files/images/mdp.png"> Now we define a policy similar to Fig 21.1 in the book.
End of explanation
our_agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n))
Explanation: Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837.
End of explanation
from mdp import value_iteration
Explanation: The rl module also has a simple implementation to simulate iterations. The function is called run_single_trial. Now we can try our implementation. We can also compare the utility estimates learned by our agent to those obtained via value iteration.
End of explanation
print(value_iteration(sequential_decision_environment))
Explanation: The values calculated by value iteration:
End of explanation
for i in range(200):
run_single_trial(our_agent,sequential_decision_environment)
print(our_agent.U)
Explanation: Now the values estimated by our agent after 200 trials.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
def graph_utility_estimates(agent_program, mdp, no_of_iterations, states_to_graph):
graphs = {state:[] for state in states_to_graph}
for iteration in range(1,no_of_iterations+1):
run_single_trial(agent_program, mdp)
for state in states_to_graph:
graphs[state].append((iteration, agent_program.U[state]))
for state, value in graphs.items():
state_x, state_y = zip(*value)
plt.plot(state_x, state_y, label=str(state))
plt.ylim([0,1.2])
plt.legend(loc='lower right')
plt.xlabel('Iterations')
plt.ylabel('U')
Explanation: We can also explore how these estimates vary with time by using plots similar to Fig 21.5a. To do so we define a function to help us with the same. We will first enable matplotlib using the inline backend.
End of explanation
agent = PassiveTDAgent(policy, sequential_decision_environment, alpha=lambda n: 60./(59+n))
graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2)])
Explanation: Here is a plot of state (2,2).
End of explanation
graph_utility_estimates(agent, sequential_decision_environment, 500, [(2,2), (3,2)])
Explanation: It is also possible to plot multiple states on the same plot.
End of explanation
%psource QLearningAgent
Explanation: ACTIVE REINFORCEMENT LEARNING
Unlike Passive Reinforcement Learning in Active Reinforcement Learning we are not bound by a policy pi and we need to select our actions. In other words the agent needs to learn an optimal policy. The fundamental tradeoff the agent needs to face is that of exploration vs. exploitation.
QLearning Agent
The QLearningAgent class in the rl module implements the Agent Program described in Fig 21.8 of the AIMA Book. In Q-Learning the agent learns an action-value function Q which gives the utility of taking a given action in a particular state. Q-Learning does not require a transition model and hence is a model-free method. Let us look into the source before we see some usage examples.
End of explanation
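For reference, the backup performed inside the agent has the familiar Q-learning form. The function below is an illustrative sketch of that rule (Fig 21.8) with assumed names; it is not the source of QLearningAgent:
```python
# One Q-learning backup for a transition (s, a) -> s1 with reward r.
# No transition model is needed -- only the observed next state.
def q_update(Q, s, a, s1, r, actions_in_s1, alpha, gamma):
    best_next = max(Q[(s1, a1)] for a1 in actions_in_s1)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
    return Q
```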
q_agent = QLearningAgent(sequential_decision_environment, Ne=5, Rplus=2,
alpha=lambda n: 60./(59+n))
Explanation: The Agent Program can be obtained by creating the instance of the class by passing the appropriate parameters. Because of the call method the object that is created behaves like a callable and returns an appropriate action as most Agent Programs do. To instantiate the object we need a mdp similar to the PassiveTDAgent.
Let us use the same GridMDP object we used above. Figure 17.1 (sequential_decision_environment) is similar to Figure 21.1 but has some discounting as gamma = 0.9. The class also implements an exploration function f which returns a fixed Rplus until the agent has visited the state-action pair Ne number of times. This is the same as the one defined on page 842 of the book. The method actions_in_state returns the actions possible in a given state. It is useful when applying max and argmax operations.
Let us create our object now. We also use the same alpha as given in the footnote of the book on page 837. We use Rplus = 2 and Ne = 5 as defined on page 843 (Fig 21.7).
End of explanation
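The exploration function mentioned above is simple enough to state directly; this is a sketch of the f from page 842, not the class's own method:
```python
# Optimistic exploration: treat a state-action pair as worth Rplus
# until it has been tried at least Ne times.
def f(u, n, Rplus=2, Ne=5):
    return Rplus if n < Ne else u
```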
for i in range(200):
run_single_trial(q_agent,sequential_decision_environment)
Explanation: Now to try out the q_agent we make use of the run_single_trial function in rl.py (which was also used above). Let us use 200 iterations.
End of explanation
q_agent.Q
Explanation: Now let us see the Q Values. The keys are state-action pairs, where the different actions correspond to the following directions:
north = (0, 1)
south = (0,-1)
west = (-1, 0)
east = (1, 0)
End of explanation
from collections import defaultdict

U = defaultdict(lambda: -1000.) # Very Large Negative Value for Comparison (see below)
for state_action, value in q_agent.Q.items():
state, action = state_action
if U[state] < value:
U[state] = value
U
Explanation: The Utility U of each state is related to Q by the following equation.
U (s) = max <sub>a</sub> Q(s, a)
Let us convert the Q Values above into U estimates.
End of explanation
print(value_iteration(sequential_decision_environment))
Explanation: Let us finally compare these estimates to value_iteration results.
End of explanation |
8,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WikiNetworking Stallion Demo
Introduction
This notebook creates both interactive and high resolution graphs of social networks from Wikipedia articles. Several demonstration data sets are included.
Getting started
Run the cell below first. It will install the necessary packages and define a helper function and some variables for sample data URLs.
Step1: Creating a graph and a layout
The make_graph function loads a URL that contains our graph data and creates a networkx graph. You may optionally specify a minimum_weight for links between nodes to be registered on our graph. Once we have the graph, we also need to use a layout algorithm to generate the position of the nodes. Possible layouts include
Step2: Create a small, interactive graph
Now we can create a small graph using embedded HTML. You may optionally specify a matplotlib color map and a node_size_factor.
Step3: Save an extremely high resolution graph for a Massive Pixel Environment
This will take some time to run. You may specify your color maps, font sizes and node sizes here as well. Remember - what looks good on a small interactive screen may not work well on a display like TACC's Stallion | Python Code:
!pip install git+https://github.com/jchuahtacc/WikiNetworking.git
# Just in case we don't want to re-run the crawl, we will load the data directly
import wikinetworking as wn
import networkx as nx
import matplotlib.pyplot as plt
import urllib2
import json
%matplotlib inline
bet_hiphop_directed = "https://raw.githubusercontent.com/jchuahtacc/WikiNetworking/master/lessons/bet_directed.json"
bet_hiphop_undirected = "https://raw.githubusercontent.com/jchuahtacc/WikiNetworking/master/lessons/bet_undirected.json"
forbes_400 = "https://raw.githubusercontent.com/jchuahtacc/WikiNetworking/master/lessons/forbes400.json"
nba_allstars = "https://raw.githubusercontent.com/jchuahtacc/WikiNetworking/master/lessons/nba_allstars.json"
nfl_most_games = "https://raw.githubusercontent.com/jchuahtacc/WikiNetworking/master/lessons/nfl_players.json"
marvel_cinematic_universe = "https://raw.githubusercontent.com/jchuahtacc/WikiNetworking/master/lessons/mcu_network.json"
def make_graph(url, minimum_weight=2):
graph_data = json.loads(urllib2.urlopen(url).read())
return wn.create_graph(graph_data, minimum_weight=minimum_weight)
Explanation: WikiNetworking Stallion Demo
Introduction
This notebook creates both interactive and high resolution graphs of social networks from Wikipedia articles. Several demonstration data sets are included.
Getting started
Run the cell below first. It will install the necessary packages and define a helper function and some variables for sample data URLs.
End of explanation
# Make a graph object (optionally, specify minimum_weight)
graph = make_graph(marvel_cinematic_universe, minimum_weight=3)
# Generate a layout object
layout = nx.spring_layout(graph)
Explanation: Creating a graph and a layout
The make_graph function loads a URL that contains our graph data and creates a networkx graph. You may optionally specify a minimum_weight for links between nodes to be registered on our graph. Once we have the graph, we also need to use a layout algorithm to generate the position of the nodes. Possible layouts include:
circular_layout
random_layout
shell_layout
spring_layout
spectral_layout
fruchterman_reingold_layout
End of explanation
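Any of the layouts listed above can be swapped in the same way, since each one returns a dictionary mapping node -> (x, y) position that the plotting helpers accept unchanged. For example (reusing the graph object built earlier):
```python
# Try a different layout; only this line needs to change
layout = nx.circular_layout(graph)
# layout = nx.fruchterman_reingold_layout(graph)   # another option from the list above
```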
graph_html = wn.make_interactive_graph(graph, pos=layout, cmap=plt.cm.viridis, edge_cmap=plt.cm.Blues, node_size_factor=5)
Explanation: Create a small, interactive graph
Now we can create a small graph using embedded HTML. You may optionally specify a matplotlib color map and a node_size_factor.
End of explanation
wn.save_big_graph(graph,
pos=layout,
cmap=plt.cm.viridis,
edge_cmap=plt.cm.Blues,
width=3,
height=15,
dpi=1600,
font_size=1,
node_size_factor=5,
output_file="mcu_network.png")
print("OK")
Explanation: Save an extremely high resolution graph for a Massive Pixel Environment
This will take some time to run. You may specify your color maps, font sizes and node sizes here as well. Remember - what looks good on a small interactive screen may not work well on a display like TACC's Stallion
End of explanation |
8,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Music Recommender System using Apache Spark and Python
Estimated time
Step1: Loading data
Load the three datasets into RDDs and name them artistData, artistAlias, and userArtistData. View the README, or the files themselves, to see how this data is formated. Some of the files have tab delimeters while some have space delimiters. Make sure that your userArtistData RDD contains only the canonical artist IDs.
Step2: Data Exploration
In the blank below, write some code that with find the users' total play counts. Find the three users with the highest number of total play counts (sum of all counters) and print the user ID, the total play count, and the mean play count (average number of times a user played an artist). Your output should look as follows
Step3: Splitting Data for Testing
Use the randomSplit function to divide the data (userArtistData) into
Step4: The Recommender Model
For this project, we will train the model with implicit feedback. You can read more information about this from the collaborative filtering page
Step5: Model Construction
Now we can build the best model possibly using the validation set of data and the modelEval function. Although, there are a few parameters we could optimize, for the sake of time, we will just try a few different values for the rank parameter (leave everything else at its default value, except make seed=345). Loop through the values [2, 10, 20] and figure out which one produces the highest scored based on your model evaluation function.
Note
Step6: Now, using the bestModel, we will check the results over the test data. Your result should be ~0.0507.
Step7: Trying Some Artist Recommendations
Using the best model above, predict the top 5 artists for user 1059637 using the recommendProducts function. Map the results (integer IDs) into the real artist name using artistAlias. Print the results. The output should look as follows | Python Code:
from pyspark.mllib.recommendation import *
import random
from operator import *
Explanation: Music Recommender System using Apache Spark and Python
Estimated time: 8hrs
Description
For this project, you are to create a recommender system that will recommend new musical artists to a user based on their listening history. Suggesting different songs or musical artists to a user is important to many music streaming services, such as Pandora and Spotify. In addition, this type of recommender system could also be used as a means of suggesting TV shows or movies to a user (e.g., Netflix).
To create this system you will be using Spark and the collaborative filtering technique. The instructions for completing this project will be laid out entirely in this file. You will have to implement any missing code as well as answer any questions.
Submission Instructions:
* Add all of your updates to this IPython file and do not clear any of the output you get from running your code.
* Upload this file onto moodle.
Datasets
You will be using some publicly available song data from audioscrobbler, which can be found here. However, we modified the original data files so that the code will run in a reasonable time on a single machine. The reduced data files have been suffixed with _small.txt and contains only the information relevant to the top 50 most prolific users (highest artist play counts).
The original data file user_artist_data.txt contained about 141,000 unique users, and 1.6 million unique artists. About 24.2 million users’ plays of artists are recorded, along with their count.
Note that when plays are scribbled, the client application submits the name of the artist being played. This name could be misspelled or nonstandard, and this may only be detected later. For example, "The Smiths", "Smiths, The", and "the smiths" may appear as distinct artist IDs in the data set, even though they clearly refer to the same artist. So, the data set includes artist_alias.txt, which maps artist IDs that are known misspellings or variants to the canonical ID of that artist.
The artist_data.txt file then provides a map from the canonical artist ID to the name of the artist.
Necessary Package Imports
End of explanation
#Loading data into RDD
artistData = sc.textFile("artist_data_small.txt")
artistAlias = sc.textFile("artist_alias_small.txt")
userArtistData = sc.textFile("user_artist_data_small.txt")
alias_data = artistAlias.collect()
user_data = userArtistData.collect()
artist_canonical_dict = {}
user_list = []
for line in alias_data:
artist_record = line.split("\t")
artist_canonical_dict[artist_record[0]] = artist_record[1]
#Function to get canonical artist names
def canonicalArtistID(line):
line = line.split(" ")
if line[1] in artist_canonical_dict:
return (int(line[0]),int(artist_canonical_dict[line[1]]),int(line[2]))
else:
return (int(line[0]),int(line[1]),int(line[2]))
#Getting canonical artist names
userArtistData = userArtistData.map(canonicalArtistID)
#Creating allArtists dataset to be used later during model evaluation process
allArtists = userArtistData.map(lambda x:(x[1])).collect()
allArtists = list(set(allArtists))
Explanation: Loading data
Load the three datasets into RDDs and name them artistData, artistAlias, and userArtistData. View the README, or the files themselves, to see how this data is formatted. Some of the files have tab delimiters while some have space delimiters. Make sure that your userArtistData RDD contains only the canonical artist IDs.
End of explanation
artist_data = artistAlias.collect()
user_play_count = {}
user_count_number = {}
for line in user_data:
user_record = line.split()
if user_record[0] in user_play_count:
user_play_count[str(user_record[0])] = user_play_count[user_record[0]] + int(user_record[2])
user_count_number[str(user_record[0])] = user_count_number[user_record[0]] + 1
else:
user_play_count[str(user_record[0])] = int(user_record[2])
user_count_number[str(user_record[0])] = 1
top = 0
maximum = 2
for word, count in sorted(user_play_count.iteritems(), key=lambda (k,v): (v,k), reverse = True):
if top > maximum:
break
print 'User ' + str(word) + ' has a total play count of ' + str(count) + ' and a mean play count of ' + str(count/user_count_number[word])
top += 1
Explanation: Data Exploration
In the blank below, write some code that will find the users' total play counts. Find the three users with the highest number of total play counts (sum of all counts) and print the user ID, the total play count, and the mean play count (average number of times a user played an artist). Your output should look as follows:
User 1059637 has a total play count of 674412 and a mean play count of 1878.
User 2064012 has a total play count of 548427 and a mean play count of 9455.
User 2069337 has a total play count of 393515 and a mean play count of 1519.
End of explanation
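The same totals can also be computed with RDD operations instead of a plain Python loop over the collected data. This is an illustrative sketch only, built on the (user, artist, count) triples in userArtistData:
```python
# (user, (total plays, number of artist records)) via a single aggregation
playCounts = (userArtistData
              .map(lambda x: (x[0], (x[2], 1)))
              .reduceByKey(lambda a, b: (a[0] + b[0], a[1] + b[1])))
for user, (total, n) in playCounts.top(3, key=lambda x: x[1][0]):
    print 'User ' + str(user) + ' has a total play count of ' + str(total) + \
          ' and a mean play count of ' + str(total / n)
```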
#Splitting the data into train, test and cross validation
trainData, validationData, testData = userArtistData.randomSplit([4, 4, 2], 13)
print trainData.take(3)
print validationData.take(3)
print testData.take(3)
print trainData.count()
print validationData.count()
print testData.count()
#Caching and creating ratings object
trainData = trainData.map(lambda l: Rating(*l)).cache()
validationData = validationData.map(lambda l: Rating(*l)).cache()
testData = testData.map(lambda l: Rating(*l)).cache()
Explanation: Splitting Data for Testing
Use the randomSplit function to divide the data (userArtistData) into:
* A training set, trainData, that will be used to train the model. This set should constitute 40% of the data.
* A validation set, validationData, used to perform parameter tuning. This set should constitute 40% of the data.
* A test set, testData, used for a final evaluation of the model. This set should constitute 20% of the data.
Use a random seed value of 13. Since these datasets will be repeatedly used you will probably want to persist them in memory using the cache function.
In addition, print out the first 3 elements of each set as well as their sizes; if you created these sets correctly, your output should look as follows:
[(1059637, 1000049, 1), (1059637, 1000056, 1), (1059637, 1000113, 5)]
[(1059637, 1000010, 238), (1059637, 1000062, 11), (1059637, 1000112, 423)]
[(1059637, 1000094, 1), (1059637, 1000130, 19129), (1059637, 1000139, 4)]
19817
19633
10031
End of explanation
from pyspark.mllib.recommendation import ALS, MatrixFactorizationModel, Rating
from collections import defaultdict
#model evaluation function
def modelEval(model, dataset):
global trainData
global allArtists
#Getting nonTrainArtists for each user
userArtists = defaultdict(list)
for data in trainData.collect():
userArtists[data[0]].append(data[1])
cvList = []
for key in userArtists.keys():
userArtists[key] = list(set(allArtists) - set(userArtists[key]))
for artist in userArtists[key]:
cvList.append((key, artist))
#Creating user,nonTrainArtists RDD
cvData = sc.parallelize(cvList)
userOriginal = dataset.map(lambda x:(x.user, (x.product, x.rating))).groupByKey().collect()
#prediction on the user, nonTrainArtists RDD
predictions = model.predictAll(cvData)
userPredictions = predictions.map(lambda x:(x.user, (x.product, x.rating))).groupByKey().collect()
original = {}
predictions = {}
#Getting top X artists for each user
for line in userOriginal:
original[line[0]] = sorted(line[1], key=lambda x:x[1], reverse = True)
for line in userPredictions:
predictions[line[0]] = sorted(line[1], key=lambda x:x[1], reverse = True)
similarity = []
for key in userOriginal:
similar = 0.0
pred = predictions[key[0]]
org = original[key[0]]
for value in org:
for item in pred[0:len(org)]:
if (value[0] == item[0]):
similar += 1
break
#Similarity calculation
similarity.append(float(similar/len(org)))
string = "The model score for rank " + str(rank) + " is " + str(float(sum(similarity)/len(similarity)))
print string
Explanation: The Recommender Model
For this project, we will train the model with implicit feedback. You can read more information about this from the collaborative filtering page: http://spark.apache.org/docs/latest/mllib-collaborative-filtering.html. The function you will be using has a few tunable parameters that will affect how the model is built. Therefore, to get the best model, we will do a small parameter sweep and choose the model that performs the best on the validation set
Therefore, we must first devise a way to evaluate models. Once we have a method for evaluation, we can run a parameter sweep, evaluate each combination of parameters on the validation data, and choose the optimal set of parameters. The parameters then can be used to make predictions on the test data.
Model Evaluation
Although there may be several ways to evaluate a model, we will use a simple method here. Suppose we have a model and some dataset of true artist plays for a set of users. This model can be used to predict the top X artist recommendations for a user and these recommendations can be compared to the artists that the user actually listened to (here, X will be the number of artists in the dataset of true artist plays). Then, the fraction of overlap between the top X predictions of the model and the X artists that the user actually listened to can be calculated. This process can be repeated for all users and an average value returned.
For example, suppose a model predicted [1,2,4,8] as the top X=4 artists for a user. Suppose, that user actually listened to the artists [1,3,7,8]. Then, for this user, the model would have a score of 2/4=0.5. To get the overall score, this would be performed for all users, with the average returned.
NOTE: when using the model to predict the top-X artists for a user, do not include the artists listed with that user in the training data.
Name your function modelEval and have it take a model (the output of ALS.trainImplicit) and a dataset as input. For parameter tuning, the dataset parameter should be set to the validation data (validationData). After parameter tuning, the model can be evaluated on the test data (testData).
End of explanation
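The scoring rule is easy to sanity-check on the worked example from the text; the snippet below is only that illustration, not part of the required modelEval implementation:
```python
# Worked example: predicted top-4 artists vs. artists actually played
predicted = [1, 2, 4, 8]
actual = [1, 3, 7, 8]
score = len(set(predicted) & set(actual)) / float(len(actual))
print score   # 0.5, matching the 2/4 example above
```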
#Model evaluation through different rank parameters
rank_list = [2, 10, 20]
for rank in rank_list:
model = ALS.trainImplicit(trainData, rank, seed=345)
modelEval(model,validationData)
Explanation: Model Construction
Now we can build the best model possible using the validation set of data and the modelEval function. Although there are a few parameters we could optimize, for the sake of time, we will just try a few different values for the rank parameter (leave everything else at its default value, except make seed=345). Loop through the values [2, 10, 20] and figure out which one produces the highest score based on your model evaluation function.
Note: this procedure may take several minutes to run.
For each rank value, print out the output of the modelEval function for that model. Your output should look as follows:
The model score for rank 2 is 0.090431
The model score for rank 10 is 0.095294
The model score for rank 20 is 0.090248
End of explanation
bestModel = ALS.trainImplicit(trainData, rank=10, seed=345)
modelEval(bestModel, testData)
Explanation: Now, using the bestModel, we will check the results over the test data. Your result should be ~0.0507.
End of explanation
ratings = bestModel.recommendProducts(1059637, 5)
import re
artist_data = artistData.collect()
artist_names_dict = {}
for line in artist_data:
pattern = re.match( r'(\d+)(\s+)(.*)', line)
artist_names_dict[str(pattern.group(1))] = pattern.group(3)
for i in range(0,5):
if str(ratings[i].product) in artist_canonical_dict:
artist_id = artist_canonical_dict[str(ratings[i].product)]
print "Artist " + str(i) + ": " + str(artist_names_dict[str(artist_id)])
else:
print "Artist " + str(i) + ": " + str(artist_names_dict[str(ratings[i].product)])
Explanation: Trying Some Artist Recommendations
Using the best model above, predict the top 5 artists for user 1059637 using the recommendProducts function. Map the results (integer IDs) into the real artist name using artistAlias. Print the results. The output should look as follows:
Artist 0: Brand New
Artist 1: Taking Back Sunday
Artist 2: Evanescence
Artist 3: Elliott Smith
Artist 4: blink-182
End of explanation |
8,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Neural Networks
Step2: 2 - Outline of the Assignment
You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step14: Expected Output
Step16: Expected Output | Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Convolutional Neural Networks: Step by Step
Welcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation.
Notation:
- Superscript $[l]$ denotes an object of the $l^{th}$ layer.
- Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.
Superscript $(i)$ denotes an object from the $i^{th}$ example.
Example: $x^{(i)}$ is the $i^{th}$ training example input.
Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer.
$n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$.
$n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$.
We assume that you are already familiar with numpy and/or have completed the previous courses of the specialization. Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
End of explanation
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant')
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
Explanation: 2 - Outline of the Assignment
You will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:
Convolution functions, including:
Zero Padding
Convolve window
Convolution forward
Convolution backward (optional)
Pooling functions, including:
Pooling forward
Create mask
Distribute value
Pooling backward (optional)
This notebook will ask you to implement these functions from scratch in numpy. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:
<img src="images/model.png" style="width:800px;height:300px;">
Note that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation.
3 - Convolutional Neural Networks
Although programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below.
<img src="images/conv_nn.png" style="width:350px;height:200px;">
In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself.
3.1 - Zero-Padding
Zero-padding adds zeros around the border of an image:
<img src="images/PAD.png" style="width:600px;height:400px;">
<caption><center> <u> <font color='purple'> Figure 1 </u><font color='purple'> : Zero-Padding<br> Image (3 channels, RGB) with a padding of 2. </center></caption>
The main benefits of padding are the following:
It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer.
It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels at the edges of an image.
Exercise: Implement the following function, which pads all the images of a batch of examples X with zeros. Use np.pad. Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with pad = 1 for the 2nd dimension, pad = 3 for the 4th dimension and pad = 0 for the rest, you would do:
python
a = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))
End of explanation
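As a quick illustration of the "same" convolution mentioned above: with a stride of 1, padding by (f - 1)/2 leaves the spatial size unchanged. The numbers below are purely illustrative and just plug into the output-size formula given later in the notebook:
```python
# With stride 1 and pad = (f - 1) / 2, the output height equals the input height
f, pad, stride, n_H_prev = 3, 1, 1, 32
n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
print(n_H)   # 32 -> unchanged
```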
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice and W. Add bias.
s = a_slice_prev * W + b
# Sum over all entries of the volume s
Z = s.sum()
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
Explanation: Expected Output:
<table>
<tr>
<td>
**x.shape**:
</td>
<td>
(4, 3, 3, 2)
</td>
</tr>
<tr>
<td>
**x_pad.shape**:
</td>
<td>
(4, 7, 7, 2)
</td>
</tr>
<tr>
<td>
**x[1,1]**:
</td>
<td>
[[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
</td>
</tr>
<tr>
<td>
**x_pad[1,1]**:
</td>
<td>
[[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
</td>
</tr>
</table>
3.2 - Single step of convolution
In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which:
Takes an input volume
Applies a filter at every position of the input
Outputs another volume (usually of different size)
<img src="images/Convolution_schematic.gif" style="width:500px;height:300px;">
<caption><center> <u> <font color='purple'> Figure 2 </u><font color='purple'> : Convolution operation<br> with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) </center></caption>
In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output.
Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation.
Exercise: Implement conv_single_step(). Hint.
End of explanation
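The "multiply element-wise, then sum" operation at the heart of this step can be tried on a tiny array first (the numbers are purely illustrative):
```python
# A 2x2 slice against a 2x2 filter: element-wise product followed by a sum
a_slice = np.array([[1., 2.], [3., 4.]])
w = np.array([[0., 1.], [1., 0.]])
print(np.sum(a_slice * w))   # 2 + 3 = 5.0
```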
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters['stride']
pad = hparameters['pad']
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
n_H = int((n_H_prev - f + 2 * pad) / stride + 1)
n_W = int((n_W_prev - f + 2 * pad) / stride + 1)
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev, pad)
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i] # Select ith training example's padded activation
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:, :, :, c], b[:, :, :, c])
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 1}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
Explanation: Expected Output:
<table>
<tr>
<td>
**Z**
</td>
<td>
-23.1602122025
</td>
</tr>
</table>
3.3 - Convolutional Neural Networks - Forward pass
In the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume:
<center>
<video width="620" height="440" src="images/conv_kiank.mp4" type="video/mp4" controls>
</video>
</center>
Exercise: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding.
Hint:
1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:
python
a_slice_prev = a_prev[0:2,0:2,:]
This will be useful when you will define a_slice_prev below, using the start/end indexes you will define.
2. To define a_slice you will need to first define its corners vert_start, vert_end, horiz_start and horiz_end. This figure may be helpful for you to find how each of the corners can be defined using h, w, f and s in the code below.
<img src="images/vert_horiz_kiank.png" style="width:400px;height:300px;">
<caption><center> <u> <font color='purple'> Figure 3 </u><font color='purple'> : Definition of a slice using vertical and horizontal start/end (with a 2x2 filter) <br> This figure shows only a single channel. </center></caption>
Reminder:
The formulas relating the output shape of the convolution to the input shape is:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$
$$ n_C = \text{number of filters used in the convolution}$$
For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
End of explanation
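Plugging the test cell's hyperparameters into the formulas above is a quick way to predict the result shape (a small illustrative check, separate from the graded code):
```python
# A_prev is (10, 4, 4, 3), the filters are 2x2x3x8, pad = 2, stride = 1
n_H_prev, n_W_prev, f, pad, stride = 4, 4, 2, 2, 1
n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
n_W = int((n_W_prev - f + 2 * pad) / stride) + 1
print(n_H, n_W)   # 7 7 -> Z has shape (10, 7, 7, 8)
```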
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
for i in range(m): # loop over the training examples
for h in range(n_H): # loop on the vertical axis of the output volume
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
# Compute the pooling operation on the slice. Use an if statment to differentiate the modes. Use np.max/np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 1, "f": 4}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
Explanation: Expected Output:
<table>
<tr>
<td>
**Z's mean**
</td>
<td>
0.155859324889
</td>
</tr>
<tr>
<td>
**cache_conv[0][1][2][3]**
</td>
<td>
[-0.20075807 0.18656139 0.41005165]
</td>
</tr>
</table>
Finally, a CONV layer should also contain an activation, in which case we would add the following line of code:
```python
# Convolve the window to get back one output neuron
Z[i, h, w, c] = ...
# Apply activation
A[i, h, w, c] = activation(Z[i, h, w, c])
```
You don't need to do it here.
4 - Pooling layer
The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are:
Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.
Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.
<table>
<td>
<img src="images/max_pool1.png" style="width:500px;height:300px;">
<td>
<td>
<img src="images/a_pool.png" style="width:500px;height:300px;">
<td>
</table>
These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over.
4.1 - Forward Pooling
Now, you are going to implement MAX-POOL and AVG-POOL, in the same function.
Exercise: Implement the forward pass of the pooling layer. Follow the hints in the comments below.
Reminder:
As there's no padding, the formulas binding the output shape of the pooling to the input shape is:
$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$
$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$
$$ n_C = n_{C_{prev}}$$
End of explanation
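Again, the pooling test cell's numbers can be plugged into the formulas above (illustrative check only):
```python
# A_prev is (2, 4, 4, 3) and the pooling window is f = 4 with stride = 1
n_H_prev, n_W_prev, f, stride = 4, 4, 4, 1
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
print(n_H, n_W)   # 1 1 -> A has shape (2, 1, 1, 3)
```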
def conv_backward(dZ, cache):
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
### START CODE HERE ###
# Retrieve information from "cache"
    (A_prev, W, b, hparameters) = cache
    # Retrieve dimensions from A_prev's shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
    # Retrieve dimensions from W's shape
    (f, f, n_C_prev, n_C) = W.shape
    # Retrieve information from "hparameters"
    stride = hparameters['stride']
    pad = hparameters['pad']
    # Retrieve dimensions from dZ's shape
    (m, n_H, n_W, n_C) = dZ.shape
    # Initialize dA_prev, dW, db with the correct shapes
    dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
    dW = np.zeros((f, f, n_C_prev, n_C))
    db = np.zeros((1, 1, 1, n_C))
    # Pad A_prev and dA_prev
    A_prev_pad = zero_pad(A_prev, pad)
    dA_prev_pad = zero_pad(dA_prev, pad)
    for i in range(m):                                 # loop over the training examples
        # select ith training example from A_prev_pad and dA_prev_pad
        a_prev_pad = A_prev_pad[i]
        da_prev_pad = dA_prev_pad[i]
        for h in range(n_H):                           # loop over vertical axis of the output volume
            for w in range(n_W):                       # loop over horizontal axis of the output volume
                for c in range(n_C):                   # loop over the channels of the output volume
                    # Find the corners of the current "slice"
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the slice from a_prev_pad
                    a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Update gradients for the window and the filter's parameters using the code formulas given above
                    da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
                    dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
                    db[:,:,:,c] += dZ[i, h, w, c]
        # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
        dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
Explanation: Expected Output:
<table>
<tr>
<td>
A =
</td>
<td>
[[[[ 1.74481176 1.6924546 2.10025514]]] <br/>
[[[ 1.19891788 1.51981682 2.18557541]]]]
</td>
</tr>
<tr>
<td>
A =
</td>
<td>
[[[[-0.09498456 0.11180064 -0.14263511]]] <br/>
[[[-0.09525108 0.28325018 0.33035185]]]]
</td>
</tr>
</table>
Congratulations! You have now implemented the forward passes of all the layers of a convolutional network.
The remainder of this notebook is optional, and will not be graded.
5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)
In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like.
When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we briefly presented them below.
5.1 - Convolutional layer backward pass
Let's start by implementing the backward pass for a CONV layer.
5.1.1 - Computing dA:
This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:
$$ dA += \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} W_c \times dZ_{hw} \tag{1}$$
Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices.
In code, inside the appropriate for-loops, this formula translates into:
python
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
5.1.2 - Computing dW:
This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:
$$ dW_c += \sum_{h=0}^{n_H} \sum_{w=0}^{n_W} a_{slice} \times dZ_{hw} \tag{2}$$
Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$.
In code, inside the appropriate for-loops, this formula translates into:
python
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
5.1.3 - Computing db:
This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:
$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$
As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost.
In code, inside the appropriate for-loops, this formula translates into:
python
db[:,:,:,c] += dZ[i, h, w, c]
Exercise: Implement the conv_backward function below. You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
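For reference, one possible completed version of the main loop is sketched below. It assumes padded copies A_prev_pad and dA_prev_pad were created earlier in the function, together with stride, f and pad from the cache and the dZ dimensions m, n_H, n_W, n_C; your own variable names, and the window indexing assumed by this notebook's expected output, may differ slightly.
python
for i in range(m):                                       # loop over the training examples
    a_prev_pad = A_prev_pad[i]
    da_prev_pad = dA_prev_pad[i]
    for h in range(n_H):
        for w in range(n_W):
            for c in range(n_C):
                vert_start = h * stride                  # window corners, mirroring conv_forward
                vert_end = vert_start + f
                horiz_start = w * stride
                horiz_end = horiz_start + f
                a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:, :, :, c] * dZ[i, h, w, c]
                dW[:, :, :, c] += a_slice * dZ[i, h, w, c]
                db[:, :, :, c] += dZ[i, h, w, c]
    dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]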
End of explanation
def create_mask_from_window(x):
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
### START CODE HERE ### (≈1 line)
mask = None
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
Explanation: Expected Output:
<table>
<tr>
<td>
**dA_mean**
</td>
<td>
9.60899067587
</td>
</tr>
<tr>
<td>
**dW_mean**
</td>
<td>
10.5817412755
</td>
</tr>
<tr>
<td>
**db_mean**
</td>
<td>
76.3710691956
</td>
</tr>
</table>
5.2 Pooling layer - backward pass
Next, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer.
5.2.1 Max pooling - backward pass
Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called create_mask_from_window() which does the following:
$$ X = \begin{bmatrix}
1 & 3 \\
4 & 2
\end{bmatrix} \quad \rightarrow \quad M = \begin{bmatrix}
0 & 0 \\
1 & 0
\end{bmatrix}\tag{4}$$
As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask.
Exercise: Implement create_mask_from_window(). This function will be helpful for pooling backward.
Hints:
- np.max() may be helpful. It computes the maximum of an array.
- If you have a matrix X and a scalar x: A = (X == x) will return a matrix A of the same size as X such that:
A[i,j] = True if X[i,j] = x
A[i,j] = False if X[i,j] != x
- Here, you don't need to consider cases where there are several maxima in a matrix.
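A one-line version of the body is sketched below (assuming numpy is imported as np, as elsewhere in this notebook):
python
mask = (x == np.max(x))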
End of explanation
def distribute_value(dz, shape):
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = None
# Compute the value to distribute on the matrix (≈1 line)
average = None
# Create a matrix where every entry is the "average" value (≈1 line)
a = None
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
Explanation: Expected Output:
<table>
<tr>
<td>
**x =**
</td>
<td>
[[ 1.62434536 -0.61175641 -0.52817175] <br>
[-1.07296862 0.86540763 -2.3015387 ]]
</td>
</tr>
<tr>
<td>
**mask =**
</td>
<td>
[[ True False False] <br>
[False False False]]
</td>
</tr>
</table>
Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost.
5.2.2 - Average pooling - backward pass
In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.
For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like:
$$ dZ = 1 \quad \rightarrow \quad dZ = \begin{bmatrix}
1/4 & 1/4 \\
1/4 & 1/4
\end{bmatrix}\tag{5}$$
This implies that each position in the $dZ$ matrix contributes equally to the output because in the forward pass, we took an average.
Exercise: Implement the function below to equally distribute a value dz through a matrix of dimension shape. Hint
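A possible body for distribute_value, shown only as a sketch (again assuming numpy as np):
python
(n_H, n_W) = shape
average = dz / (n_H * n_W)
a = np.ones(shape) * average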
End of explanation
def pool_backward(dA, cache, mode = "max"):
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = None
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = None
f = None
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = None
m, n_H, n_W, n_C = None
# Initialize dA_prev with zeros (≈1 line)
dA_prev = None
for i in range(None): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = None
for h in range(None): # loop on the vertical axis
for w in range(None): # loop on the horizontal axis
for c in range(None): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = None
vert_end = None
horiz_start = None
horiz_end = None
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = None
# Create the mask from a_prev_slice (≈1 line)
mask = None
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None
elif mode == "average":
# Get the value a from dA (≈1 line)
da = None
# Define the shape of the filter as fxf (≈1 line)
shape = None
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
Explanation: Expected Output:
<table>
<tr>
<td>
distributed_value =
</td>
<td>
[[ 0.5 0.5]
<br/>
[ 0.5 0.5]]
</td>
</tr>
</table>
5.2.3 Putting it together: Pooling backward
You now have everything you need to compute backward propagation on a pooling layer.
Exercise: Implement the pool_backward function in both modes ("max" and "average"). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an if/elif statement to see if the mode is equal to 'max' or 'average'. If it is equal to 'average' you should use the distribute_value() function you implemented above to create a matrix of the same shape as a_slice. Otherwise, the mode is equal to 'max', and you will create a mask with create_mask_from_window() and multiply it by the corresponding value of dZ.
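As a sketch, the core update inside the four loops could look like the following (with vert_start, vert_end, horiz_start and horiz_end computed from h, w and f as in the forward pass):
python
if mode == "max":
    a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
    mask = create_mask_from_window(a_prev_slice)
    dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += mask * dA[i, h, w, c]
elif mode == "average":
    da = dA[i, h, w, c]
    shape = (f, f)
    dA_prev[i, vert_start:vert_end, horiz_start:horiz_end, c] += distribute_value(da, shape)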
End of explanation |
8,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
Algorithms and Functions
Objectives
Understand the concepts of algorithm, code, and pseudo-code.
Connect these concepts with the creation of algorithms and functions in Python.
Motivation
Imagine that you work at an insurance company that constantly needs to assess a client's risk level from their background before negotiating a product. Is it possible to automate the process in order to work less, improve evaluation times, and make the process more efficient?
The answer to this and many other questions lies in the creation of programming algorithms.
0.1 Instructions
Instructions for installing and using an IPython notebook.
Remember
Step2: 1. Definitions and basic concepts.
By algorithm we mean a series of steps that pursue a specific goal. Intuitively, we can relate it to a cooking recipe
Step3: 2.2 Second Program
When using large numbers ($N=10^7$, for example) we notice that the previous algorithm takes a long time to run and that it checks every number. However, as soon as a divisor is found we already know the number is not prime, so the algorithm can stop immediately. This only requires one extra line, a break statement.
The algorithm for checking that a number is not prime is
Step4: For large composite numbers, execution stops at the first divisor found. However, for large prime numbers it still takes quite a while.
2.3 Third Program
One last trick we can use to check more quickly whether a number is prime is to test only part of the range of possible divisors. This is best explained with an example. Consider the number 16
Step5: 3. Measuring complexity
As we said before, once an algorithm works, one of the most important questions when reviewing it is how much time it needs to solve the problem. So the first question is
Step6: The function sin_inputs_ni_outputs runs without receiving input data and without producing output data (and it is not very useful).
Step7: The function sin_inputs runs without receiving input data but does produce output data.
Step8: The function con_input_y_output runs with input data and produces output data. Note that since Python does not use static types, the same function can be applied to different data types as long as the logic inside the function makes sense (and does not raise errors).
Step9: The function con_tuti runs with input data and default values, and produces output data. | Python Code:
IPython Notebook v4.0 for Python 3.0
Additional libraries:
Content under CC-BY 4.0 license. Code under MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
# Configuration to reload modules and libraries dynamically
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
# Style configuration
from IPython.core.display import HTML
HTML(open("./style/style.css", "r").read())
Explanation: <img src="images/utfsm.png" alt="" width="200px" align="right"/>
USM Numérica
Algorithms and Functions
Objectives
Understand the concepts of algorithm, code, and pseudo-code.
Connect these concepts with the creation of algorithms and functions in Python.
Motivation
Imagine that you work at an insurance company that constantly needs to assess a client's risk level from their background before negotiating a product. Is it possible to automate the process in order to work less, improve evaluation times, and make the process more efficient?
The answer to this and many other questions lies in the creation of programming algorithms.
0.1 Instructions
Instructions for installing and using an IPython notebook.
Remember:
* Work through the problems sequentially.
* Save frequently with Ctrl-S to avoid surprises.
* Replace FIX_ME in the code cells with the corresponding code.
* Run each code cell with Ctrl-Enter
0.2 Licensing and Configuration
Run the following cell with Ctrl-Enter.
End of explanation
N = int(input("Ingrese el numero que desea estudiar "))
if N<=1:
print("Numero N tiene que ser mayor o igual a 2")
elif 2<=N<=3:
print("{0} es primo".format(N))
else:
es_primo = True
for i in range(2, N):
if N%i==0:
es_primo = False
if es_primo:
print("{0} es primo".format(N))
else:
print("{0} es compuesto".format(N))
Explanation: 1. Definitions and basic concepts.
By algorithm we mean a series of steps that pursue a specific goal. Intuitively, we can relate it to a cooking recipe: a series of well-defined steps (leaving no room for user confusion) that must be carried out in a specific order to obtain a certain result.
In general, a good algorithm should have the following characteristics:
Its implementation should not be ambiguous for any user.
It should properly define its input data (inputs).
It should produce specific output data (outputs).
It should be executable in a finite number of steps and, therefore, in a finite amount of time. (See The Halting Problem.)
On the other hand, we call code the materialization of a given algorithm through its implementation in the proper syntax of a given programming language. So, to write good, efficient code you should try to respect the ideas above: it should run in a finite number of steps, use the language's own constructs appropriately, read and handle the input data properly, and finally deliver the desired result.
In contrast, a somewhat less structured idea is the concept of pseudo-code. By this we mean an informal description of a given algorithm in a programming-like notation. It must not, however, lose the essential characteristics of an algorithm, such as clear steps and well-defined inputs and outputs, so that it can be implemented directly on the computer.
Once an algorithm has been implemented comes the process of reviewing it. To do this properly, it is recommended to answer the following questions:
1. Does my algorithm work for all possible input data?
2. How long will my algorithm take to run? How much memory does it use on my computer?
3. Now that I know my algorithm works: can it be improved? Can I make it solve my problem faster?
2. A simple example: a program for prime numbers.
Next we study the problem of determining whether an integer $N\geq 2$ is prime or not.
Consider the following numbers: 8191 (prime), 8192 (composite), 49979687 (prime), 49979689 (composite).
2.1 First program
Our first algorithm to determine whether a number is prime is: check that no number between $2$ and $N-1$ divides $N$.
The pseudo-code is:
1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, examine the remainders of the division by every number between $2$ and $N-1$. If no remainder is zero, then $N$ is prime. Otherwise, the number is not prime.
The code is as follows:
End of explanation
N = int(input("Ingrese el numero que desea estudiar "))
if N<=1:
print("Numero N tiene que ser mayor o igual a 2")
elif 2<=N<=3:
print("{0} es primo".format(N))
else:
es_primo = True
for i in range(2, N):
if N%i==0:
es_primo = False
break
if es_primo:
print("{0} es primo".format(N))
else:
print("{0} es compuesto".format(N))
Explanation: 2.2 Second Program
When using large numbers ($N=10^7$, for example) we notice that the previous algorithm takes a long time to run and that it checks every number. However, as soon as a divisor is found we already know the number is not prime, so the algorithm can stop immediately. This only requires one extra line, a break statement.
The algorithm for checking that a number is not prime is: check whether any number between $2$ and $N-1$ divides $N$.
The pseudo-code is:
1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, examine the remainders of the division by every number between $2$ and $N-1$. If any remainder is zero, then $N$ is divisible and is not prime.
The code is as follows:
End of explanation
N = int(input("Ingrese el numero que desea estudiar "))
if N<=1:
print("Numero N tiene que ser mayor o igual a 2")
elif 2<=N<=3:
print("{0} es primo".format(N))
else:
es_primo = True
for i in range(2, int(N**0.5) + 1):
if N%i==0:
es_primo = False
break
if es_primo:
print("{0} es primo".format(N))
else:
print("{0} no es primo".format(N))
Explanation: For large composite numbers, execution stops at the first divisor found. However, for large prime numbers it still takes quite a while.
2.3 Third Program
One last trick we can use to check more quickly whether a number is prime is to test only part of the range of possible divisors. This is best explained with an example. Consider the number 16: its divisors are 2, 4 and 8. Since the number is composite, our previous algorithm quickly detects that 2 is a factor, stops, and reports that 16 is not prime. Now consider the number 17: it is prime and has no factors, so the algorithm checks the numbers 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 and 16. However, it is only necessary to check the smallest candidates, because for a factor larger than some bound to exist there would simultaneously have to be a factor smaller than that bound whose product is 17. That is, it suffices to check the small candidate factors, where the bound is given by the integer closest to $\sqrt{17}$ or, in general, $\sqrt{N}$.
The algorithm for checking that a number is not prime is: check whether any integer between $2$ and $\sqrt{N}$ divides $N$.
The pseudo-code is:
1. Read a number $N$ greater than 1.
2. If the number is 2 or 3, the number is prime. Otherwise, examine the remainders of the division by each integer between $2$ and $\sqrt{N}$. If any remainder is zero, then $N$ is divisible and is not prime.
The code is as follows:
End of explanation
def sin_inputs_ni_outputs():
print "Hola mundo"
def sin_inputs():
return "42"
def sin_outputs(a,b):
print(a)
print(b)
def con_input_y_output(a,b):
return a+b
def con_tuti(a,b,c=2):
return a+b*c
Explanation: 3. Measuring complexity
As we said before, once an algorithm works, one of the most important questions when reviewing it is how much time it needs to solve the problem. So the first question is: how can we measure the time an algorithm takes relative to the size of the problem it solves? This is usually called time complexity or scalability.
However, it is important to note that measuring the time complexity of an algorithm can be a bit tricky because: (a) the time the computer takes for different operations is generally heterogeneous, i.e., an addition is much faster than a division, and (b) different computers may run the same experiment in different amounts of time.
The standard notation for the complexity of an algorithm uses the capital letter O, so the complexity of some function can be written as O("function"), which we can interpret as the number of operations being proportional to that function times some constant. The most important complexities are:
O(1) is an algorithm of constant time complexity, i.e., the number of operations does not really change much as the problem size grows.
O(log(n)) is logarithmic complexity.
O(n) means the complexity of the problem is linear, i.e., doubling the size of the problem doubles the time required to solve it.
O($n^2$) means quadratic complexity, i.e., doubling the size of the problem quadruples the time required to solve it.
O($2^n$), and in general O($a^n$) with $a>1$, is exponential complexity.
For the algorithms we developed above:
1. The first algorithm has complexity $O(N)$: it always takes the same time.
2. The second algorithm has variable complexity: if the number is composite it takes O($1$) in the best case and O($\sqrt{N}$) in the worst case (such as 25, or any prime squared), but if the number is prime it takes O($N$), since it checks every possible divisor.
3. The third algorithm takes at most O($\sqrt{N}$) in both cases, since it only checks the smaller candidate divisors.
Challenge
A
B
C
Functions
When an algorithm is used very often, it is convenient to encapsulate it in a function. It is worth noting that in computing a function does not mean the same thing as in mathematics. A function (in Python) is simply a sequence of actions executed on a set of input variables to produce a set of output variables.
Functions are defined as follows:
def nombre_de_funcion(variable_1, variable_2, variable_opcional_1=valor_por_defecto_1, ...):
accion_1
accion_2
return valor_1, valor_2
Below are some examples.
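For instance, the O($\sqrt{N}$) primality test from section 2.3 can be wrapped in a reusable function like this (a small sketch):
def es_primo(N):
    """Return True if N is prime, using the O(sqrt(N)) idea from section 2.3."""
    if N <= 1:
        return False
    if N <= 3:
        return True
    for i in range(2, int(N**0.5) + 1):
        if N % i == 0:
            return False
    return True

print(es_primo(8191), es_primo(8192))  # True False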
End of explanation
sin_inputs_ni_outputs()
Explanation: The function sin_inputs_ni_outputs runs without receiving input data and without producing output data (and it is not very useful).
End of explanation
x = sin_inputs()
print("El sentido de la vida, el universo y todo lo demás es: "+x)
Explanation: The function sin_inputs runs without receiving input data but does produce output data.
End of explanation
print con_input_y_output("uno","dos")
print con_input_y_output(1,2)
print con_input_y_output(1.0, 2)
print con_input_y_output(1.0, 2.0)
Explanation: The function con_input_y_output runs with input data and produces output data. Note that since Python does not use static types, the same function can be applied to different data types as long as the logic applied inside the function makes sense (and does not raise errors).
End of explanation
print(con_tuti(1,2))
print(con_tuti("uno","dos"))
print(con_tuti(1,2,c=3))
print(con_tuti(1,2,3))
print(con_tuti("uno","dos",3))
Explanation: The function con_tuti runs with input data and default values, and produces output data.
End of explanation |
8,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What's New in Marvin 2.1
Marvin is Python 3.5+ compliant!
Step1: Web
Interactive NASA-Sloan Atlas (NSA) Parameter Visualization
http
Step2: Map Plotting
Completely redesigned map plotting
uses DAP bitmasks (NOVALUE, BADVALUE, MATHERROR, BADFIT, and DONOTUSE) and masks spaxels with ivar = 0
uses hatching for regions with data (i.e., a spectrum) but no measurement by the DAP
clips at 5th and 95th percentiles (10th and 90th percentiles for velocity and sigma plots)
velocity plots are symmetric about 0
minimum SNR is 1
Step3: BPT Diagrams
Classify spaxels in a given Maps object according to BPT diagrams! Will return spaxel classifications for star-forming, composite, seyfert, liner, and ambiguous. Note
Step4: the BPT uses a default minimum SNR threshold cutoff of 3 on each emission line. You can change this globally using the snr keyword. Note
Step5: or you can change it for individual emission lines. It will use the default value of 3 for all lines you do not specify. | Python Code:
import matplotlib
%matplotlib inline
# only necessary if you have a local DB
from marvin import config
config.forceDbOff()
Explanation: What's New in Marvin 2.1
Marvin is Python 3.5+ compliant!
End of explanation
from marvin.tools.cube import Cube
cube = Cube(plateifu='7957-12702')
print(cube)
list(cube.nsa.keys())
# get the mass of the galaxy
cube.nsa.elpetro_logmass
Explanation: Web
Interactive NASA-Sloan Atlas (NSA) Parameter Visualization
http://www.sdss.org/dr13/manga/manga-target-selection/nsa/
- Drag-and-drop parameter names from table onto axis name to change quantity on axis
- Click Box-and-whisker button and scroll horizontally to see distributions of selected parameters.
- Click on arrow in upper right corner of table to show all parameters.
Python snippets (Cube, Spectrum, Map, Query)
Tools
NASA-Sloan Atlas (NSA) Parameters
Cube.nsa or Maps.nsa
End of explanation
from marvin.tools.maps import Maps
maps = Maps(plateifu='7957-12702')
print(maps)
haflux = maps['emline_gflux_ha_6564']
print(haflux)
fig, ax = haflux.plot()
stvel = maps['stellar_vel']
fig, ax = stvel.plot()
stsig = maps['stellar_sigma']
fig, ax = stsig.plot()
Explanation: Map Plotting
Completely redesigned map plotting
uses DAP bitmasks (NOVALUE, BADVALUE, MATHERROR, BADFIT, and DONOTUSE) and masks spaxels with ivar = 0
uses hatching for regions with data (i.e., a spectrum) but no measurement by the DAP
clips at 5th and 95th percentiles (10th and 90th percentiles for velocity and sigma plots)
velocity plots are symmetric about 0
minimum SNR is 1
End of explanation
masks, fig = maps.get_bpt()
# this is the global mask for star-forming spaxels. It can be used to do selections on any other map property.
masks['sf']['global']
# let's look at the h-alpha flux values for the star-forming spaxels
haflux.value[masks['sf']['global']]
# let's get the stellar velocity values for the star-forming spaxels
stvel.value[masks['sf']['global']]
# the BPT uses a strict classification scheme based on the BPTs for NII, SII, and OI. If you do not want to use OI,
# you can turn it off
mask, fig = maps.get_bpt(use_oi=False)
Explanation: BPT Diagrams
Classify spaxels in a given Maps object according to BPT diagrams! Will return spaxel classifications for star-forming, composite, seyfert, liner, and ambiguous. Note: there is currently a bug in the BPT code that returns incorrect composite spaxels. This is fixed in a 2.1.2 patch that will be released soon.
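For example, since each returned mask is a boolean array over the spaxel grid, a quick way to count the spaxels in a class is simply to sum the mask (a sketch, using only the masks shown above):
import numpy as np
n_sf = np.sum(masks['sf']['global'])   # number of spaxels classified as star-forming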
End of explanation
masks, fig = maps.get_bpt(snr=5)
Explanation: the BPT uses a default minimum SNR threshold cutoff of 3 on each emission line. You can change this globally using the snr keyword. Note: this keyword will change to snrmin in the upcoming 2.1.2 patch.
End of explanation
masks, fig = maps.get_bpt(snr={'ha':5, 'sii':1})
Explanation: or you can change it for individual emission lines. It will use the default value of 3 for all lines you do not specify.
End of explanation |
8,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probabilities are a way of quantifying the possibility of the occurrence of a specific event or events given the set of all possible events.
Notationally, $P(E)$ means "the probability of event $E$."
Dependence and Independence
Events $E$ and $F$ are dependent if information about $E$ gives us information about the probability of $F$ occurring (or vice versa). If this is not the case, the variables are independent of each other.
For independent events, the probability of both occurring is the product of the probabilities of each occurring
Step2: The cumulative distribution function (cdf) gives the probability that a random variable is less than or equal to a certain value.
Step3: The Normal Distribution
The normal distribution is the definitive example of a random distribution (the classic bell curve shape). It is defined by two parameters
Step5: When $\mu = 0$ and $\sigma = 1$ we call a distribution the standard normal distribution.
Step6: The Central Limit Theorem
The central limit theorem states that the average of a large number of independent and identically distributed random variables is itself normally distributed.
So, if $x_1, ..., x_n$ are random variables with mean $\mu$ and standard deviation $\sigma$, then $\frac{1}{n}(x_1 +\ ...\ + x_n)$ will be approximately normally distributed with mean $\mu$ and standard deviation $\sigma / \sqrt{n}$. Equivalently, $\frac{(x_1 +\ ...\ + x_n)\ -\ \mu n}{\sigma \sqrt{n}}$ is approximately a standard normal variable.
A binomial random variable (Binomial(n, p)) is the sum of $n$ independent Bernoulli (Bernoulli(p)) random variables. Each of the variables equals 1 with a probability of $p$ and equals 0 with a probability of $1 - p$.
Step7: The mean of a Bernoulli(p) variable is $p$ and its standard deviation is $\sqrt{p(1 - p)}$. | Python Code:
def uniform_pdf(x):
return 1 if x >= 0 and x < 1 else 0
xs = np.arange(-1, 2, .001)
ys = [uniform_pdf(x) for x in xs]
plt.plot(xs, ys);
uniform_pdf(-0.01)
Explanation: Probabilities are a way of quantifying the possibility of the occurrence of a specific event or events given the set of all possible events.
Notationally, $P(E)$ means "the probability of event $E$."
Dependence and Independence
Events $E$ and $F$ are dependent if information about $E$ gives us information about the probability of $F$ occurring (or vice versa). If this is not the case, the variables are independent of each other.
For independent events, the probability of both occurring is the product of the probabilities of each occurring:
$$P(E, F) = P(E)P(F)$$
Conditional Probability
If events are not independent, we can express conditional probability ($E$ is conditional on $F$ or what is the probability that $E$ happens given that $F$ happens):
$$P(E\ |\ F) = P(E, F)\ /\ P(F)$$
which (if $E$ and $F$ are dependent) can be written as
$$P(E, F) = P(E\ |\ F)P(F)$$
When $E$ and $F$ are independent:
$$P(E\ |\ F) = P(E)$$
Bayes's Theorem
Conditional probabilities can be "reversed":
$$P(E \text{ | } F) = P(E, F) \text{ / } P(F) = P(F \text{ | } E)P(E) \text{ / } P(F)$$
Since $F$ can be split into the cases where $E$ happens and where it does not:
$$P(F) = P(F, E) + P(F, \neg E)$$
Leads to Bayes's Theorem:
$$P(E\ |\ F) = P(F\ |\ E)P(E)\ /\ [P(F\ |\ E)P(E) + P(F\ |\ \neg E)P(\ \neg E)]$$
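A tiny illustration (the numbers here are made up purely for the example): for a condition with 1% prevalence and a test with 1% false-positive and false-negative rates, Bayes's theorem gives roughly a 50% chance of actually having the condition after a positive test.
def bayes(p_f_given_e, p_e, p_f_given_not_e):
    # P(E | F) via Bayes's theorem
    return (p_f_given_e * p_e) / (p_f_given_e * p_e + p_f_given_not_e * (1 - p_e))

bayes(0.99, 0.01, 0.01)  # 0.5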
Random Variables
A random variable is one whose possible values can be placed on an associated probability distribution. The distribution refines the probabilities that the variable will take on each of the possible values.
Continuous Distributions
Coin flips represent a discrete distribution, i.e., one that takes on mutually exclusive values with no "in-betweens." A continuous distribution is one that allows for a full range of values along a continuum such as height or weight.
Continuous distributions use a probability density function (pdf) to define probability of a value within a given range.
The pdf for the uniform distribution is:
End of explanation
def uniform_cdf(x):
Returns probability that a value is <= x
if x < 0: return 0
elif x < 1: return x
else: return 1
xs = np.arange(-1, 2, .001)
ys = [uniform_cdf(x) for x in xs]
plt.step(xs, ys);
Explanation: The cumulative distribution function (cdf) gives the probability that a random variable is less than or equal to a certain value.
End of explanation
def normal_pdf(x, mu=0, sigma=1):
sqrt_two_pi = math.sqrt(2 * math.pi)
return (math.exp(-(x - mu)**2 / 2 / sigma**2) / (sqrt_two_pi * sigma))
xs = [x / 10.0 for x in range(-50, 50)]
plt.plot(xs, [normal_pdf(x, sigma=1) for x in xs],'-',label='mu=0,sigma=1')
plt.plot(xs, [normal_pdf(x, sigma=2) for x in xs],'--',label='mu=0,sigma=2')
plt.plot(xs, [normal_pdf(x, sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5')
plt.plot(xs, [normal_pdf(x, mu=-1) for x in xs],'-.',label='mu=-1,sigma=1')
plt.legend()
plt.title("Various Normal pdfs")
plt.show()
Explanation: The Normal Distribution
The normal distribution is the definitive example of a random distribution (the classic bell curve shape). It is defined by two parameters: the mean $\mu$ and the standard deviation $\sigma$.
The function for the distribution is:
$$f(x\ |\ \mu, \sigma) = \frac{1}{\sqrt{2\pi}\sigma}\ exp\bigg(-\frac{(x - \mu)^2}{2\sigma^2}\bigg)$$
End of explanation
def normal_cdf(x, mu=0, sigma=1):
return (1 + math.erf((x - mu) / math.sqrt(2) / sigma)) / 2
plt.plot(xs, [normal_cdf(x, sigma=1) for x in xs],'-',label='mu=0,sigma=1')
plt.plot(xs, [normal_cdf(x, sigma=2) for x in xs],'--',label='mu=0,sigma=2')
plt.plot(xs, [normal_cdf(x, sigma=0.5) for x in xs],':',label='mu=0,sigma=0.5')
plt.plot(xs, [normal_cdf(x, mu=-1) for x in xs],'-.',label='mu=-1,sigma=1')
plt.legend(loc=4) # bottom right
plt.title("Various Normal cdfs")
plt.show()
def inverse_normal_cdf(p, mu=0, sigma=1, tolerance=0.00001):
find approximate inverse using binary search
# if not standard, compute standard and rescale
if mu!= 0 or sigma != 1:
return mu + sigma * inverse_normal_cdf(p, tolerance=tolerance)
low_z, low_p = -10.0, 0
hi_z, hi_p = 10.0, 1
while hi_z - low_z > tolerance:
mid_z = (low_z + hi_z) / 2
mid_p = normal_cdf(mid_z)
if mid_p < p:
low_z, low_p = mid_z, mid_p
elif mid_p > p:
hi_z, hi_p = mid_z, mid_p
else:
break
return mid_z
Explanation: When $\mu = 0$ and $\sigma = 1$ we call a distribution the standard normal distribution.
End of explanation
def bernoulli_trial(p):
return 1 if random.random() < p else 0
def binomial(n, p):
return sum(bernoulli_trial(p) for _ in range(n))
Explanation: The Central Limit Theorem
The central limit theorem states that the average of a large number of independent and identically distributed random variables is itself normally distributed.
So, if $x_1, ..., x_n$ are random variables with mean $\mu$ and standard deviation $\sigma$, then $\frac{1}{n}(x_1 +\ ...\ + x_n)$ will be approximately normally distributed with mean $\mu$ and standard deviation $\sigma / \sqrt{n}$. Equivalently, $\frac{(x_1 +\ ...\ + x_n)\ -\ \mu n}{\sigma \sqrt{n}}$ is approximately a standard normal variable.
A binomial random variable (Binomial(n, p)) is the sum of $n$ independent Bernoulli (Bernoulli(p)) random variables. Each of the variables equals 1 with a probability of $p$ and equals 0 with a probability of $1 - p$.
End of explanation
def plot_binomial(p, n, num_points):
data = [binomial(n, p) for _ in range(num_points)]
histogram = Counter(data)
plt.bar([x - 0.04 for x in histogram.keys()],
[v / num_points for v in histogram.values()],
0.8,
color='0.75')
mu = p * n
sigma = math.sqrt(n * p * (1 - p))
xs = range(min(data), max(data) + 1)
ys = [normal_cdf(i + 0.5, mu, sigma) - normal_cdf(i - 0.5, mu, sigma) for i in xs]
plt.plot(xs, ys)
plt.title('Binomial Distribution vs. Normal Approximation')
plot_binomial(0.75, 100, 10000)
Explanation: The mean of a Bernoulli(p) variable is $p$ and its standard deviation is $\sqrt{p(1 - p)}$.
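A quick empirical check of these formulas (a sketch, reusing the binomial helper defined above and assuming math is already imported, as elsewhere in this notebook): the simulated mean and standard deviation of Binomial(n, p) draws should be close to $np$ and $\sqrt{np(1 - p)}$.
n, p = 100, 0.75
draws = [binomial(n, p) for _ in range(10000)]
mean = sum(draws) / len(draws)
std = math.sqrt(sum((d - mean) ** 2 for d in draws) / len(draws))
print(mean, n * p)                        # both ~75
print(std, math.sqrt(n * p * (1 - p)))    # both ~4.33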
End of explanation |
8,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: The Rise and Fall of the US Employment-Population Ratio
A research project at NYU's Stern School of Business.
Written by David Cai ([email protected]) under the direction of David Backus, July 2015.
Abstract
After the Great Recession, while the unemployment rate has almost returned to pre-2007 levels, the employment-population ratio has not made a similar recovery. I explore the roles of aging and other effects on employment by examining the labor force participation rate, a similar indicator that is less sensitive to cyclical variation. I also decompose the employment-population ratio into specific demographic groups to explore their contributions to the overall change.
The Employment-Population Ratio and the Unemployment Rate
Historically, for more than two decades from 1989 to 2010, the employment-population ratio has generally moved in line with the unemployment rate, albeit in an inverse direction (Figure 1). However, from 2011 onwards, these two indicators have begun to diverge. Despite the unemployment rate improving to almost pre-recession levels, the employment-population ratio has failed to increase by the same amount. This finding indicates that past 2011, some component of the employment-population ratio other than the unemployment rate must have changed.
Mathematically, the employment-population ratio can be decomposed into the product of the labor force participation rate and the employment rate of the labor force. Alternatively, the employment-population ratio can be represented as the labor force participation rate multiplied by one minus the unemployment rate. Since the unemployment rate before the crisis has been roughly equal to its level today, the change in the labor force participation rate represents the largest contribution to the decline in the employment-population ratio.
Step2: Source
Step3: Source
Step4: Source | Python Code:
Creates a figure using FRED data
Uses pandas Remote Data Access API
Documentation can be found at http://pandas.pydata.org/pandas-docs/stable/remote_data.html
%matplotlib inline
import pandas as pd
import pandas.io.data as web
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
from dateutil.relativedelta import relativedelta
start, end = dt.datetime(1989, 1, 1), dt.datetime(2015, 6, 1) # Set the date range of the data
data = web.DataReader(['EMRATIO', 'UNRATE', 'USREC'],'fred', start, end) # Choose data series you wish to download
data.columns = ['Empl Pop Ratio', 'Unemployment Rate', 'Recession']
plt.figure(figsize=plt.figaspect(0.5))
data['Empl Pop Ratio'].plot()
plt.xlabel('')
plt.text(dt.datetime(1990, 1, 1), 64.25, 'Employment-', fontsize=11, weight='bold')
plt.text(dt.datetime(1990, 1, 1), 63.75, 'Population Ratio', fontsize=11, weight='bold')
data['Unemployment Rate'].plot(secondary_y=True, color = 'r')
plt.text(dt.datetime(1990, 1, 1), 4, 'Unemployment Rate', fontsize=11, weight='bold')
def get_recession_months():
rec_dates = data['Recession']
one_vals = np.where(rec_dates == 1)
rec_startind = rec_dates.index[one_vals]
return rec_startind
def shade_recession(dates):
for date in dates:
plt.axvspan(date, date+relativedelta(months=+1), color='gray', alpha=0.1, lw=0)
shade_recession(get_recession_months())
plt.suptitle('Figure 1. Employment-Population Ratio and Unemployment, 1989-2015', fontsize=12, weight='bold')
plt.show()
Explanation: The Rise and Fall of the US Employment-Population Ratio
A research project at NYU's Stern School of Business.
Written by David Cai ([email protected]) under the direction of David Backus, July 2015.
Abstract
After the Great Recession, while the unemployment rate has almost returned to pre-2007 levels, the employment-population ratio has not made a similar recovery. I explore the roles of aging and other effects on employment by examining the labor force participation rate, a similar indicator that is less sensitive to cyclical variation. I also decompose the employment-population ratio into specific demographic groups to explore their contributions to the overall change.
The Employment-Population Ratio and the Unemployment Rate
Historically, for more than two decades from 1989 to 2010, the employment-population ratio has generally moved in line with the unemployment rate, albeit in an inverse direction (Figure 1). However, from 2011 onwards, these two indicators have begun to diverge. Despite the unemployment rate improving to almost pre-recession levels, the employment-population ratio has failed to increase by the same amount. This finding indicates that past 2011, some component of the employment-population ratio other than the unemployment rate must have changed.
Mathematically, the employment-population ratio can be decomposed into the product of the labor force participation rate and the employment rate of the labor force. Alternatively, the employment-population ratio can be represented as the labor force participation rate multiplied by one minus the unemployment rate. Since the unemployment rate before the crisis has been roughly equal to its level today, the change in the labor force participation rate represents the largest contribution to the decline in the employment-population ratio.
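This identity, EPOP = LFPR x (1 - unemployment rate), is easy to check directly from the same FRED series used in this notebook (a sketch; EMRATIO, UNRATE and CIVPART are the series already downloaded elsewhere in the notebook):
chk = web.DataReader(['EMRATIO', 'UNRATE', 'CIVPART'], 'fred', dt.datetime(1989, 1, 1), dt.datetime(2015, 6, 1))
implied_epop = chk['CIVPART'] * (1 - chk['UNRATE'] / 100)
print((chk['EMRATIO'] - implied_epop).abs().max())  # small, up to rounding in the published series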
End of explanation
start, end = dt.datetime(1976, 1, 1), dt.datetime(2015, 3, 1)
data = web.DataReader(['CIVPART', 'USREC'], 'fred', start, end)
data.columns = ['LFPR', 'Recession']
plt.figure(figsize=plt.figaspect(0.5))
data['LFPR'].plot(color = 'k')
plt.xlabel('')
shade_recession(get_recession_months())
plt.suptitle('Figure 2. Labor Force Participation Rate, 1976-2015', fontsize=12, fontweight='bold')
plt.show()
Explanation: Source: Figure created using data from the Bureau of Labor Statistics (BLS) accessed through the Federal Reserve Economic Data (FRED). This graph is updated from Moffitt (2012)’s Figure 2. Recession data is from NBER accessed through FRED.
Labor Force Participation
Since 1976, the labor force participation rate has trended upwards until hitting a peak around 2000 (Figure 2). Aaronson et al. (2006) note that this trend can be extended back to the early 1960s, with labor force participation rising from less than 60 percent to its peak of 67.3 percent in 2000. After 2000, a reversal of the previous trend emerged, with a new trend of labor force decline until today. Aaronson et al. point out that a prolonged decline in labor force participation is unprecedented in the postwar era, thus leading observers to wonder if long-term structural changes in the labor market have occurred.
After the publication of the 2006 paper, the labor force participation rate has continued to fall. Revisiting this issue, Aaronson et al. (2014) examine the decline in labor force participation from 2007 onwards. They attempt to break down the factors contributing to this decline into structural and cyclical components. The authors find that 1.3 percent, or nearly one half, of the 2.8 percent decline in the aggregate participation rate can be attributable population aging. Moreover, they note the contributions of declines in specific age/sex categories, such as among youth and adult men. Finally, they discover the existence of a cyclical component; however, its magnitude is much more uncertain. Of these three components, population aging represents the largest contributor to the labor force participation decline.
End of explanation
#file = '/Users/davidcai/lfpr.csv'
file = 'https://raw.githubusercontent.com/DaveBackus/Data_Bootcamp/master/Code/Projects/lfpr.csv'
df = pd.read_csv(file, index_col=0)
start, end = dt.datetime(1980, 1, 1), dt.datetime(2010, 1, 1)
data = web.DataReader('USREC', 'fred', start, end)
data.columns=['Recession']
# Take a simple average of the ratios for men and women
df["Age 62"] = df[["M62-64", "W62-64"]].mean(axis=1)
df["Age 65"] = df[["M65-69", "W65-69"]].mean(axis=1)
df["Age 70"] = df[["M70-74", "W70-74"]].mean(axis=1)
df["Age 75"] = df[["M75-79", "W75-79"]].mean(axis=1)
# Convert years into datetime series
df.index = df.index.astype(str) + "-1-1"
df.index = pd.to_datetime(df.index)
plt.figure(figsize=(plt.figaspect(0.5)))
df["Age 62"].plot()
df["Age 65"].plot()
df["Age 70"].plot()
df["Age 75"].plot()
plt.text(dt.datetime(2007, 1, 1), 42, 'Age 62', fontsize=11, weight='bold')
plt.text(dt.datetime(2007, 1, 1), 25, 'Age 65', fontsize=11, weight='bold')
plt.text(dt.datetime(2007, 1, 1), 15, 'Age 70', fontsize=11, weight='bold')
plt.text(dt.datetime(2007, 1, 1), 6, 'Age 75', fontsize=11, weight='bold')
shade_recession(get_recession_months())
plt.suptitle('Figure 3. Labor Force Participation Rates, By Age, 1980-2010', fontsize=12, fontweight='bold')
plt.show()
Explanation: Source: Figure created using data from the Bureau of Labor Statistics (BLS) accessed through the Federal Reserve Economic Data (FRED). This graph is adapted from Aaronson et al. (2014)’s Figure 9. Recession data is from NBER accessed through FRED.
Changes in the Age Distribution
As population aging is the largest contributor to the labor force participation decline, further analysis is necessary to understand its nature. Aaronson et al. (2014) observe that the proportion of the working age population reported as retired in the Current Population Survey (CPS) has increased by more than one percent in 2014 compared to 2007, accounting for the majority of the 1.3 percent effect of aging. The authors argue that this change is the result of a shift of the age distribution of the population, as the leading edge of the baby boom generation reaches age 62. However, on the contrary, within-age participation rates have increased since 2007, making a positive contribution to total labor force participation (Figure 3). Aaronson et al. (2014) make a similar finding, observing that within-age retirement rates have decreased, likely due to changes in social security and pensions, increased education levels, and longer life spans. These same factors can also explain the increase in the within-age participation rates among older cohorts. That said, the most important implication of Figure 3 is that labor force participation rates decrease with age. As the population age distribution shifts towards older ages, overall labor force participation can be expected to decrease.
End of explanation
start, end = dt.datetime(1970, 1, 1), dt.datetime(2015, 3, 1)
data = web.DataReader(['LNS12300001', 'EMRATIO','LNS12300002', 'USREC'], 'fred', start, end)
data.columns=['Men', 'Overall', 'Women', 'Recession']
plt.figure(figsize=plt.figaspect(0.5))
data["Men"].plot()
data["Overall"].plot()
data["Women"].plot()
plt.xlabel('')
plt.text(dt.datetime(1971, 1, 1), 71, 'Men', fontsize=11, weight='bold')
plt.text(dt.datetime(1971, 1, 1), 52, 'Overall', fontsize=11, weight='bold')
plt.text(dt.datetime(1971, 1, 1), 37, 'Women', fontsize=11, weight='bold')
shade_recession(get_recession_months())
plt.suptitle('Figure 4. Employment Population Ratios, Overall and by Sex, 1970-2015', fontsize=12, fontweight='bold')
plt.show()
Explanation: Source: Figure created using author's calculations, working on calculations from Leonesio et al. (2012), available at http://www.ssa.gov/policy/docs/ssb/v72n1/v72n1p59-text.html#chart1. Data is originally from Current Population Survey (CPS) monthly files. Recession data is from NBER accessed through FRED.
Notes: I employ an oversimplification by taking a simple average of male and female participation rates to determine overall participation rates.
Demographic Specific Employment Trends
In addition to examining the contribution of the labor force participation rate in order to explain the decline in the employment-population ratio, an alternative approach is possible. Moffitt (2012) decomposes the aggregate employment-population ratio into contributions from specific demographic groups. After breaking down the overall employment population ratio into ratios for men and women, Moffitt observes different employment trends between the sexes (Figure 4). For men, he notes, on average, the ratio declined from 1970 to 1983, remained constant from 1983 to 2000, and continued to fall from 2000 onwards. For women, the ratio increased from 1970 to 2000 but began to decrease from 2000 onwards. Moffitt observes that men's wages declined from 1999-2007 while women's wages increased over the same period, which may account for differences in their employment trends. Moffitt concludes that while that "about half of the decline [in participation rate] among men can be explained by declines in wage rates and by changes in nonlabor income and family structure,” the factors contributing to the employment decline among women are less clear. Moreover, after considering other proposed factors as taxes and government transfers, Moffitt finds their contributions insignificant and unlikely to explain the employment decline.
End of explanation |
8,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Electric Field of a Moving Charge
PROGRAM
Step1: 2 - Define Constants
To see what happens when the speed $\beta$ of the charge changes, modify the value of beta below.
Step2: 3 - Calculate Total Electric Field Magnitude
By drawing the vectors in the problem, and relevant triangles, calculate the magnitude of the electric field $E(x, y, z, t)$ at a point $(x, y, z)$ for a certain time $t$. Define a function that will do this calculation for any point in space and at any time.
Step3: 4 - Calculate Electric Field Components' Magnitude
Step4: 5 - Plot Electric Field in Three Dimensions
The magnitude of the electric field is exaggerated so that it is visible. | Python Code:
import numpy as np
import matplotlib.pylab as plt
#Import 3-dimensional plotting package.
from mpl_toolkits.mplot3d import axes3d
Explanation: Electric Field of a Moving Charge
PROGRAM: Electric field of a moving charge
CREATED: 5/30/2018
In this problem, I plot the electric field of a moving charge for different speeds $\beta = v/c$. The charge is moving along the x-axis.
- In step 1, I import a package for plotting in 3 dimensions.
- In step 2, I define the constants in the problem (in compatible units, $m$, $s$, $kg$, $C$).
- For the charge $q$, I use the charge of an electron.
- $\epsilon_{0}$ is the permittivity of free space.
- The speed of the charge $\beta$ is a value between 0 and 1, and can be changed to see what happens to the electric field.
- $c$ is the speed of light.
- $v$ is velocity of the charge in $\frac{m}{s}$, calculated by $v = \beta c$.
- In step 3, I define a function to calculate the magnitude of the electric field. The electric field vector is
$\vec{E}(\vec{r}, t) = \frac{q}{4 \pi \epsilon_{0}} \frac{1 - \beta^{2}}{(1 - \beta^{2}sin^2(\theta))^{3/2}} \frac{\hat{R}}{R^{2}}$, so this function calculates $E(\vec{r}, t) = \frac{q}{4 \pi \epsilon_{0}} \frac{1 - \beta^{2}}{R^{2} (1 - \beta^{2}sin^2(\theta))^{3/2}}$.
- In step 4, having calculated the direction vector $\hat{R}$ by hand, by drawing pictures, I define a function to calculate the magnitude of the x-component, y-component, and z-component of the electric field. Since $\hat{R} = (\frac{x - v_{x}t}{ \sqrt{(x - v_{x}t)^{2} + y^{2} + z^{2}} }, \frac{y}{\sqrt{ (x - v_{x}t)^2 + y^{2} + z^{2}} }, \frac{z}{ \sqrt{(x - v_{x}t)^2 + y^{2} + z^{2}} })$, the electric field components are
- $E_{x} = E(\vec{r}, t) \frac{x - v_{x}t}{ \sqrt{(x - v_{x}t)^{2} + y^{2} + z^{2}} } = (\frac{q}{4 \pi \epsilon_{0}} \frac{1 - \beta^{2}}{R^{2} (1 - \beta^{2}sin^2(\theta))^{3/2}}) \frac{x - v_{x}t}{ \sqrt{(x - v_{x}t)^{2} + y^{2} + z^{2}} }$
- $E_{y} = E(\vec{r}, t) \frac{y}{ \sqrt{(x - v_{x}t)^{2} + y^{2} + z^{2}} } = (\frac{q}{4 \pi \epsilon_{0}} \frac{1 - \beta^{2}}{R^{2} (1 - \beta^{2}sin^2(\theta))^{3/2}}) \frac{y}{ \sqrt{(x - v_{x}t)^{2} + y^{2} + z^{2}} }$
- $E_{z} = E(\vec{r}, t) \frac{z}{ \sqrt{(x - v_{x}t)^{2} + y^{2} + z^{2}} } = (\frac{q}{4 \pi \epsilon_{0}} \frac{1 - \beta^{2}}{R^{2} (1 - \beta^{2}sin^2(\theta))^{3/2}}) \frac{z}{ \sqrt{(x - v_{x}t)^{2} + y^{2} + z^{2}} }$
- In step 5, I plot the electric field at time $t = 0.000000005$ seconds for the moving charge. The time was chosen so that the charge, which is moving close to the speed of light, has not moved very far from the origin outside my chosen plot range. The magnitude of the electric field is highly exaggerated, so that the vectors are visible. (Each component is multiplied by $10^{11}$ $\frac{N}{C}$.)
1 - Import Packages
End of explanation
#Define constants - charge of an electron, permittivity of free space, velocity relative to speed of light.
q = 1.602 * 10**(-19)
e_0 = 8.854 * 10**(-12)
beta = 0.95
c = 2.997925 * 10**8
v = beta * c
Explanation: 2 - Define Constants
To see what happens when the speed $\beta$ of the charge changes, modify the value of beta below.
End of explanation
#Define magnitude of electric field as a function.
def E(x, y, z, t):
r = np.sqrt(x**2 + y**2 + z**2)
R = np.sqrt(r**2 + (v*t)**2 - 2 * r * (v*t) * x/np.sqrt(x**2 + y**2 + z**2))
sin_theta = np.sqrt(y**2 + z**2) / R
return q/(4*np.pi*e_0) * ((1 - beta**2)/(R**2 * (1 - beta**2 * sin_theta**2)**(3/2)))
Explanation: 3 - Calculate Total Electric Field Magnitude
By drawing the vectors in the problem, and relevant triangles, calculate the magnitude of the electric field $E(x, y, z, t)$ at a point $(x, y, z)$ for a certain time $t$. Define a function that will do this calculation for any point in space and at any time.
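As a quick sanity check on this function (a sketch, reusing the E function and the constants defined in this notebook): at $t = 0$ the field perpendicular to the direction of motion should exceed the field along the motion by a factor of $1/(1 - \beta^{2})^{3/2}$.
#Sanity check (sketch): ratio of transverse to longitudinal field at t = 0.
print(E(0.0, 1.0, 0.0, 0.0) / E(1.0, 0.0, 0.0, 0.0))
print((1 - beta**2)**(-1.5))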
End of explanation
#Define magnitude of electric field in x, y, and z directions.
def E_x(x, y, z, t):
return E(x, y, z, t) * (x - v*t)/np.sqrt((x - v*t)**2 + y**2 + z**2)
def E_y(x, y, z, t):
return E(x, y, z, t) * (y)/np.sqrt((x - v*t)**2 + y**2 + z**2)
def E_z(x, y, z, t):
return E(x, y, z, t) * (z)/np.sqrt((x - v*t)**2 + y**2 + z**2)
Explanation: 4 - Calculate Electric Field Components' Magnitude
End of explanation
#Make a three-dimensional plot of the electric field.
fig = plt.figure(figsize = (8, 8))
ax = fig.gca(projection = '3d')
#Make a grid of points where vectors of the vector field are placed.
x_lim = 20
y_lim = 20
z_lim = 20
n = 5
X, Y, Z = np.meshgrid(np.arange(-x_lim, x_lim, n),
np.arange(-y_lim, y_lim, n),
np.arange(-z_lim, z_lim, n))
#Choose a time (in seconds) to plot the electric field of the charge, where the charge is at the origin for t = 0.
t = 0.000000005
#Write the vector components. Multiply by 10^11 so that the vectors are visible.
U = E_x(X, Y, Z, t) * 10**11
V = E_y(X, Y, Z, t) * 10**11
W = E_z(X, Y, Z, t) * 10**11
#Plot the vector field.
ax.quiver(X, Y, Z, U, V, W)
#Plot the x-axis, y-axis, and z-axis
X_0 = 1000*[0]
Y_0 = 1000*[0]
Z_0 = 1000*[0]
X_axis = np.linspace(-x_lim, x_lim, 1000)
ax.plot(X_axis, Y_0, Z_0, color = 'k', linewidth = 1, alpha = 0.5)
Y_axis = np.linspace(-y_lim, y_lim, 1000)
ax.plot(X_0, Y_axis, Z_0, color = 'k', linewidth = 1, alpha = 0.5)
Z_axis = np.linspace(-z_lim, z_lim, 1000)
ax.plot(X_0, Y_0, Z_axis, color = 'k', linewidth = 1, alpha = 0.5)
#Plot the charge, moving along the x-axis.
ax.plot([v*t], [0], [0], marker = 'o', markerfacecolor = 'k', markeredgecolor = 'None', alpha = 0.8)
#Adjust the viewing angle of the plot.
ax.view_init(elev = 20, azim = 275)
#Label the plot.
ax.set_xlabel('x (meters)')
ax.set_ylabel('y (meters)')
ax.set_zlabel('z (meters)')
ax.set_title('Electric Field of a Charge Moving at Constant Velocity, $\\beta = 0.95$')
#plt.savefig('Electric Field of a Charge Moving at Constant Velocity, B = 0.95.png')
plt.show()
Explanation: 5 - Plot Electric Field in Three Dimensions
The magnitude of the electric field is exaggerated so that it is visible.
End of explanation |
8,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
(📗) ipyrad Cookbook
Step1: Connect to cluster
The code can be easily parallelized across cores on your machine, or many nodes of an HPC cluster using the ipyparallel library (see our ipyparallel tutorial). An ipcluster instance must be started for you to connect to, which can be started by running 'ipcluster start' in a terminal.
Step2: Load in your .loci data file and a tree hypothesis
Step3: Short tutorial
Step4: Look at the results
Step5: Plotting and interpreting results
Interpreting the results of D-statistic tests is actually very complicated. You cannot treat every test as if it were independent because introgression between one pair of species may cause one or both of those species to appear as if they have also introgressed with other taxa in your data set. This problem is described in great detail in this paper (Eaton et al. 2015). A good place to start, then, is to perform many tests and focus on those which have the strongest signal of admixture. Then, perform additional tests, such as partitioned D-statistics (described further below) to tease apart whether a single or multiple introgression events are likely to have occurred.
In the example plot below we find evidence of admixture between the sample 33413_thamno (black) and several other samples, but the signal is strongest with respect to 30556_thamno (test 33). It also appears that admixture is consistently detected with samples of (40578_rex & 35855_rex) when contrasted against 35236_rex (tests 14, 20, 29, 32).
Step6: Full Tutorial
Creating a baba object
The fundamental object for running abba-baba tests is the ipa.baba() object. This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you only have one data file that you want to run many tests on then you will only need to enter the path to your data once. The data file must be a '.loci' file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (min_samples_locus=4), to maximize the amount of data available for any test. Once an initial baba object is created you create different copies of that object that will inherit its parameter setttings, and which you can use to perform different tests on, like below.
Step7: Linking tests to the baba object
The next thing we need to do is to link a 'test' to each of these objects, or a list of tests. In the Short tutorial above we auto-generated a list of tests from an input tree, but to be more explicit about how things work we will write out each test by hand here. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the baba object named 'cc' below we enter two tests using a list to show how multiple tests can be linked to a single baba object.
Step8: Other parameters
Each baba object has a set of parameters associated with it that are used to filter the loci that will be used in the test and to set some other optional settings. If the 'mincov' parameter is set to 1 (the default) then loci in the data set will only be used in a test if there is at least one sample from every tip of the tree that has data for that locus. For example, in the tests above where we entered two samples to represent "p4" only one of those two samples needs to be present for the locus to be included in our analysis. If you want to require that both samples have data at the locus in order for it to be included in the analysis then you could set mincov=2. However, for the test above setting mincov=2 would filter out all of the data, since it is impossible to have a coverage of 2 for 'p3', 'p2', and 'p1', since they each have only one sample. Therefore, you can also enter the mincov parameter as a dictionary setting a different minimum for each tip taxon, which we demonstrate below for the baba object 'bb'.
Step9: Running the tests
When you execute the 'run()' command all of the tests for the object will be distributed to run in parallel on your cluster (or the cores available on your machine) as connected to your ipyclient object. The results of the tests will be stored in your baba object under the attributes 'results_table' and 'results_boots'.
Step10: The results table
The results of the tests are stored as a data frame (pandas.DataFrame) in results_table, which can be easily accessed and manipulated. The tests are listed in order and can be reference by their 'index' (the number in the left-most column). For example, below we see the results for object 'cc' tests 0 and 1. You can see which taxa were used in each test by accessing them as 'cc.tests[0]' or 'cc.tests[1]'. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.
Step11: Auto-generating tests
Entering all of the tests by hand can be pain, which is why we wrote functions to auto-generate tests given an input rooted tree, and a number of contraints on the tests to generate from that tree. It is important to add constraints on the tests otherwise the number that can be produced becomes very large very quickly. Calculating results runs pretty fast, but summarizing and interpreting thousands of results is pretty much impossible, so it is generally better to limit the tests to those which make some intuitive sense to run. You can see in this example that implementing a few contraints reduces the number of tests from 1608 to 13.
Step12: More about input file paths (i/o)
The default (required) input data file is the .loci file produced by ipyrad. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
An additional (optional) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do need at least need a hypothesis for how your samples are related in order to setup meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
Step13: Interpreting results
You can see in the results_table below that the D-statistic ranges between 0.2 and 0.4 in these tests. These values are not too terribly informative, and so we instead generally focus on the Z-score representing how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates to perform per test is 1000. Each replicate resamples nloci with replacement.
In these tests ABBA and BABA occurred with pretty equal frequency. The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the test we setup, above). | Python Code:
import ipyrad.analysis as ipa
import ipyparallel as ipp
import toytree
Explanation: (📗) ipyrad Cookbook: abba-baba admixture tests
The ipyrad.analysis Python module includes functions to calculate abba-baba admixture statistics (including several variants of these measures), to perform signifance tests, and to produce plots of results. All code in this notebook is written in Python, which you can copy/paste into an IPython terminal to execute, or, preferably, run in a Jupyter notebook like this one. See the other analysis cookbooks for instructions on using Jupyter notebooks. All of the software required for this tutorial is included with ipyrad (v.6.12+). Finally, we've written functions to generate plots for summarizing and interpreting results.
Load packages
End of explanation
ipyclient = ipp.Client()
len(ipyclient)
Explanation: Connect to cluster
The code can be easily parallelized across cores on your machine, or across many nodes of an HPC cluster, using the ipyparallel library (see our ipyparallel tutorial). An ipcluster instance must be running for you to connect to; you can start one by running 'ipcluster start' in a terminal.
End of explanation
locifile = "./analysis-ipyrad/pedic_outfiles/pedic.loci"
newick = "./analysis-raxml/RAxML_bestTree.pedic"
## parse the newick tree, re-root it, and plot it.
tre = toytree.tree(newick=newick)
tre.root(wildcard="prz")
tre.draw(node_labels=True, node_size=10);
## store rooted tree back into a newick string.
newick = tre.tree.write()
Explanation: Load in your .loci data file and a tree hypothesis
End of explanation
## create a baba object linked to a data file and newick tree
bb = ipa.baba(data=locifile, newick=newick)
## generate all possible abba-baba tests meeting a set of constraints
bb.generate_tests_from_tree(
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["33413_thamno"],
})
## run all tests linked to bb
bb.run(ipyclient)
Explanation: Short tutorial: calculating abba-baba statistics
To give a gist of what this code can do, here is a quick tutorial version, each step of which we explain in greater detail below. We first create a 'baba' analysis object that is linked to our data file, in this example we name the variable bb. Then we tell it which tests to perform, here by automatically generating a number of tests using the generate_tests_from_tree() function. And finally, we calculate the results and plot them.
End of explanation
## save the results table to a csv file
bb.results_table.to_csv("bb.abba-baba.csv", sep="\t")
## show the results table in notebook
bb.results_table
Explanation: Look at the results
End of explanation
## plot the results, showing here some plotting options.
bb.plot(height=900,
width=600,
pct_tree_y=0.1,
ewidth=2,
alpha=4.,
style_test_labels={"font-size":"10px"},
);
Explanation: Plotting and interpreting results
Interpreting the results of D-statistic tests is actually very complicated. You cannot treat every test as if it were independent because introgression between one pair of species may cause one or both of those species to appear as if they have also introgressed with other taxa in your data set. This problem is described in great detail in this paper (Eaton et al. 2015). A good place to start, then, is to perform many tests and focus on those which have the strongest signal of admixture. Then, perform additional tests, such as partitioned D-statistics (described further below) to tease apart whether a single or multiple introgression events are likely to have occurred.
In the example plot below we find evidence of admixture between the sample 33413_thamno (black) with several other samples, but the signal is strongest with respect to 30556_thamno (test 33). It also appears that admixture is consistently detected with samples of (40578_rex & 35855_rex) when contrasted against 35236_rex (tests 14, 20, 29, 32).
End of explanation
## create an initial object linked to your data in 'locifile'
aa = ipa.baba(data=locifile)
## create two other copies
bb = aa.copy()
cc = aa.copy()
## print these objects
print(aa)
print(bb)
print(cc)
Explanation: Full Tutorial
Creating a baba object
The fundamental object for running abba-baba tests is the ipa.baba() object. This stores all of the information about the data, tests, and results of your analysis, and is used to generate plots. If you only have one data file that you want to run many tests on, then you will only need to enter the path to your data once. The data file must be a '.loci' file from an ipyrad analysis. In general, you will probably want to use the largest data file possible for these tests (min_samples_locus=4), to maximize the amount of data available for any test. Once an initial baba object is created you create different copies of that object that will inherit its parameter settings, and which you can use to perform different tests on, like below.
End of explanation
aa.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["29154_superba"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
bb.tests = {
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["30686_cyathophylla"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
}
cc.tests = [
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41954_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
{
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["41478_cyathophylloides"],
"p2": ["33413_thamno"],
"p1": ["40578_rex"],
},
]
Explanation: Linking tests to the baba object
The next thing we need to do is to link a 'test' to each of these objects, or a list of tests. In the Short tutorial above we auto-generated a list of tests from an input tree, but to be more explicit about how things work we will write out each test by hand here. A test is described by a Python dictionary that tells it which samples (individuals) should represent the 'p1', 'p2', 'p3', and 'p4' taxa in the ABBA-BABA test. You can see in the example below that we set two samples to represent the outgroup taxon (p4). This means that the SNP frequency for those two samples combined will represent the p4 taxon. For the baba object named 'cc' below we enter two tests using a list to show how multiple tests can be linked to a single baba object.
End of explanation
## print params for object aa
aa.params
## set the mincov value as a dictionary for object bb
bb.params.mincov = {"p4":2, "p3":1, "p2":1, "p1":1}
bb.params
Explanation: Other parameters
Each baba object has a set of parameters associated with it that are used to filter the loci that will be used in the test and to set some other optional settings. If the 'mincov' parameter is set to 1 (the default) then loci in the data set will only be used in a test if there is at least one sample from every tip of the tree that has data for that locus. For example, in the tests above where we entered two samples to represent "p4" only one of those two samples needs to be present for the locus to be included in our analysis. If you want to require that both samples have data at the locus in order for it to be included in the analysis then you could set mincov=2. However, for the test above setting mincov=2 would filter out all of the data, since it is impossible to have a coverage of 2 for 'p3', 'p2', and 'p1', since they each have only one sample. Therefore, you can also enter the mincov parameter as a dictionary setting a different minimum for each tip taxon, which we demonstrate below for the baba object 'bb'.
End of explanation
## run tests for each of our objects
aa.run(ipyclient)
bb.run(ipyclient)
cc.run(ipyclient)
Explanation: Running the tests
When you execute the 'run()' command all of the tests for the object will be distributed to run in parallel on your cluster (or the cores available on your machine) as connected to your ipyclient object. The results of the tests will be stored in your baba object under the attributes 'results_table' and 'results_boots'.
End of explanation
## you can sort the results by Z-score
cc.results_table.sort_values(by="Z", ascending=False)
## save the table to a file
cc.results_table.to_csv("cc.abba-baba.csv")
## show the results in notebook
cc.results_table
Explanation: The results table
The results of the tests are stored as a data frame (pandas.DataFrame) in results_table, which can be easily accessed and manipulated. The tests are listed in order and can be referenced by their 'index' (the number in the left-most column). For example, below we see the results for object 'cc' tests 0 and 1. You can see which taxa were used in each test by accessing them as 'cc.tests[0]' or 'cc.tests[1]'. An even better way to see which individuals were involved in each test, however, is to use our plotting functions, which we describe further below.
End of explanation
## create a new 'copy' of your baba object and attach a treefile
dd = bb.copy()
dd.newick = newick
## generate all possible tests
dd.generate_tests_from_tree()
## a dict of constraints
constraint_dict={
"p4": ["32082_przewalskii", "33588_przewalskii"],
"p3": ["40578_rex", "35855_rex"],
}
## generate tests with constraints
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=False,
)
## 'exact' constraints are even more constrained
dd.generate_tests_from_tree(
constraint_dict=constraint_dict,
constraint_exact=True,
)
## run the dd tests
dd.run(ipyclient)
dd.plot(height=500, pct_tree_y=0.2, alpha=4., tree_style='c');
dd.results_table
Explanation: Auto-generating tests
Entering all of the tests by hand can be a pain, which is why we wrote functions to auto-generate tests given an input rooted tree, and a number of constraints on the tests to generate from that tree. It is important to add constraints on the tests, otherwise the number that can be produced becomes very large very quickly. Calculating results runs pretty fast, but summarizing and interpreting thousands of results is pretty much impossible, so it is generally better to limit the tests to those which make some intuitive sense to run. You can see in this example that implementing a few constraints reduces the number of tests from 1608 to 13.
End of explanation
## path to a locifile created by ipyrad
locifile = "./analysis-ipyrad/pedicularis_outfiles/pedicularis.loci"
## path to an unrooted tree inferred with tetrad
newick = "./analysis-tetrad/tutorial.tree"
Explanation: More about input file paths (i/o)
The default (required) input data file is the .loci file produced by ipyrad. When performing D-statistic calculations this file will be parsed to retain the maximal amount of information useful for each test.
An additional (optional) file to provide is a newick tree file. While you do not need a tree in order to run ABBA-BABA tests, you do need at least need a hypothesis for how your samples are related in order to setup meaningful tests. By loading in a tree for your data set we can use it to easily set up hypotheses to test, and to plot results on the tree.
End of explanation
print(cc.results_table)
Explanation: Interpreting results
You can see in the results_table below that the D-statistic ranges between 0.2 and 0.4 in these tests. These values are not too terribly informative, and so we instead generally focus on the Z-score representing how far the distribution of D-statistic values across bootstrap replicates deviates from its expected value of zero. The default number of bootstrap replicates to perform per test is 1000. Each replicate resamples nloci with replacement.
In these tests ABBA and BABA occurred with pretty equal frequency. The values are calculated using SNP frequencies, which is why they are floats instead of integers, and this is also why we were able to combine multiple samples to represent a single tip in the tree (e.g., see the test we setup, above).
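As a schematic illustration of the bootstrap procedure described above (plain numpy, not ipyrad's internal implementation, and with made-up per-locus counts), each replicate resamples loci with replacement, recomputes D, and the Z-score compares the mean of the replicates to their spread:
import numpy as np

def bootstrap_Z(abba_per_locus, baba_per_locus, nboots=1000, seed=0):
    # Resample loci with replacement, recompute D = (ABBA-BABA)/(ABBA+BABA)
    # for each replicate, and return |mean| / std of the bootstrap distribution.
    rng = np.random.RandomState(seed)
    nloci = len(abba_per_locus)
    boots = np.zeros(nboots)
    for i in range(nboots):
        idx = rng.randint(0, nloci, nloci)
        abba = abba_per_locus[idx].sum()
        baba = baba_per_locus[idx].sum()
        boots[i] = (abba - baba) / float(abba + baba)
    return np.abs(boots.mean()) / boots.std()

# toy per-locus ABBA/BABA counts, just to show the mechanics
abba = np.random.RandomState(1).poisson(1.2, 5000).astype(float)
baba = np.random.RandomState(2).poisson(1.0, 5000).astype(float)
print(bootstrap_Z(abba, baba))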
End of explanation |
8,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Downloading Continuous Data
This notebook demonstrates the use of EQTransformer for downloading continuous data from seismic networks.
Step1: You can use help() to learn about input parameters of each fuunction. For instance
Step2: 1) Finding the availabel stations
Defining the location and time period of interest
Step3: You can limit your data types (e.g. broadband, short period, or strong motion) of interest
Step4: This will download the information on the stations that are available based on your search criteria. You can filter out the networks or stations that you are not interested in, you can find the name of the appropriate client for your request from here
Step5: A jason file ("stataions_list.json") should have been created in your current directory. This contains information for the available stations (i.e. 4 stations in this case). Next, you can download the data for the available stations using the following function and script. This may take a few minutes.
2) Downloading the data
You can define multipel clients as the source | Python Code:
from EQTransformer.utils.downloader import makeStationList, downloadMseeds
Explanation: Downloading Continuous Data
This notebook demonstrates the use of EQTransformer for downloading continuous data from seismic networks.
End of explanation
help(makeStationList)
Explanation: You can use help() to learn about the input parameters of each function. For instance:
End of explanation
MINLAT=35.50
MAXLAT=35.60
MINLON=-117.80
MAXLON=-117.40
STIME="2019-09-01 00:00:00.00"
ETIME="2019-09-02 00:00:00.00"
Explanation: 1) Finding the available stations
Defining the location and time period of interest:
End of explanation
CHANLIST=["HH[ZNE]", "HH[Z21]", "BH[ZNE]", "EH[ZNE]", "SH[ZNE]", "HN[ZNE]", "HN[Z21]", "DP[ZNE]"]
Explanation: You can limit your data types (e.g. broadband, short period, or strong motion) of interest:
End of explanation
makeStationList(client_list=["SCEDC"],
min_lat=MINLAT,
max_lat=MAXLAT,
min_lon=MINLON,
max_lon=MAXLON,
start_time=STIME,
end_time=ETIME,
channel_list=CHANLIST,
filter_network=["SY"],
filter_station=[])
Explanation: This will download the information on the stations that are available based on your search criteria. You can filter out the networks or stations that you are not interested in. You can find the name of the appropriate client for your request from here:
End of explanation
downloadMseeds(client_list=["SCEDC", "IRIS"],
stations_json='station_list.json',
output_dir="downloads_mseeds",
start_time=STIME,
end_time=ETIME,
min_lat=MINLAT,
max_lat=MAXLAT,
min_lon=MINLON,
max_lon=MAXLON,
chunk_size=1,
channel_list=[],
n_processor=2)
Explanation: A JSON file ("station_list.json") should have been created in your current directory. This contains information for the available stations (i.e. 4 stations in this case). Next, you can download the data for the available stations using the following function and script. This may take a few minutes.
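As a quick sanity check (a minimal sketch, not part of the original workflow), you can load the JSON that makeStationList wrote and list the station codes it found; the exact per-station fields depend on the EQTransformer version, so only the keys are printed here:
import json

# Peek at the station list created by makeStationList (assumes it is in the
# current working directory, as described above).
with open("station_list.json") as f:
    station_list = json.load(f)

print("{} stations found: {}".format(len(station_list), sorted(station_list.keys())))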
2) Downloading the data
You can define multiple clients as the source:
End of explanation |
8,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import the training data
Step1: Import the data to predict
Step2: Map the true/false values to 1 and 0
hayErrPalabra, falloCaracter, palabraCorrecta
Step3: Strip the whitespace from the usuario field
Step4: Map the user to a usuarioID field
Step5: Keep only the characters between A and Z
Careful when computing the word times, since the rows that contain them get dropped
Step6: (Decide whether this is worth doing)
Map each word to a number so it can be used for training
First a dictionary is created storing each unique value, then the values are replaced by iterating over them
Step7: (Decide whether this is worth doing)
Map each character to a number so it can be used for training
First a dictionary is created storing each unique value, then the values are replaced by iterating over them
Step8: Compute the mean typing time for each character
The null characters have to be removed
Step9: Compute the mean time for pressing enter (null characters)
Step10: User, word, time
Step11: Compute the total number of errors per word
Step12: Test of the character correction time
Step13: Error in the training data, there is a negative time, CHECK
Step14: Compute the mean tiempoPalabra of each of the user's words to use it as a model
Not sure whether TiempoErrPalabra is very useful
Step15: Compute the mean time per character by word length
Step16: Extract the target
Step17: Remove leftover columns (Usuario, palabra, palabraLeida, numPalabra, tamPalabra, caracter, usuarioID)
Step18: Replace bad data with the improved versions
Step19: Split the data into training and test sets
Cross Validation
Random Forest
Step20: SVM
Step21: AdaBoost
original data
Step22: Tests with another model
Step23: Tests with the mean-time-per-character model
Step24: Training of the character model, using the data without the target
Step25: Prediccion modelo sin Cross Validation | Python Code:
data = pd.read_csv('train.csv', header=None ,delimiter=";")
feature_names = ['usuario', 'palabra', 'palabraLeida', 'tiempoCaracter',
'hayErrPalabra', 'tiempoErrPalabra', 'numPalabra','tiempoPalabra', 'tamPalabra', 'caracter',
'falloCaracter', 'palabraCorrecta']
data.columns = feature_names
Explanation: Import the training data
End of explanation
predict = pd.read_csv('predict.csv', header=None ,delimiter=";")
feature_names = ['usuario', 'palabra', 'palabraLeida', 'tiempoCaracter',
'hayErrPalabra', 'tiempoErrPalabra', 'numPalabra','tiempoPalabra', 'tamPalabra', 'caracter',
'falloCaracter', 'palabraCorrecta']
predict.columns = feature_names
data[data['caracter'] == 'Z']
Explanation: Import the data to predict
End of explanation
# Convert boolean to int: 1 for true and 0 for false
data["hayErrPalabra"] = data['hayErrPalabra'].map({False: 0, True: 1})
data["falloCaracter"] = data['falloCaracter'].map({False: 0, True: 1})
data["palabraCorrecta"] = data['palabraCorrecta'].map({False: 0, True: 1})
predict["hayErrPalabra"] = predict['hayErrPalabra'].map({False: 0, True: 1})
predict["falloCaracter"] = predict['falloCaracter'].map({False: 0, True: 1})
predict["palabraCorrecta"] = predict['palabraCorrecta'].map({False: 0, True: 1})
Explanation: Map the true/false values to 1 and 0
hayErrPalabra, falloCaracter, palabraCorrecta
End of explanation
data["usuario"] = data["usuario"].str.strip()
predict["usuario"] = predict["usuario"].str.strip()
Explanation: Strip the whitespace from the usuario field
End of explanation
data["usuarioID"] = data['usuario'].map({"Cristhian": 0, "Jesus": 1})
predict["usuarioID"] = predict['usuario'].map({"Cristhian": 0, "Jesus": 1})
Explanation: Map the user to a usuarioID field
End of explanation
data['caracter'] = data[data['caracter'].between('A', 'Z', inclusive=True)]['caracter']
predict['caracter'] = predict[predict['caracter'].between('A', 'Z', inclusive=True)]['caracter']
Explanation: Keep only the characters between A and Z
Careful when computing the word times, since the rows that contain them get dropped
End of explanation
d = {ni: indi for indi, ni in enumerate(set(data['palabra']))}
data['palabra'] = [d[ni] for ni in data['palabra']]
d = {ni: indi for indi, ni in enumerate(set(predict['palabra']))}
predict['palabra'] = [d[ni] for ni in predict['palabra']]
Explanation: (Decide whether this is worth doing)
Map each word to a number so it can be used for training
First a dictionary is created storing each unique value, then the values are replaced by iterating over them
End of explanation
d = {ni: indi for indi, ni in enumerate(set(data['caracter']))}
data['caracter'] = [d[ni] for ni in data['caracter']]
d = {ni: indi for indi, ni in enumerate(set(predict['caracter']))}
predict['caracter'] = [d[ni] for ni in predict['caracter']]
Explanation: (Decide whether this is worth doing)
Map each character to a number so it can be used for training
First a dictionary is created storing each unique value, then the values are replaced by iterating over them
End of explanation
caracter = data[~data['caracter'].isnull()][['usuario', 'caracter','tiempoCaracter','falloCaracter']]
caracter['user'] = data['usuarioID']
caracter = caracter.groupby(['usuario','caracter']).mean()
targerCaracter = caracter['user']
caracter = caracter.drop(['user'], axis=1)
#caracter.iloc[0:3]
caracter
caracterPred = predict[~predict['caracter'].isnull()][['usuario', 'caracter','tiempoCaracter','falloCaracter']]
caracterPred['user'] = predict['usuarioID']
caracterPred = caracterPred.groupby(['usuario','caracter']).mean()
targerCaracterPred = caracterPred['user']
caracterPred = caracterPred.drop(['user'], axis=1)
#caracterPred.iloc[0:3]
caracterPred
Explanation: Compute the mean typing time for each character
The null characters have to be removed
End of explanation
Enter = data[data['caracter'].isnull()][['usuario','tiempoCaracter']]
Enter.columns = ['usuario', 'tiempoEnter']
Enter = Enter.groupby(['usuario']).mean()
Enter
Explanation: Compute the mean time for pressing enter (null characters)
End of explanation
usPalTiempo = data[data['caracter'].isnull()][['usuario', 'palabra', 'tiempoPalabra', 'tiempoErrPalabra','tamPalabra']]
usPalTiempo
usPalTiempoPred = predict[predict['caracter'].isnull()][['usuario', 'palabra', 'tiempoPalabra', 'tiempoErrPalabra','tamPalabra']]
usPalTiempoPred
Explanation: User, word, time
End of explanation
falloCaracterPorPalabra = data.groupby(['usuario','palabra'])['falloCaracter'].sum()
falloCaracterPorPalabra
falloCaracterPorPalabraPred = predict.groupby(['usuario','palabra'])['falloCaracter'].sum()
falloCaracterPorPalabraPred
Explanation: Compute the total number of errors per word
End of explanation
tiempoCoreccionCaracter = data[data['falloCaracter'] > 0].groupby(['usuario','palabra'])['tiempoCaracter'].sum()
tiempoCoreccionCaracter
Explanation: Test of the character correction time
End of explanation
dataFallo = data[data['tiempoErrPalabra'] > 0]
dataFallo[dataFallo['palabra'] == "PZKOFTLILILILI"]
Explanation: Error in the training data, there is a negative time, CHECK
End of explanation
tiempoMedioPalabra = usPalTiempo.drop(['tamPalabra'], axis=1)
tiempoMedioPalabra['user'] = data['usuarioID']
#usPalTiempo2['numPalabra'] = usPalTiempo['palabra']
tiempoMedioPalabra = tiempoMedioPalabra.groupby(['usuario','palabra']).mean()
tiempoMedioPalabra['falloCaracterPorPalabra'] = falloCaracterPorPalabra
targetTM = tiempoMedioPalabra['user']
tiempoMedioPalabra = tiempoMedioPalabra.drop(['user'], axis=1)
tiempoMedioPalabra
tiempoMedioPalabraPred = usPalTiempoPred.drop(['tamPalabra'], axis=1)
tiempoMedioPalabraPred['user'] = predict['usuarioID']
#usPalTiempo2['numPalabra'] = usPalTiempo['palabra']
tiempoMedioPalabraPred = tiempoMedioPalabraPred.groupby(['usuario','palabra']).mean()
tiempoMedioPalabraPred['falloCaracterPorPalabra'] = falloCaracterPorPalabraPred
targetTM = tiempoMedioPalabraPred['user']
tiempoMedioPalabraPred = tiempoMedioPalabraPred.drop(['user'], axis=1)
tiempoMedioPalabraPred
Explanation: Compute the mean tiempoPalabra of each of the user's words to use it as a model
Not sure whether TiempoErrPalabra is very useful
End of explanation
usPalTiempo3 = usPalTiempo.drop(['palabra'], axis=1)
targetUS = usPalTiempo3['usuario']
usPalTiempo3 = usPalTiempo3.groupby(['usuario']).mean()
#usPalTiempo3['tiempoMedioCaracter'] = usPalTiempo3['tiempoPalabra'] / usPalTiempo3['tamPalabra']
usPalTiempo3
usPalTiempo3['tiempoEnter'] = Enter
usPalTiempo3
data
Explanation: Compute the mean time per character by word length
End of explanation
target = data['usuarioID']
target
targetPred = predict['usuarioID']
targetPred
Explanation: Extract the target
End of explanation
data = data.drop(['usuario','palabraLeida','numPalabra', 'tamPalabra','usuarioID'], axis=1)
predict = predict.drop(['usuario','palabraLeida','numPalabra', 'tamPalabra','usuarioID'], axis=1)
#'palabra', (check these) 'falloCaracter' 'palabraCorrecta', 'hayErrPalabra'
data
Explanation: Remove leftover columns (Usuario, palabra, palabraLeida, numPalabra, tamPalabra, caracter, usuarioID)
End of explanation
tiempoPorPalabra = data[data['tiempoErrPalabra'] > 0][['palabra','tiempoPalabra', 'tiempoErrPalabra', 'palabraCorrecta']]
tiempoPorPalabra
#data['tiempoPalabra'] = [tiempoPorPalabra['tiempoPalabra'] for tiempoPorPalabra['tiempoPalabra'] in data['tiempoPalabra']]
data2 = data.copy()
data2 = data2.drop(['tiempoPalabra', 'tiempoErrPalabra'], axis=1)
#data2["tiempoPalabra"] = data2["palabra"].map(tiempoPorPalabra)
data2
data
Explanation: Replace bad data with the improved versions
End of explanation
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
random_forest = RandomForestClassifier(n_estimators=101)
scores = cross_val_score(random_forest, data, target, cv=5)
print(scores)
print(scores.mean())
Explanation: Split the data into training and test sets
Cross Validation
Random Forest
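As an optional follow-up (a sketch, not part of the cross-validation comparison above), fitting the forest on all of the data lets us inspect which features separate the two users the most:
# Fit on the full data set just to look at feature importances.
random_forest.fit(data, target)
importances = sorted(zip(random_forest.feature_importances_, data.columns), reverse=True)
print(importances[:5])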
End of explanation
from sklearn.model_selection import cross_val_score
from sklearn import svm
svm = svm.SVC(kernel='linear', C=1)
scores = cross_val_score(svm, data, target, cv=5)
print(scores)
print(scores.mean())
Explanation: SVM
End of explanation
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
ada = AdaBoostClassifier(n_estimators=100)
scores = cross_val_score(ada, data, target, cv=5)
print(scores)
print(scores.mean())
Explanation: AdaBoost
original data
End of explanation
scores = cross_val_score(ada, tiempoMedioPalabra, targetTM, cv=5)
print(scores)
print(scores.mean())
Explanation: Tests with another model
End of explanation
scores = cross_val_score(ada, caracter, targerCaracter, cv=5)
print(scores)
print(scores.mean())
Explanation: Tests with the mean-time-per-character model
End of explanation
# not sure if this is right, since caracter was grouped by the user
ada.fit(caracter,targerCaracter)
Explanation: Training of the character model, using the data without the target
End of explanation
pred = ada.predict(caracterPred)
pred
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(targerCaracterPred, pred)
print(accuracy)
score = ada.score(caracter, caracterPred)
score
caracter.describe()
caracterPred
caracter
Explanation: Model prediction without cross-validation
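As an additional check (a small sketch), the per-class precision and recall of the character-level model can be printed alongside the accuracy computed above:
from sklearn.metrics import classification_report
print(classification_report(targerCaracterPred, pred))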
End of explanation |
8,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="4"> MOOC
Step1: 2) Run the following code that computes recursively the state probability vectors $\pi(t)$ at times $t=0,\ldots,100$. The state probability vectors can be computed recursively
Step2: 3) To compute the steady state distribution $\pi^=[\pi^_1,\pi^_2,\pi^_3]$, we must solve the system of load balance equations $\pi^=\pi^ P$ with the normalization condition $\pi^_1+\pi^_2+\pi^_3=1$. The system of equations $\pi^=\pi^ P$ is redundant
Step3: Your answers for the exercise | Python Code:
%matplotlib inline
from pylab import *
P = array([[.7, .3, 0], [.3, .5, .2], [.1, .4, .5]])
def X(x0,P=P,T=100):
# Function X supplies a trajectory of the discrete Markov chain
# with initial state x0 and transition matrix P, till time T
x = [x0]
for _ in range(T):
#####################
# supply the vector p of probabilities to transit to states
# 1,2,3 from the last calculated state
p = P[ x[len(x)-1]-1 ]
#####################
u = rand()
if u<p[0]:
x.append(1)
elif u<p[0]+p[1]:
x.append(2)
else:
x.append(3)
return array(x)
V1 = mean(X(x0=1,T=10**4))
def step(x,y,Tmax=0,color='b'):
# step function
# plots a step function representing the number
# of clients in the system at each instant
if Tmax==0:
Tmax = max(x)
x = append(x,[Tmax]) # number of clients
y = append(y,[y[-1]]) # instants of events
for k in range(len(x)-1):
vlines(x[k+1],y[k],y[k+1],color=color)
hlines(y[k],x[k],x[k+1],color=color)
T = 100
x = X(x0=1)
figure(num=None, figsize=(15, 4))
step(range(T),x)
axis(ymin=0.5,ymax=3.5)
xlabel("Time")
title("Weather")
yticks([1.0,2.0,3.0], ["Clear","Cloudy","Rainy"]);
Explanation: <p><font size="4"> MOOC: Understanding queues</font></p>
<p><font size="4"> Python simulations</p>
<p><font size="4"> Week III: Discrete time Markov chains </p>
In this lab, we consider the Markov chain of the weather forecast example of the course. We check convergence of the probability $\pi(t)$ of the chain at time $t$ to a steady state distribution $\pi^*$, independently from the initial distribution $\pi(0)$ of the chain. We solve the load balance equations to get $\pi^*$.
Let us consider the Markov chain of the weather forecast example of the course. Recall that its states 1, 2 and 3 represent clear, cloudy and rainy states, and the transition matrix is
$$
P=
\begin{pmatrix}
0.7 & 0.3 & 0\\
0.3 & 0.5 & 0.2\\
0.1 & 0.4 & 0.5
\end{pmatrix}.
$$
1) Complete below the code of the function that generates trajectories of the Markov chain. The function inputs are the chain initial state $x0$, the transition matrix $P$ and final time index $T$. Its output will be a trajectory $x$ of the chain observed between instants $0$ and $T$. Draw a trajectory of the evolution of the weather between time 0 and time $T=100$.
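As a quick empirical check (a sketch that is not part of the original exercise), we can estimate the transition matrix from a long simulated trajectory and compare it with $P$:
# Estimate the transition matrix from a long simulated trajectory.
x_long = X(x0=1, T=10**5)
P_hat = zeros((3, 3))
for a, b in zip(x_long[:-1], x_long[1:]):
    P_hat[a-1, b-1] += 1
P_hat /= P_hat.sum(axis=1, keepdims=True)   # normalize each row to probabilities
print(P_hat.round(3))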
End of explanation
T = 20
def PI(pi0,P=P,T=T):
# Function PI computes the state probability vectors
# of the Markov chain until time T
pi_ = array([pi0])
for i in range(T):
pi_ = vstack((pi_,pi_[-1] @ P))
return pi_
def plot_PI(x0):
# subplot(1,3,n+1) of successive states probabilities
# with initial state x0
pi_0 = zeros(3)
pi_0[x0-1] = 1
pi_ = PI(pi_0)
subplot(1,3,x0)
plot(pi_)
xlabel('t');axis(ymin=0,ymax=1)
if x0==1: ylabel(r"$\pi(t)$")
if x0==2: title("Evolution of $P(X_t)=1,2,3$.")
rcParams["figure.figsize"] = (10., 4.)
for x0 in range(1,4):
plot_PI(x0)
Explanation: 2) Run the following code that computes recursively the state probability vectors $\pi(t)$ at times $t=0,\ldots,100$. The state probability vectors can be computed recursively: $\pi(t+1)=\pi(t) P$. Check that, when changing the initial state $x0$, $\pi(t)$ still converges rapidly to the same asymptotic vector $\pi^*$ as $t$ increases.
End of explanation
from scipy.linalg import solve
####################
# complete the code to get the steady state distribution
# of the discrete time Markov chain
pi_ = solve([[-.3, .3, 0.1], [.3, -.5, .4], [1, 1, 1]],[0,0,1])
print("steady state distribution: pi* =",pi_)
####################
V2,V3 = pi_[0],pi_[1]
Explanation: 3) To compute the steady state distribution $\pi^*=[\pi^*_1,\pi^*_2,\pi^*_3]$, we must solve the system of load balance equations $\pi^*=\pi^* P$ with the normalization condition $\pi^*_1+\pi^*_2+\pi^*_3=1$. The system of equations $\pi^*=\pi^* P$ is redundant: the third equation is a straightforward linear combination of the first two ones. Taking into account the normalization condition $\pi^*_1+\pi^*_2+\pi^*_3=1$ and discarding the third redundant equation in $\pi^*(P-I_3)=0$ yields a full rank system of equations. Complete the code below to solve this system and obtain the steady state distribution. We will use the solve function from the scipy.linalg library.
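As a quick sanity check (a one-line sketch), the computed vector should indeed be invariant under $P$ and sum to one:
# pi* should satisfy pi* = pi* P and be a probability vector.
print(allclose(pi_ @ P, pi_), isclose(pi_.sum(), 1.0))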
End of explanation
print("---------------------------\n"
+"RESULTS SUPPLIED FOR LAB 3:\n"
+"---------------------------")
results = ("V"+str(k) for k in range(1,4))
for x in results:
try:
print(x+" = {0:.2f}".format(eval(x)))
except:
print(x+": variable is undefined")
Explanation: Your answers for the exercise
End of explanation |
8,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
"Detection of anomalous tweets using supervising outlier techniques"
Importing the Dependencies and Loading the Data
Step1: Data Preparation
Data prepration with the available data. I made the combination such that the classes are highly imbalanced making it apt for anomaly detection problem
Step2: Data pre-processing - text analytics to create a corpus
1) Converting text to matrix of token counts [Bag of words]
Stemming - lowercasing, removing stop-words, removing punctuation and reducing words to its lexical roots
2) Stemmer, tokenizer(removes non-letters) are created by ourselves.These are passed as parameters to CountVectorizer of sklearn.
3) Extracting important words and using them as input to the classifier
Feature Engineering
Step3: The below implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.
Note
Step4: Fit_Transform
Step5: Train-Test Split
Step6: A text polarity depends on what words appear in that text, discarding any grammar or word order but keeping multiplicity.
1) All the above text processing for features ended up with the same entries in our dataset
2) Instead of having them defined by a whole text, they are now defined by a series of counts of the most frequent words in our whole corpus.
3) These vectors are used as features to train a classifier.
Training the model | Python Code:
import nltk
import pandas as pd
import numpy as np
data = pd.read_csv("original_train_data.csv", header = None,delimiter = "\t", quoting=3,names = ["Polarity","TextFeed"])
#Data Visualization
data.head()
Explanation: "Detection of anomalous tweets using supervising outlier techniques"
Importing the Dependencies and Loading the Data
End of explanation
data_positive = data.loc[data["Polarity"]==1]
data_negative = data.loc[data["Polarity"]==0]
anomaly_data = pd.concat([data_negative.sample(n=10),data_positive,data_negative.sample(n=10)])
anomaly_data.Polarity.value_counts()
#Number of words per sentence
print ("No of words for sentence in train data",np.mean([len(s.split(" ")) for s in anomaly_data.TextFeed]))
Explanation: Data Preparation
Data preparation with the available data. I made the combination such that the classes are highly imbalanced, making it apt for an anomaly detection problem.
End of explanation
import re
from sklearn.feature_extraction.text import CountVectorizer
nltk.download('punkt')
from nltk.stem.porter import PorterStemmer
''' this code is taken from
http://www.cs.duke.edu/courses/spring14/compsci290/assignments/lab02.html
'''
# a stemmer widely used
stemmer = PorterStemmer()
def stem_tokens(tokens, stemmer):
stemmed = []
for item in tokens:
stemmed.append(stemmer.stem(item))
return stemmed
def tokenize(text):
# remove non letters
text = re.sub("[^a-zA-Z]", " ", text)
# tokenize
tokens = nltk.word_tokenize(text)
# stem
stems = stem_tokens(tokens, stemmer)
return stems
Explanation: Data pre-processing - text analytics to create a corpus
1) Converting text to matrix of token counts [Bag of words]
Stemming - lowercasing, removing stop-words, removing punctuation and reducing words to its lexical roots
2) The stemmer and tokenizer (which removes non-letters) are created by ourselves. These are passed as parameters to CountVectorizer from sklearn; a quick usage check of the tokenizer is sketched after this list.
3) Extracting important words and using them as input to the classifier
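A quick usage check of the custom tokenizer defined above (a small sketch; the example sentence is made up):
# Punctuation is removed and each word is reduced to its stem.
print(tokenize("The kids were running and laughing loudly!!!"))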
Feature Engineering
End of explanation
#max_features selected as 90 - can be changed for a better trade-off
vector_data = CountVectorizer(
analyzer = 'word',
tokenizer = tokenize,
lowercase = True,
stop_words = 'english',
max_features = 90
)
Explanation: The below implementation produces a sparse representation of the counts using scipy.sparse.csr_matrix.
Note: I am not using frequencies (TfidfTransformer, apt for longer documents) because the texts are short and can be handled with occurrence counts (CountVectorizer).
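For comparison, a minimal sketch of the TF-IDF alternative mentioned in the note (it is not used in the rest of this notebook):
from sklearn.feature_extraction.text import TfidfVectorizer

# Same pipeline as above, but with TF-IDF weighting instead of raw counts.
tfidf_vector_data = TfidfVectorizer(
    analyzer = 'word',
    tokenizer = tokenize,
    lowercase = True,
    stop_words = 'english',
    max_features = 90
)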
End of explanation
#using only the "Text Feed" column to build the features
features = vector_data.fit_transform(anomaly_data.TextFeed.tolist())
#converting the data into the array
features = features.toarray()
features.shape
#printing the words in the vocabulary
vocab = vector_data.get_feature_names()
print (vocab)
# Sum up the counts of each vocabulary word
dist = np.sum(features, axis=0)
# For each, print the vocabulary word and the number of times it
# appears in the data set
a = zip(vocab,dist)
print (list(a))
Explanation: Fit_Transform:
1) Fits the model and learns the vocabulary
2) transforms the data into feature vectors
End of explanation
from sklearn.model_selection import train_test_split
#80:20 ratio
X_train, X_test, y_train, y_test = train_test_split(
features,
anomaly_data.Polarity,
train_size=0.80,
random_state=1234)
print ("Training data - positive and negative values")
print (pd.value_counts(pd.Series(y_train)))
print ("Testing data - positive and negative values")
print (pd.value_counts(pd.Series(y_test)))
Explanation: Train-Test Split
End of explanation
from sklearn.svm import SVC
clf = SVC()
clf.fit(X=X_train,y=y_train)
wclf = SVC(class_weight={0: 20})
wclf.fit(X=X_train,y=y_train)
y_pred = clf.predict(X_test)
y_pred_weighted = wclf.predict(X_test)
from sklearn.metrics import classification_report
print ("Basic SVM metrics")
print(classification_report(y_test, y_pred))
print ("Weighted SVM metrics")
print(classification_report(y_test, y_pred_weighted))
from sklearn.metrics import confusion_matrix
print ("Basic SVM Confusion Matrix")
print (confusion_matrix(y_test, y_pred))
print ("Weighted SVM Confusion Matrix")
print (confusion_matrix(y_test, y_pred_weighted))
tn, fp, fn, tp = confusion_matrix(y_test, y_pred_weighted).ravel()
(tn, fp, fn, tp)
Explanation: A text polarity depends on what words appear in that text, discarding any grammar or word order but keeping multiplicity.
1) All the above text processing for features ended up with the same entries in our dataset
2) Instead of having them defined by a whole text, they are now defined by a series of counts of the most frequent words in our whole corpus.
3) These vectors are used as features to train a classifier.
Training the model
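As a small usage sketch (the two example tweets are made up), the fitted CountVectorizer can transform new raw text with the vocabulary learned above, and the weighted SVM can then score it:
new_tweets = ["I really love this phone, best purchase ever",
              "worst service I have ever had, totally broken"]
new_features = vector_data.transform(new_tweets).toarray()
print(wclf.predict(new_features))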
End of explanation |
8,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Unity ML Agents
Proximal Policy Optimization (PPO)
Contains an implementation of PPO as described here.
Step1: Hyperparameters
Step2: Load the environment
Step3: Train the Agent(s)
Step4: Export the trained Tensorflow graph
Once the model has been trained and saved, we can export it as a .bytes file which Unity can embed. | Python Code:
import numpy as np
import os
import tensorflow as tf
from ppo.history import *
from ppo.models import *
from ppo.trainer import Trainer
from unityagents import *
Explanation: Unity ML Agents
Proximal Policy Optimization (PPO)
Contains an implementation of PPO as described here.
End of explanation
### General parameters
max_steps = 5e5 # Set maximum number of steps to run environment.
run_path = "ppo" # The sub-directory name for model and summary statistics
load_model = False # Whether to load a saved model.
train_model = True # Whether to train the model.
summary_freq = 10000 # Frequency at which to save training statistics.
save_freq = 50000 # Frequency at which to save model.
env_name = "environment" # Name of the training environment file.
### Algorithm-specific parameters for tuning
gamma = 0.99 # Reward discount rate.
lambd = 0.95 # Lambda parameter for GAE.
time_horizon = 2048 # How many steps to collect per agent before adding to buffer.
beta = 1e-3 # Strength of entropy regularization
num_epoch = 5 # Number of gradient descent steps per batch of experiences.
epsilon = 0.2 # Acceptable threshold around ratio of old and new policy probabilities.
buffer_size = 2048 # How large the experience buffer should be before gradient descent.
learning_rate = 3e-4 # Model learning rate.
hidden_units = 64 # Number of units in hidden layer.
batch_size = 64 # How many experiences per gradient descent update step.
Explanation: Hyperparameters
End of explanation
env = UnityEnvironment(file_name=env_name)
print(str(env))
brain_name = env.brain_names[0]
Explanation: Load the environment
End of explanation
tf.reset_default_graph()
# Create the Tensorflow model graph
ppo_model = create_agent_model(env, lr=learning_rate,
h_size=hidden_units, epsilon=epsilon,
beta=beta, max_step=max_steps)
is_continuous = (env.brains[brain_name].action_space_type == "continuous")
use_observations = (env.brains[brain_name].number_observations > 0)
use_states = (env.brains[brain_name].state_space_size > 0)
model_path = './models/{}'.format(run_path)
summary_path = './summaries/{}'.format(run_path)
if not os.path.exists(model_path):
os.makedirs(model_path)
if not os.path.exists(summary_path):
os.makedirs(summary_path)
init = tf.global_variables_initializer()
saver = tf.train.Saver()
with tf.Session() as sess:
# Instantiate model parameters
if load_model:
print('Loading Model...')
ckpt = tf.train.get_checkpoint_state(model_path)
saver.restore(sess, ckpt.model_checkpoint_path)
else:
sess.run(init)
steps = sess.run(ppo_model.global_step)
summary_writer = tf.summary.FileWriter(summary_path)
info = env.reset(train_mode=train_model)[brain_name]
trainer = Trainer(ppo_model, sess, info, is_continuous, use_observations, use_states)
while steps <= max_steps:
if env.global_done:
info = env.reset(train_mode=train_model)[brain_name]
# Decide and take an action
new_info = trainer.take_action(info, env, brain_name)
info = new_info
trainer.process_experiences(info, time_horizon, gamma, lambd)
if len(trainer.training_buffer['actions']) > buffer_size and train_model:
# Perform gradient descent with experience buffer
trainer.update_model(batch_size, num_epoch)
if steps % summary_freq == 0 and steps != 0 and train_model:
# Write training statistics to tensorboard.
trainer.write_summary(summary_writer, steps)
if steps % save_freq == 0 and steps != 0 and train_model:
# Save Tensorflow model
save_model(sess, model_path=model_path, steps=steps, saver=saver)
steps += 1
sess.run(ppo_model.increment_step)
# Final save Tensorflow model
if steps != 0 and train_model:
save_model(sess, model_path=model_path, steps=steps, saver=saver)
env.close()
export_graph(model_path, env_name)
Explanation: Train the Agent(s)
End of explanation
export_graph(model_path, env_name)
Explanation: Export the trained Tensorflow graph
Once the model has been trained and saved, we can export it as a .bytes file which Unity can embed.
End of explanation |
8,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Split oxygen-vacancy defects in Co
We want to work out the symmetry analysis for our split oxygen-vacancy (V-O-V) defects $\alpha$-Co (HCP) and $\beta$-Co (FCC).
The split defects can be represented simply as crowdion interstitial sites, for the purposes of symmetry analysis. We're interested in extracting the tensor expansions around those sites, and (eventually) computing the damping coefficients from the DFT data.
Step1: We need to analyze the geometry of our representative site; we get the position, then find the zero entry in the position vector, and work from there.
Step2: Internal friction resonance. We do loading at a frequency of 1 Hz.
Step3: Temperature where peak maximum is found? | Python Code:
import sys
sys.path.extend(['../'])
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
%matplotlib inline
import onsager.crystal as crystal
import onsager.OnsagerCalc as onsager
from scipy.constants import physical_constants
kB = physical_constants['Boltzmann constant in eV/K'][0]
betaCo = crystal.Crystal.FCC(1.0, 'Co')
print(betaCo)
betaCo.Wyckoffpos(np.array([0.5,0.,0.]))
betaCoO = betaCo.addbasis(betaCo.Wyckoffpos(np.array([0.5,0.,0.])), ['O'])
print(betaCoO)
Ojumpnetwork = betaCoO.jumpnetwork(1,0.5)
Odiffuser = onsager.Interstitial(betaCoO, 1, betaCoO.sitelist(1), Ojumpnetwork)
Explanation: Split oxygen-vacancy defects in Co
We want to work out the symmetry analysis for our split oxygen-vacancy (V-O-V) defects in $\alpha$-Co (HCP) and $\beta$-Co (FCC).
The split defects can be represented simply as crowdion interstitial sites, for the purposes of symmetry analysis. We're interested in extracting the tensor expansions around those sites, and (eventually) computing the damping coefficients from the DFT data.
End of explanation
Ppara, Pperp, Pshear = -2.70, -4.30, 0.13
reppos = betaCoO.pos2cart(np.zeros(3), (1, Odiffuser.sitelist[0][0]))
perpindex = [n for n in range(3) if np.isclose(reppos[n], 0)][0]
paraindex = [n for n in range(3) if n != perpindex]
shearsign = 1 if reppos[paraindex[0]]*reppos[paraindex[1]] > 0 else -1
Pdipole = np.diag([Pperp if n == perpindex else Ppara for n in range(3)])
Pdipole[paraindex[0], paraindex[1]] = shearsign*Pshear
Pdipole[paraindex[1], paraindex[0]] = shearsign*Pshear
Pdipole
nu0, Emig = 1e13, 0.91
nsites, njumps = len(Odiffuser.sitelist), len(Odiffuser.jumpnetwork)
betaCoOthermodict = {'pre': np.ones(nsites), 'ene': np.zeros(nsites),
'preT': nu0*np.ones(nsites), 'eneT': Emig*np.ones(nsites)}
beta = 1./(kB*300) # 300K
Llamb = Odiffuser.losstensors(betaCoOthermodict['pre'], beta*betaCoOthermodict['ene'],
[Pdipole],
betaCoOthermodict['preT'], beta*betaCoOthermodict['eneT'])
for (lamb, Ltens) in Llamb:
print(lamb, crystal.FourthRankIsotropic(Ltens))
sh1 = crystal.FourthRankIsotropic(Llamb[0][1])[1]
sh2 = crystal.FourthRankIsotropic(Llamb[1][1])[1]
print(sh2/sh1)
Explanation: We need to analyze the geometry of our representative site; we get the position, then find the zero entry in the position vector, and work from there.
End of explanation
nuIF = 1.
Trange = np.linspace(250,400,151)
shlist = []
for T in Trange:
beta = 1./(kB*T)
Llamb = Odiffuser.losstensors(betaCoOthermodict['pre'], beta*betaCoOthermodict['ene'],
[Pdipole],
betaCoOthermodict['preT'], beta*betaCoOthermodict['eneT'])
f1,L1,f2,L2 = Llamb[0][0], Llamb[0][1], Llamb[1][0], Llamb[1][1]
sh = crystal.FourthRankIsotropic(L1*nuIF*f1/(nuIF**2+f1**2) +
L2*nuIF*f2/(nuIF**2+f2**2))[1]
shlist.append(sh*kB*T)
shear = np.array(shlist)
fig, ax1 = plt.subplots()
ax1.plot(Trange, shear/np.max(shear), 'k')
ax1.set_ylabel('loss $Q$ [unitless]', fontsize='x-large')
ax1.set_xlabel('$T$ [K]', fontsize='x-large')
plt.show()
# plt.savefig('FCC-Co-O-loss.pdf', transparent=True, format='pdf')
Explanation: Internal friction resonance. We do loading at a frequency of 1 Hz.
End of explanation
Trange[np.argmax(shear)]
Explanation: Temperature where peak maximum is found?
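As a rough cross-check (a back-of-the-envelope estimate only: it ignores the mode-specific prefactors of the relaxation rates returned by losstensors), a single Debye peak at the 1 Hz loading frequency sits roughly where the Arrhenius rate matches it:
# Peak where nu0*exp(-Emig/(kB*T)) ~ nuIF, ignoring O(1-10) geometric prefactors.
T_est = Emig / (kB * np.log(nu0 / nuIF))
print("Arrhenius estimate of the peak temperature: {:.0f} K".format(T_est))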
End of explanation |
8,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Are categorical variables getting lost in your random forests?
Step1: TL;DR Decision tree models can handle categorical variables without one-hot encoding them. However, popular implementations of decision trees (and random forests) differ as to whether they honor this fact. We show that one-hot encoding can seriously degrade tree-model performance. Our primary comparison is between H2O (which honors categorical variables) and scikit-learn (which requires them to be one-hot encoded).
Unnecessary one-hot encoding
Many real-world datasets include a mix of continuous and categorical variables. The defining property of the latter is that they do not permit a total ordering. A major advantage of decision tree models and their ensemble counterparts, random forests, is that they are able to operate on both continuous and categorical variables directly. In contrast, most other popular models (e.g., generalized linear models, neural networks) must instead transform categorical variables into some numerical analog, usually by one-hot encoding them to create a new dummy variable for each level of the original variable
Step2: Artificial dataset
For our artificial data experiments, we define a categorical variable $c$ which takes values from $C^+$ or $C^-$ and a normally-distributed continuous variable $z \sim N(10, 3^2)$, and specify that
$$
y =
\begin{cases}
1, & \text{if } c\in C^+ \text{ or } z>10 \
0, & \text{otherwise }
\end{cases}
$$
To make this more challenging, we'll create some further continuous variables $x_i$ that have varying correlations with $y$.
Let's generate a dataset with $|C^+| = |C^-| = 100$, $100$ additional weakly correlated features, and $10,000$ samples
Step3: This produces categorical and one-hot-encoded versions of the dataset. Here's a look at the categorical version
Step4: You'll notice that $y=1$ whenever either $c$ starts with A or $z>10$. A simple decision tree should be able to perfectly predict the outcome variable by splitting first on $c$ and then on $z$, as illustrated on the left in the following diagram
Step5: On the left, the $x_{i}$ variables have no contribution to make. Their noisy quality is not disruptive. On the right, however, $x_{i}$ was chosen too early. Individually, these features can be highly informative, but they do not guarantee purity, so choosing them too early can result in branches that remain impure.
Here's the top of the one-hot-encoded version; you can see all the new variables derived from $c$ tacked on as the final set of sparse columns.
Step6: Could a dataset like this arise is practice? Well, perhaps knowing that somebody comes from some subset of states is enough to give good confidence of a positive outcome, but in the remaining states, we would need to consider the value of some second variable. This is the sort of basic relationship encoded in the artificial dataset.
Artificial data bake-off
We'll first focus our discussion on single decision trees to keep things simple, and then extend the results to random forests. We conduct two kinds of analysis
Step7: The performance metric is area under the ROC curve (AUC) which balances the true positive and false positive rates and has a maximum value of 1.0 (perfect classification). A score of 0.73 is respectable but far from stellar. Now let's add $c$ back
Step8: The extra feature led to a modest improvement in AUC, although nowhere near the perfect performance that we were expecting. In addition, none of the $c$-based variables are among the top features as ranked by Gini feature importance
Step9: In fact, all of the $x$-based ones have higher importance than all of the $c$-based ones.
H2O's decision trees
H2O doesn't have a basic decision tree, but rather only random forests. To facilitate a direct comparison with scikit-learn, we wrote a little wrapper class called H2ODecisionTree, which specifies a single-tree forest using all the available features, which is equivalent to a single decision tree. As before, we first evaluate the model without $c$
Step10: Without $c$, the performance is very similar to scikit-learn's. But when H2O has access to $c$, it achieves an almost-perfect AUC
Step11: In stark contrast to the scikit-learn models, the variable $c$ has the largest feature importance, just as our data-generation procedure leads us to expect
Step12: Finally, let's see what happens if we use H2O with the one-hot encoded data
Step13: With one-hot encoding, H2O's performance is about the same as that of scikit-learn.
What's the difference?
To understand what's causing the difference, we need to study the logic of tree-building algorithms.
The tree building algorithm
At the heart of the tree-building algorithm is a subalgorithm that splits the samples into two bins by selecting a variable and a value. This splitting algorithm considers each of the features in turn, and for each feature selects the value of that feature that minimizes the impurity of the bins. We won't get into the details of how this is calculated (and there's more than one way), except to say that you can consider a bin that contains mostly positive or mostly negative samples more pure than one that contains a mixture. There's a nice visualization of the algorithm in the Visual Introduction to Machine Learning.
In our case, we'd hope that, when the algorithm considers $z$, it would choose to split at $10$. That is, any example whose value of $z$ is less than $10$ goes to into one bin, and any whose value is greater than $10$ goes in the other. It should, in turn, further subdivide the samples assigned to the 'less-than' bin, since we know that some of these are in fact positive.
Binary variables are automatically disadvantaged here, since there is only one way to split the samples
Step14: Let's also test a scikit-learn LogisticRegressionCV classifier, to compare a linear classifier with the tree-based ones | Python Code:
__author__ = 'Nick Dingwall, Chris Potts'
Explanation: Are categorical variables getting lost in your random forests?
End of explanation
from tree_categorical_variables import *
Explanation: TL;DR Decision tree models can handle categorical variables without one-hot encoding them. However, popular implementations of decision trees (and random forests) differ as to whether they honor this fact. We show that one-hot encoding can seriously degrade tree-model performance. Our primary comparison is between H2O (which honors categorical variables) and scikit-learn (which requires them to be one-hot encoded).
Unnecessary one-hot encoding
Many real-world datasets include a mix of continuous and categorical variables. The defining property of the latter is that they do not permit a total ordering. A major advantage of decision tree models and their ensemble counterparts, random forests, is that they are able to operate on both continuous and categorical variables directly. In contrast, most other popular models (e.g., generalized linear models, neural networks) must instead transform categorical variables into some numerical analog, usually by one-hot encoding them to create a new dummy variable for each level of the original variable:
$$
\begin{array}{c}
\hline
\textbf{Specialty} \\
\hline
\textrm{Cardiology} \\
\textrm{Neurology} \\
\textrm{Neurology} \\
\textrm{Cardiology} \\
\textrm{Gerontology} \\
\hline
\end{array}
\Longrightarrow
\begin{array}{c c c}
\hline
\textbf{Specialty:Cardiology} & \textbf{Specialty:Neurology} & \textbf{Specialty:Gerontology} \\
\hline
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1 \\
\hline
\end{array}
$$
One-hot encoding can lead to a huge increase in the dimensionality of the feature representations. For example, one-hot encoding U.S. states adds 49 dimensions to the intuitive feature representation. In addition, one-hot encoding erases important structure in the underlying representation by splitting a single feature into many separate ones. (The naming convention used above, and by many software packages, can be misleading: the three features on the right are completely separate.)
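To make the dimensionality point concrete, here is a small, hypothetical sketch (not from the original post) that one-hot encodes a specialty column like the one above with pandas:
import pandas as pd

df = pd.DataFrame({'specialty': ['Cardiology', 'Neurology', 'Neurology',
                                 'Cardiology', 'Gerontology']})
onehot = pd.get_dummies(df, columns=['specialty'])
print(onehot.columns.tolist())  # three separate 0/1 columns from one feature
print(onehot.shape)             # (5, 3)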
But one-hot encoding also presents two problems that are more particular to tree-based models:
The resulting sparsity virtually ensures that continuous variables are assigned higher feature importance.
A single level of a categorical variable must meet a very high bar in order to be selected for splitting early in the tree building. This can degrade predictive performance.
This post substantiates both of these points with a comparison between scikit-learn, which presupposes one-hot encoding, and H2O, which does not. We'll do it by constructing an artificial dataset with a known relationship between the features and the target, and explain how these problems arise.
For the most part, this post reports just our technique and central findings, but all of the code is available in tree_categorical_variables.py.
End of explanation
data_categorical, data_onehot = generate_dataset(
num_x=100, n_samples=10000, n_levels=200)
Explanation: Artificial dataset
For our artificial data experiments, we define a categorical variable $c$ which takes values from $C^+$ or $C^-$ and a normally-distributed continuous variable $z \sim N(10, 3^2)$, and specify that
$$
y =
\begin{cases}
1, & \text{if } c\in C^+ \text{ or } z>10 \\
0, & \text{otherwise }
\end{cases}
$$
To make this more challenging, we'll create some further continuous variables $x_i$ that have varying correlations with $y$.
Let's generate a dataset with $|C^+| = |C^-| = 100$, $100$ additional weakly correlated features, and $10,000$ samples:
End of explanation
data_categorical.head(10).round(3)
Explanation: This produces categorical and one-hot-encoded versions of the dataset. Here's a look at the categorical version:
End of explanation
from IPython.display import SVG, display
display(SVG("fig/Decision tree visualization.svg"))
Explanation: You'll notice that $y=1$ whenever either $c$ starts with A or $z>10$. A simple decision tree should be able to perfectly predict the outcome variable by splitting first on $c$ and then on $z$, as illustrated on the left in the following diagram:
End of explanation
data_onehot.head(10).round(3)
Explanation: On the left, the $x_{i}$ variables have no contribution to make. Their noisy quality is not disruptive. On the right, however, $x_{i}$ was chosen too early. Individually, these features can be highly informative, but they do not guarantee purity, so choosing them too early can result in branches that remain impure.
Here's the top of the one-hot-encoded version; you can see all the new variables derived from $c$ tacked on as the final set of sparse columns.
End of explanation
results_no_c = evaluate_sklearn_model(
data_onehot,
feature_names=get_feature_names(data_onehot, include_c=False),
target_col='y',
model=DecisionTreeClassifier())
print_auc_mean_std(results_no_c)
Explanation: Could a dataset like this arise is practice? Well, perhaps knowing that somebody comes from some subset of states is enough to give good confidence of a positive outcome, but in the remaining states, we would need to consider the value of some second variable. This is the sort of basic relationship encoded in the artificial dataset.
Artificial data bake-off
We'll first focus our discussion on single decision trees to keep things simple, and then extend the results to random forests. We conduct two kinds of analysis:
A baseline model that doesn't include the categorical variable $c$.
A model that includes $c$.
This allows us to intuitively quantify the value of $c$ for the prediction problem. For each experiment, we'll train and evaluate a tree 10 times and average the results.
Scikit-learn's DecisionTreeClassifier
Scikit-learn can process only the one-hot-encoded version. Here's the baseline evaluation without $c$:
End of explanation
results_with_c = evaluate_sklearn_model(
data_onehot,
feature_names=get_feature_names(data_onehot, include_c=True),
target_col='y',
model=DecisionTreeClassifier())
print_auc_mean_std(results_with_c)
Explanation: The performance metric is area under the ROC curve (AUC) which balances the true positive and false positive rates and has a maximum value of 1.0 (perfect classification). A score of 0.73 is respectable but far from stellar. Now let's add $c$ back:
End of explanation
print_sorted_mean_importances(results_with_c)
Explanation: The extra feature led to a modest improvement in AUC, although nowhere near the perfect performance that we were expecting. In addition, none of the $c$-based variables are among the top features as ranked by Gini feature importance:
End of explanation
h2o_results_no_c = evaluate_h2o_model(
data_categorical,
feature_names=get_feature_names(data_categorical, include_c=False),
target_col='y',
model=H2ODecisionTree())
print_auc_mean_std(h2o_results_no_c)
Explanation: In fact, all of the $x$-based ones have higher importance than all of the $c$-based ones.
H2O's decision trees
H2O doesn't have a basic decision tree, but rather only random forests. To facilitate a direct comparison with scikit-learn, we wrote a little wrapper class called H2ODecisionTree, which specifies a single-tree forest using all the available features, which is equivalent to a single decision tree. As before, we first evaluate the model without $c$:
End of explanation
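The real wrapper lives in tree_categorical_variables.py; as a rough sketch (our assumption, not the authors' code), a single-tree "forest" in H2O can be specified by turning off the usual random-forest randomness:
from h2o.estimators.random_forest import H2ORandomForestEstimator

def single_tree(n_features):
    # One tree, every row, and all features considered at each split,
    # which behaves like a plain decision tree.
    return H2ORandomForestEstimator(ntrees=1, mtries=n_features, sample_rate=1.0)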
h2o_results_with_c = evaluate_h2o_model(
data_categorical,
feature_names=get_feature_names(data_categorical, include_c=True),
target_col='y',
model=H2ODecisionTree())
print_auc_mean_std(h2o_results_with_c)
Explanation: Without $c$, the performance is very similar to scikit-learn's. But when H2O has access to $c$, it achieves an almost-perfect AUC:
End of explanation
print_sorted_mean_importances(h2o_results_with_c)
Explanation: In stark contrast to the scikit-learn models, the variable $c$ has the largest feature importance, just as our data-generation procedure leads us to expect:
End of explanation
h2o_results_with_c_onehot = evaluate_h2o_model(
data_onehot,
feature_names=get_feature_names(data_onehot, include_c=True),
target_col='y',
model=H2ODecisionTree())
print_auc_mean_std(h2o_results_with_c_onehot)
Explanation: Finally, let's see what happens if we use H2O with the one-hot encoded data:
End of explanation
sklearn_ensemble_results = evaluate_sklearn_model(
data_onehot,
feature_names=get_feature_names(data_onehot, include_c=True),
target_col='y',
model=RandomForestClassifier(n_estimators=100))
print_auc_mean_std(sklearn_ensemble_results)
h2o_ensemble_results = evaluate_h2o_model(
data_categorical,
feature_names=get_feature_names(data_categorical, include_c=True),
target_col='y',
model=H2ORandomForestEstimator(ntrees=100))
print_auc_mean_std(h2o_ensemble_results)
Explanation: With one-hot encoding, H2O's performance is about the same as that of scikit-learn.
What's the difference?
To understand what's causing the difference, we need to study the logic of tree-building algorithms.
The tree building algorithm
At the heart of the tree-building algorithm is a subalgorithm that splits the samples into two bins by selecting a variable and a value. This splitting algorithm considers each of the features in turn, and for each feature selects the value of that feature that minimizes the impurity of the bins. We won't get into the details of how this is calculated (and there's more than one way), except to say that you can consider a bin that contains mostly positive or mostly negative samples more pure than one that contains a mixture. There's a nice visualization of the algorithm in the Visual Introduction to Machine Learning.
In our case, we'd hope that, when the algorithm considers $z$, it would choose to split at $10$. That is, any example whose value of $z$ is less than $10$ goes to into one bin, and any whose value is greater than $10$ goes in the other. It should, in turn, further subdivide the samples assigned to the 'less-than' bin, since we know that some of these are in fact positive.
Binary variables are automatically disadvantaged here, since there is only one way to split the samples: 0s one way, and 1s the other. Low-cardinality categorical variables suffer from the same problem. Another way to look at it: a continuous variable induces an ordering of the samples, and the algorithm can split that ordered list anywhere. A binary variable can only be split in one place, and a categorical variable with $q$ levels can be split in $\frac{2^{q}}{2} - 1$ ways.
An important sidenote: we don't actually have to search all the partitions because there are efficient algorithms for both binary classification and regression that are guaranteed to find the optimal split in linear time — see page 310 of the Elements of Statistical Learning. No such guarantee exists for multinomial classification, but there is a heuristic.
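As an illustration only (not the scikit-learn or H2O internals), a brute-force version of that split search for one continuous feature could look like this:
import numpy as np

def gini(labels):
    # Gini impurity of a set of 0/1 labels.
    if len(labels) == 0:
        return 0.0
    p = labels.mean()
    return 2 * p * (1 - p)

def best_split(feature, labels):
    # Try a threshold between each pair of consecutive sorted values and keep
    # the one that minimizes the weighted impurity of the two bins.
    order = np.argsort(feature)
    feature, labels = feature[order], labels[order]
    n = len(labels)
    best_threshold, best_score = None, np.inf
    for i in range(1, n):
        if feature[i] == feature[i - 1]:
            continue
        score = (i * gini(labels[:i]) + (n - i) * gini(labels[i:])) / n
        if score < best_score:
            best_threshold, best_score = (feature[i] + feature[i - 1]) / 2.0, score
    return best_threshold, best_score

# With z ~ N(10, 3^2) and y = (z > 10), the selected threshold lands near 10.
rng = np.random.RandomState(0)
z = rng.normal(10, 3, size=1000)
print(best_split(z, (z > 10).astype(int)))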
Why one-hot encoding is bad bad bad for trees
Predictive Performance
By one-hot encoding a categorical variable, we create many binary variables, and from the splitting algorithm's point of view, they're all independent. This means a categorical variable is already disadvantaged over continuous variables. But there's a further problem: these binary variables are sparse. Imagine our categorical variable has 100 levels, each appearing about as often as the others. The best the algorithm can expect to do by splitting on one of its one-hot encoded dummies is to reduce impurity by $\approx 1\%$, since each of the dummies will be 'hot' for around $1\%$ of the samples.
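A quick back-of-the-envelope check of that $\approx 1\%$ figure (our own arithmetic, assuming a balanced parent node): the parent's Gini impurity is $2 \times 0.5 \times 0.5 = 0.5$. If one dummy is hot for $1\%$ of the samples and, in the best case, all of those samples share a class, the split leaves a weighted impurity of roughly $0.99 \times 0.49995 \approx 0.495$, a reduction of about $0.005$, i.e. about $1\%$ of the impurity we started with.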
The result of all this is that, if we start by one-hot encoding a high-cardinality variable, the tree building algorithm is unlikely to select one of its dummies as the splitting variable near the root of the tree, instead choosing continuous variables. In datasets like the one we created here, that leads to inferior performance.
In contrast, by considering all of the levels of $c$ at once, H2O's algorithm is able to select $c$ at the very top of the tree.
Interpretability
The importance score assigned to each feature is a measure of how often that feature was selected, and how much of an effect it had in reducing impurity when it was selected. (We don't consider permutation feature importance here; this might help combat the preference for continuous variables over binary ones, but it will not help with the induced sparsity.)
H2O assigns about $70\%$ of its importance to $c$, and the remaining $30\%$ to $z$. Scikit-learn, in contrast, assigns less than $10\%$ in total to the one-hot encodings of $c$, $30\%$ to $z$ and almost $60\%$ collectively to $x_i$, features that are entirely unnecessarily to perfectly classify the data!
Fewer levels, fewer problems
As we discussed, this problem is especially profound for high-cardinality categorical variables. If the categorical variables have few levels, then the induced sparsity is less severe and the one-hot encoded versions have a chance of competing with the continuous ones.
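One quick way to probe this with the same generator (signature taken from earlier in this post) is to rebuild the dataset with only a handful of levels and rerun the comparison; with few levels each dummy is hot for a large share of the samples, so one-hot encoding costs much less:
data_categorical_small, data_onehot_small = generate_dataset(
    num_x=100, n_samples=10000, n_levels=4)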
Ensembles and other models
Random forests are simply ensembles of trees where each individual tree is built using a subset of both features and samples. So we'd expect a similar reduction in performance in the scikit-learn ensembles compared to the H2O ensembles. To test this, we train random forest ensembles with 100 trees using each implementation. The difference is once again dramatic:
End of explanation
sklearn_regression_results = evaluate_sklearn_model(
data_onehot,
feature_names=get_feature_names(data_onehot, include_c=True),
target_col='y',
model=LogisticRegressionCV())
print_auc_mean_std(sklearn_regression_results)
Explanation: Let's also test a scikit-learn LogisticRegressionCV classifier, to compare a linear classifier with the tree-based ones:
End of explanation |
8,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Check two SMIRNOFFs have the same typing
This notebook was created to check that a change to a SMIRNOFF force field doesn't change the way it types
molecules. This concern came from switching the decorator 'R' to 'x' so that the SMIRKS in the SMIRNOFF format would be compatible with OEtoolkits and RDKit. See smirnoff99Frosst issue#54
This notebook will
Step3: Relevant methods
Step4: 1. Convert specified SMIRKS frcmod file to SMIRNOFF FFXML
Step5: 2. Load smirnoff99Frosst from current release
This is currently linking to openforcefield/data/test_forcefields/smirnoff99Frosst.offxml which is the current release
Step6: 3. Generate or take in a set of molecules in OpenEye OEMol format
Here we will generate a list of molecules. We will label all molecules given. Here we don't care rather the assigned parameters are generic or not just that the typing doesn't change between the two force fields.
Currently this section expects a relative or absolute path to a single file. The utils.read_molecules function will access /openforcefield/data/molecules/[your path] if there is no file at the relative path.
Step7: 4. Identify any molecules not assigned the same parameters by both force fields
Step8: 5. Visualize mismatches by force type
Chose a force type and then the molecules for each will be displayed
Step9: Extra check for R
Since this notebook was made explicitly for the change from 'R' to 'x' I want to make sure that all of the 'R's were replaced | Python Code:
# Imports
from __future__ import print_function
from convert_frcmod import *
import openeye.oechem as oechem
import openeye.oeiupac as oeiupac
import openeye.oeomega as oeomega
import openeye.oedepict as oedepict
from IPython.display import display
from openff.toolkit.typing.engines.smirnoff.forcefield import *
from openff.toolkit.typing.engines.smirnoff.forcefield_utils import get_molecule_parameterIDs
from openff.toolkit.utils import *
% matplotlib inline
import matplotlib
import numpy as np
import pylab as pl
import matplotlib.pyplot as plt
import time
import IPython
import pickle
import glob
Explanation: Check two SMIRNOFFs have the same typing
This notebook was created to check that a change to a SMIRNOFF force field doesn't change the way it types
molecules. This concern came from switching the decorator 'R' to 'x' so that the SMIRKS in the SMIRNOFF format would be compatible with OEtoolkits and RDKit. See smirnoff99Frosst issue#54
This notebook will:
1. Convert a specified smirks-frcmod file to SMIRNOFF FFXML (this is the test force field)
2. Load the current release of smirnoff99Frosst (this is the reference force field)
3. Generate (or take in) a set of molecules in OpenEye oemol format. Label these molecules with both force fields.
4. Identify molecules where parameter assignment doesn't agree
5. Visualize molecules by force type
Authors:
* Caitlin C. Bannan (UCI)
* functions copied from parameter_usage.ipynb written by David L. Mobley (UCI)
End of explanation
def depictAtomByIdx(mol_copy, atomIdxList, supH = True, width=900, height=500):
mol = oechem.OEMol(mol_copy)
OEGenerate2DCoordinates(mol)
atomBondSet = oechem.OEAtomBondSet()
for atom in mol.GetAtoms():
if atom.GetIdx() in atomIdxList:
atomBondSet.AddAtom( atom)
for bond in atom.GetBonds():
nbrAtom = bond.GetNbr(atom)
nbrIdx = nbrAtom.GetIdx()
if (nbrIdx in atomIdxList) and nbrIdx>atom.GetIdx():
atomBondSet.AddBond( bond)
from IPython.display import Image
dopt = oedepict.OEPrepareDepictionOptions()
dopt.SetDepictOrientation( oedepict.OEDepictOrientation_Horizontal)
dopt.SetSuppressHydrogens(supH)
oedepict.OEPrepareDepiction(mol, dopt)
opts = oedepict.OE2DMolDisplayOptions(width, height, oedepict.OEScale_AutoScale)
disp = oedepict.OE2DMolDisplay(mol, opts)
aroStyle = oedepict.OEHighlightStyle_Color
aroColor = oechem.OEColor(oechem.OEGrey)
oedepict.OEAddHighlighting(disp, aroColor, aroStyle,
oechem.OEIsAromaticAtom(), oechem.OEIsAromaticBond() )
hstyle = oedepict.OEHighlightStyle_BallAndStick
hcolor = oechem.OEColor(oechem.OELightGreen)
oedepict.OEAddHighlighting(disp, hcolor, hstyle, atomBondSet)
#ofs = oechem.oeosstream()
img = oedepict.OEImage(width, height)
oedepict.OERenderMolecule(img, disp)
#oedepict.OERenderMolecule(ofs, 'png', disp)
#ofs.flush()
#return Image(data = "".join(ofs.str()))
return Image(oedepict.OEWriteImageToString("png",img))
def getMolParamIDToAtomIndex( oemol, ff):
Take an OEMol and a SMIRNOFF force field object and return a dictionary,
keyed by parameter ID, where each entry is a tuple of
( smirks, [[atom1, ... atomN], [atom1, ... atomN]) giving the SMIRKS
corresponding to that parameter ID and a list of the atom groups in that
molecule that parameter is applied to.
Parameters
----------
oemol : OEMol
OpenEye OEMol with the molecule to investigate.
ff : ForceField
SMIRNOFF ForceField object (obtained from an ffxml via ForceField(ffxml)) containing FF of interest.
Returns
-------
param_usage : dictionary
Dictionary, keyed by parameter ID, where each entry is a tuple of
( smirks, [[atom1, ... atomN], [atom1, ... atomN]) giving the SMIRKS
corresponding to that parameter ID and a list of the atom groups in
that molecule that parameter is applied to.
labels = ff.labelMolecules([oemol])
param_usage = {}
for mol_entry in range(len(labels)):
for force in labels[mol_entry].keys():
for (atom_indices, pid, smirks) in labels[mol_entry][force]:
if not pid in param_usage:
param_usage[pid] = (smirks, [atom_indices])
else:
param_usage[pid][1].append( atom_indices )
return param_usage
def labels_to_pidDict(labels):
This method takes a set of SMIRNOFF force field labels and returns
a dictionary with information for each molecule at each force type
in the form:
{ force_type: {mol_index: {(indice tuple): pid, ...}, ... } }
force_type_dict = dict()
for idx, mol_dict in enumerate(labels):
for force_type, label_set in mol_dict.items():
if not force_type in force_type_dict:
force_type_dict[force_type] = dict()
force_type_dict[force_type][idx] = dict()
for (indices, pid, smirks) in label_set:
force_type_dict[force_type][idx][tuple(indices)] = {'pid': pid, 'smirks':smirks}
return force_type_dict
Explanation: Relevant methods
End of explanation
# Input and output info
#infile = 'example.frcmod' # smirnoffish frcmod file to convert
infile = 'smirnoffishFrcmod.parm99Frosst.txt' # smirnoffish frcmod file to convert
ffxmlFile = 'smirnoff99FrosstFrcmod.offxml'
template = 'template.offxml' # Template FFXML file without parameters (but with remainder of contents)
# Convert
# Already converted
convert_frcmod_to_ffxml( infile, template, ffxmlFile)
# Load SMIRNOFF FFXML
test_ff = ForceField(ffxmlFile) # We will use this below to access details of parameters
Explanation: 1. Convert specified SMIRKS frcmod file to SMIRNOFF FFXML
End of explanation
ref_ff = ForceField('test_forcefields/smirnoff99Frosst.offxml')
Explanation: 2. Load smirnoff99Frosst from current release
This is currently linking to openforcefield/data/test_forcefields/smirnoff99Frosst.offxml which is the current release
End of explanation
molecule_file = "DrugBank_tripos.mol2"
molecules = utils.read_molecules(molecule_file)
init = time.time()
test_labels = test_ff.labelMolecules(molecules)
ref_labels = ref_ff.labelMolecules(molecules)
t = (time.time() - init) / 60.0
print("Typed %i molecules with test and reference force fields in %.2f minutes" % (len(molecules), t))
Explanation: 3. Generate or take in a set of molecules in OpenEye OEMol format
Here we will generate a list of molecules. We will label all molecules given. Here we don't care rather the assigned parameters are generic or not just that the typing doesn't change between the two force fields.
Currently this section expects a relative or absolute path to a single file. The utils.read_molecules function will access /openforcefield/data/molecules/[your path] if there is no file at the relative path.
End of explanation
# Make dictionary by molecule and tuple indices
init = time.time()
test_dict = labels_to_pidDict(test_labels)
ref_dict = labels_to_pidDict(ref_labels)
t = (time.time() - init) / 60.0
print("created indices tuple to pid dictionaries in %.2f minutes" % t)
# Make a dictionary to store mismatches:
mismatch = dict()
# This will have embedded dictionaries with this form:
# force_type: {mol_idx:{(index tuple): {test_pid, test_smirks, ref_pid, ref_smirks}}}
mismatch_count = dict()
# loop through force types
for force_type, test_mol_dict in test_dict.items():
if force_type not in mismatch:
mismatch[force_type] = dict()
if force_type not in mismatch_count:
mismatch_count[force_type] = 0
# loop through molecules in each force type
for mol_idx, test_tuple_dict in test_mol_dict.items():
if not mol_idx in mismatch[force_type]:
mismatch[force_type][mol_idx] = dict()
# loop through all atom indice tuples in this molecule
for indice_tuple, test_info in test_tuple_dict.items():
# compare pid assignment
test_pid = test_info['pid']
ref_pid = ref_dict[force_type][mol_idx][indice_tuple]['pid']
# if they don't match store info in mismatch dictionary and update count
if test_pid != ref_pid:
test_smirks = test_info['smirks']
ref_smirks = ref_dict[force_type][mol_idx][indice_tuple]['smirks']
mismatch[force_type][mol_idx][indice_tuple] = {'test_pid': test_pid, 'test_smirks': test_smirks,
'ref_pid': ref_pid, 'ref_smirks': ref_smirks}
mismatch_count[force_type] +=1
print("%-35s %s" % ("Force Type", "Number mismatches"))
print("-"*55)
for force_type, count in mismatch_count.items():
print("%-35s %i" % (force_type, count))
Explanation: 4. Identify any molecules not assigned the same parameters by both force fields
End of explanation
ForceType = "PeriodicTorsionGenerator"
for mol_idx, tuple_dict in mismatch[ForceType].items():
# only visualize molecules with mismatch indices
keys = [k for k in tuple_dict.keys()]
if len(keys) == 0:
continue
mol = OEMol(molecules[mol_idx])
print("Looking at molecule %i" % mol_idx)
for indice_tuple, pid_info in tuple_dict.items():
test_pid = pid_info['test_pid']
test_smirks = pid_info['test_smirks']
ref_pid = pid_info['ref_pid']
ref_smirks = pid_info['ref_smirks']
print("%-10s %-40s %-40s" % ('', 'test force field', 'reference force field'))
print("%-10s %-40s %-40s" % ('pid'))
print("%-10s %-30s %-10s %-30s" % (test_pid, test_smirks, ref_pid, ref_smirks))
display(depictAtomByIdx(mol, indice_tuple, supH = False))
print("\n")
print("\n")
print("-"*100)
print("\n")
Explanation: 5. Visualize mismatches by force type
Chose a force type and then the molecules for each will be displayed
End of explanation
# loop through force types
for force_type, test_mol_dict in test_dict.items():
# loop through molecules in each force type
for mol_idx, test_tuple_dict in test_mol_dict.items():
# loop through all atom indice tuples in this molecule
for indice_tuple, test_info in test_tuple_dict.items():
# compare pid assignment
test_pid = test_info['pid']
test_smirks = test_info['smirks']
# Check for 'R'
if 'R' in test_smirks:
print("Found 'R' in %s (%s)" % )
Explanation: Extra check for R
Since this notebook was made explicitly for the change from 'R' to 'x' I want to make sure that all of the 'R's were replaced
End of explanation |
8,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
IA369Z - Reproducibility in Computational Research.
Description of the code for the devices and the data collection
Client Device Code
ESP8266 running a program in the Lua language.
Step1: Local server
Step2: Export the database from the dashboard for the IoT device
CSV file
-- Campainha IoT - LHC - v1.1
-- ESP Inicializa pinos, Configura e Conecta no Wifi, Cria conexão TCP
-- e na resposta de um "Tocou" coloca o ESP em modo DeepSleep para economizar bateria.
-- Se nenhuma resposta for recebida em 15 segundos coloca o ESP em DeepSleep.
led_pin = 3
status_led = gpio.LOW
ip_servidor = "192.168.1.10"
ip_campainha = "192.168.1.20"
voltagem=3333
function desliga_circuito()
print("Colocando ESP em Deep Sleep")
node.dsleep(0)
end
function read_voltage()
-- Desconecta do wifi para poder ler a voltagem de alimentação do ESP.
wifi.sta.disconnect()
voltagem = adc.readvdd33()
print("Voltagem: "..voltagem)
-- Inicializa o Wifi e conecta no servidor
print("Inicializando WiFi")
init_wifi()
end
function pisca_led()
gpio.write(led_pin, status_led)
if status_led == gpio.LOW then
status_led = gpio.HIGH
else
status_led = gpio.LOW
end
end
function init_pins()
gpio.mode(led_pin, gpio.OUTPUT)
gpio.write(led_pin, status_led)
end
function init_wifi()
wifi.setmode(wifi.STATION)
wifi.sta.config("SSID", "password")
wifi.sta.connect()
wifi.sta.setip({ip=ip_campainha,netmask="255.255.255.0",gateway="192.168.1.1"})
-- Aguarda conexão com Wifi antes de enviar o request.
function try_connect()
if (wifi.sta.status() == 5) then
tmr.stop(0)
print("Conectado, mandando request")
manda_request()
-- Se nenhuma confirmação for recebida em 15 segundos, desliga o ESP.
tmr.alarm(2,15000,0, desliga_circuito)
else
print("Conectando...")
end
end
tmr.alarm(0,1000,1, function() try_connect() end )
end
function manda_request()
tmr.alarm(1, 200, 1, pisca_led)
print("Request enviado")
-- Cria a conexão TCP
conn=net.createConnection(net.TCP,false)
-- Envia o toque da campainha e voltagem para o servidor
conn:on("connection", function(conn)
conn:send("GET /?bateria=" ..voltagem.. " HTTP/1.0\r\n\r\n")
end)
-- Se receber "Tocou" do servidor, desliga o ESP.
conn:on("receive", function(conn, data)
if data:find("Tocou") ~= nil then
desliga_circuito()
end
end)
-- Conectar no servidor
conn:connect(9999,ip_servidor)
end
print("Inicializando pinos")
init_pins()
print ("Lendo voltagem")
read_voltage()
Explanation: IA369Z - Reproducibility in Computational Research.
Description of the code for the devices and the data collection
Client Device Code
ESP8266 running a program in the Lua language.
End of explanation
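For illustration only (not part of the original project), the exchange implemented by the Lua client above can be simulated from Python; the IP, port and 'bateria' parameter come from the client code, and the server is expected to answer with "Tocou":
import requests

# Same request shape as the ESP8266: GET /?bateria=<voltage> to 192.168.1.10:9999
resp = requests.get("http://192.168.1.10:9999/", params={"bateria": 3333}, timeout=15)
print(resp.status_code, resp.text)  # expect 200 and a body containing "Tocou"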
# !/usr/bin/python2
import time
import BaseHTTPServer
import os
import random
import string
import requests
from urlparse import parse_qs, urlparse
HOST_NAME = '0.0.0.0'
PORT_NUMBER = 9999
# A variável MP3_DIR será construida tendo como base o diretório HOME do usuário + Music/Campainha
# (e.g: /home/usuario/Music/Campainha)
MP3_DIR = os.path.join(os.getenv('HOME'), 'Music', 'Campainha')
VALID_CHARS = set(string.ascii_letters + string.digits + '_.')
CHAVE_THINGSPEAK = 'XYZ11ZYX99XYZ1XX'
# Salva o arquivo de log no diretório do usuário (e.g: /home/usuário/campainha.log)
ARQUIVO_LOG = os.path.join(os.getenv('HOME'), 'campainha.log')
def filtra(mp3):
if not mp3.endswith('.mp3'):
return False
for c in mp3:
if not c in VALID_CHARS:
return False
return True
def log(msg, output_file=None):
if output_file is None:
output_file = open(ARQUIVO_LOG, 'a')
output_file.write('%s: %s\n' % (time.asctime(), msg))
output_file.flush()
class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_GET(s):
s.send_header("Content-type", "text/plain")
query = urlparse(s.path).query
if not query:
s.send_response(404)
s.end_headers()
s.wfile.write('Not found')
return
components = dict(qc.split('=') for qc in query.split('&'))
if not 'bateria' in components:
s.send_response(404)
s.end_headers()
s.wfile.write('Not found')
return
s.send_response(200)
s.end_headers()
s.wfile.write('Tocou')
s.wfile.flush()
log("Atualizando thingspeak")
r = requests.post('https://api.thingspeak.com/update',
data={'api_key': CHAVE_THINGSPEAK, 'field1': components['bateria']})
log("Thingspeak retornou: %d" % r.status_code)
log("Tocando MP3")
mp3s = [f for f in os.listdir(MP3_DIR) if filtra(f)]
mp3 = random.choice(mp3s)
os.system("mpv " + os.path.join(MP3_DIR, mp3))
if __name__ == '__main__':
server_class = BaseHTTPServer.HTTPServer
httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
log("Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER))
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
httpd.server_close()
log("Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER))
Explanation: Local server: plays the doorbell sound on the local network.
Python program
End of explanation
import numpy as np
import csv
with open('database.csv', 'rb') as csvfile:
spamreader = csv.reader(csvfile, delimiter=' ', quotechar='|')
for row in spamreader:
print ', '.join(row)
Explanation: Export the database from the dashboard for the IoT device
CSV file
End of explanation |
8,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2A.i - Unstructured data, functional programming
A table in a database is already the result of a decision about how to represent the data.
Step1: Foreword
Step2: There is a factor of 10 in execution time between the "user defined" method and the "builtin" method.
The same factor appears again between the "builtin" method and the numpy methods.
Note that mixing numpy and non-numpy objects gives very poor results.
For functional programming, we will mostly be in the "builtin" case
Step3: The following code reads several gigabytes of data, yet the maximum memory consumption of the process only grows by a few MB. Moreover, this code manipulates dictionaries that would be hard to fit into a pandas DataFrame.
Step4: For today's lab, the data we are going to use easily fits in memory, and in general, when developing code to handle large volumes of data, it is tested on volumes that fit in memory.
When handling a large volume of data, we therefore cannot store intermediate results, so we compose functions so that they produce the final result directly.
Classic case
Step5: NoSQL / JSON therefore offers an alternative to the following classic schema
Step6: mongodb (pymongo) knows no columns, only documents, whose format is analogous to a JSON object.
This makes it extremely simple: no need to declare tables, or even databases...
Step7: On the other hand, some syntaxes that are routine in SQL, here the groupby, are noticeably more complex to write in mongodb.
Step8: My takeaway
Step9: What about pandas?
Pandas expects more or less structured data. You can load a table into pandas with the following syntax
from jyquickhelper import add_notebook_menu
add_notebook_menu()
Explanation: 2A.i - Unstructured data, functional programming
A table in a database is already the result of a decision about how to represent the data.
End of explanation
import pyensae.datasource
pyensae.datasource.download_data("twitter_for_network_100000.db.zip")
import numpy as np
def my_sum(l):
res = 0
for it in l:
res += it
return res
l = list(range(100000))
a = np.arange(100000)
print("User defined method or cross-method")
%timeit my_sum(a) # user defined with numpy array
%timeit sum(a) # built-in with numpy array
%timeit np.sum(l) # numpy function with list
%timeit my_sum(l) # user definedwith list
print("Builtin function")
%timeit sum(l) # built-in with list
print("Numpy function")
%timeit np.sum(a) # numpy function
%timeit a.sum() # numpy method
Explanation: Foreword: functional programming or numpy?
toolz/cytoolz: functional programming
numpy: matrix computation
Data: twitter_for_network_100000.db.zip or twitter_for_network_100000.db.zip (xavierdupre.fr).
End of explanation
import os, psutil, gc, sys
if not sys.platform.startswith("win"):
import resource
def memory_usage_psutil():
gc.collect()
process = psutil.Process(os.getpid())
mem = process.memory_info()[0] / float(2 ** 20)
print( "Memory used : %i MB" % mem )
if not sys.platform.startswith("win"):
print( "Max memory usage : %i MB" % (resource.getrusage(resource.RUSAGE_SELF).ru_maxrss//1024) )
memory_usage_psutil()
import cytoolz as ct # import groupby, valmap, compose
import cytoolz.curried as ctc ## pipe, map, filter, get
import sqlite3
import pprint
try:
import ujson as json
except:
print("ujson not available")
import json
conn_sqlite = sqlite3.connect("twitter_for_network_100000.db")
cursor_sqlite = conn_sqlite.cursor()
Explanation: There is a factor of 10 in execution time between the "user defined" method and the "builtin" method.
The same factor appears again between the "builtin" method and the numpy methods.
Note that mixing numpy and non-numpy objects gives very poor results.
For functional programming, we will mostly be in the "builtin" case:
<table><thead><tr><th>Project</th><th>Computation</th><th>Data Structures</th></tr></thead><tbody><tr><td>Code de l'utilisateur</td><td>Python</td><td>Python</td></tr><tr><td>CyToolz</td><td>C</td><td>Python</td></tr><tr><td>Pandas/NumPy</td><td>C</td><td>C</td></tr></tbody></table>
When handling structured data of a "reasonable" size, pandas and numpy remain faster. They can nevertheless be limited on several points:
handling of more complex data, with nested lists or missing fields
they are built on the principle of loading the data into memory
End of explanation
cursor_sqlite.execute('SELECT content FROM tw_users' )
object_to_sum = ctc.pluck( "followers_count", ctc.map( json.loads, ctc.pluck( 0, cursor_sqlite ) ) )
print(sum(object_to_sum))
memory_usage_psutil()
Explanation: The following code reads several gigabytes of data, yet the maximum memory consumption of the process only grows by a few MB. Moreover, this code manipulates dictionaries that would be hard to fit into a pandas DataFrame.
End of explanation
import pprint
cursor_sqlite.execute('SELECT content FROM tw_users LIMIT 1')
user = cursor_sqlite.fetchone()[0]
print("#"*15 + " user raw json " + "#"*15)
print( user )
print("#"*15 + " user as python dict " + "#"*15)
pprint.pprint( json.loads( user ) )
cursor_sqlite.execute('SELECT content FROM tw_status LIMIT 1')
print("#"*15 + " status as python dict " + "#"*15)
pprint.pprint( json.loads( cursor_sqlite.fetchone()[0] ) )
Explanation: For today's lab, the data we are going to use easily fits in memory, and in general, when developing code to handle large volumes of data, it is tested on volumes that fit in memory.
When handling a large volume of data, we therefore cannot store intermediate results, so we compose functions so that they produce the final result directly.
Classic case:
resultat_intermediaire_1 = f( donnees )
resultat_intermediaire_2 = g( resultat_intermediaire_1 )
resultat_final = h( resultat_intermediaire_2 )
Functional programming:
resultat_final = h( g( f( donnees ) ) )
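As a small sketch with cytoolz (imported above as ct; f, g, h and donnees are just the placeholders from this example), the composition streams each record through the steps without storing intermediate results:
composed = ct.compose(h, g, f)        # equivalent to h(g(f(...)))
resultat_final = composed(donnees)

# or, reading the steps left to right:
resultat_final = ct.pipe(donnees, f, g, h)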
Structured data: SQL and NoSQL
SQL stands for Structured Query Language; tables have a fixed number of columns, and each column has a specific type.
How do we handle a variable number of objects? Most of the time with a secondary table that contains one row per element of the list.
For example, suppose we want to store the list of movies owned by a person.
<table><tbody><tr><td><h5>Table person</h5></td><td><table><thead><tr><th>Id</th><th>Name</th></tr></thead><tbody><tr><td>1</td><td>Jean</td></tr><tr><td>2</td><td>Paul</td></tr><tr><td>3</td><td>Jacques</td></tr></tbody></table></td><td><h5>Table related_items</h5></td><td><table><thead><tr><th>Id</th><th>Person_id</th><th>Value</th></tr></thead><tbody><tr><td>1</td><td>1</td><td>Star wars</td></tr><tr><td>2</td><td>1</td><td>Cyrano</td></tr><tr><td>3</td><td>1</td><td>Lord of the rings</td></tr><tr><td>4</td><td>2</td><td>Mad max</td></tr><tr><td>5</td><td>2</td><td>Dr Horrible</td></tr></tbody></table></td></tbody></table>
This system is very "structured" (as its name suggests) and can become quite heavy when the data is less well structured, with many lists and fields that may or may not be present.
This is why alternative systems, called NoSQL (for Not Only SQL), keep gaining ground.
We will look at three of them:
Sqlite3 support for Json
mongodb
PostGreSql
They are similar in the nature of the data they store: it is stored in the JSON format.
JSON? What is it?
It is the dominant format for web APIs.
For example, the data returned by Twitter (which we will be working with today) uses this format.
It stands for JavaScript Object Notation.
It is essentially based on dictionaries (key/value associations) and lists.
It is very close to Python objects (modulo false/False).
End of explanation
try:
import psycopg2
from psycopg2.extras import Json
postgre_ok = True
except ImportError:
postgre_ok = False
if postgre_ok:
db_name = 'cours_ensae'
conn_string = "host='localhost' dbname='{0}' user='python' password='kyojin'".format( db_name )
try:
conn_psql = psycopg2.connect(conn_string)
cursor_psql = conn_psql.cursor()
postgre_ok = True
except psycopg2.OperationalError:
postgre_ok = False
if postgre_ok:
conn_psql.server_version
if postgre_ok:
conn_psql.rollback()
if postgre_ok:
def get_data_sql(doc_id):
cursor_psql.execute("SELECT id, company FROM document WHERE id = %s", (doc_id,))
res_1 = cursor_psql.fetchone()
cursor_psql.execute("SELECT id FROM ticket WHERE document_id = %s ORDER BY id", (doc_id,))
res_2 = cursor_psql.fetchall()
tickets_id = [it[0] for it in res_2 ]
cursor_psql.execute("SELECT id FROM coupon WHERE ticket_id = ANY( %s ) ORDER BY id", (tickets_id,))
res_3 = cursor_psql.fetchall()
return res_1 + (res_2,) + (res_3,)
%timeit get_data_sql(10000)
get_data_sql(10000)
if postgre_ok:
def get_data_sql_join(doc_id):
cursor_psql.execute("SELECT d.id, d.company, t.id, c.id FROM document as d \
JOIN ticket as t on d.id = t.document_id \
JOIN coupon as c on t.id = c.ticket_id \
WHERE d.id = %s", (doc_id,))
return cursor_psql.fetchall()
%timeit get_data_sql_join(10000)
get_data_sql_join(10000)
if postgre_ok:
def get_data_nosql(doc_id):
cursor_psql.execute("SELECT id, company, content FROM document_nosql WHERE id = %s", (doc_id,))
return cursor_psql.fetchone()
%timeit get_data_nosql(10000)
get_data_nosql(10000)
Explanation: NoSQL / JSON therefore offers an alternative to the following classic schema:
<table><tbody><tr><td><h5>Table person</h5></td><td><table><thead><tr><th>Id</th><th>Name</th></tr></thead><tbody><tr><td>1</td><td>Jean</td></tr><tr><td>2</td><td>Paul</td></tr><tr><td>3</td><td>Jacques</td></tr></tbody></table></td><td><h5>Table related_items</h5></td><td><table><thead><tr><th>Id</th><th>Person_id</th><th>Value</th></tr></thead><tbody><tr><td>1</td><td>1</td><td>Star wars</td></tr><tr><td>2</td><td>1</td><td>Cyrano</td></tr><tr><td>3</td><td>1</td><td>Lord of the rings</td></tr><tr><td>4</td><td>2</td><td>Mad max</td></tr><tr><td>5</td><td>2</td><td>Dr Horrible</td></tr></tbody></table></td></tbody></table>
Which would become:
<table><tbody><tr><td><h5>Table person_with_items</h5></td><td><table><thead><tr><th>Id</th><th>Name</th><th>Item_list</th></tr></thead><tbody><tr><td>1</td><td>Jean</td><td>['Star wars', 'Cyrano', 'Lord of the rings']</td></tr><tr><td>2</td><td>Paul</td><td>['Mad max', 'Dr Horrible']</td></tr><tr><td>3</td><td>Jacques</td><td></td></tr></tbody></table></td></tr></tbody></table>
This last structure would truly be an example of NoSQL in its "Not Only SQL" sense: it mixes structured and unstructured data.
There are also databases with no schema at all, such as mongodb, which is described as document-oriented.
<table><tbody><tr><td><h5>Table person_with_items</h5></td><td><table><tbody><tr><td>{'Id': 1, 'Name': 'Jean', 'Item_list' : ['Star wars', 'Cyrano', 'Lord of the rings']}</td></tr><tr><td>{'Id': 2, 'Name': 'Paul', 'Item_list' : ['Mad max', 'Dr Horrible']}</td></tr><tr><td>{'Id': 3, 'Name': 'Jacques', 'Item_list' : []}</td></tr></tbody></table></td></tr></tbody></table>
Note, however, that this last structure has at least two drawbacks compared to an SQL structure with a secondary table:
you cannot access the 'Item_list' objects directly without going through the person table
the information in Item_list cannot be shared between several objects, so it may have to be stored several times
See the psycopg module.
End of explanation
mongo = False
if mongo:
import pymongo
mongo_client = pymongo.MongoClient( 'localhost', 27017 )
mongo_db = mongo_client.ensae_db
mongo_db.table_for_ensae.delete_many( {} )
mongo_db.table_for_ensae.insert_one( {'nom' : 'Martin', 'prenom' : 'Nicolas', 'grades': [20,18,7,12]} )
mongo_db.table_for_ensae.insert_one( {'nom' : 'Dupont', 'prenom' : 'Jean', 'grades': [11,5,7,12]} )
mongo_db.table_for_ensae.insert_one( {'nom' : 'Martin', 'prenom' : 'Gilles', 'grades': [10,10,10,10]} )
user = mongo_db.table_for_ensae.find_one( {'nom' : 'Dupont'} )
user_list = mongo_db.table_for_ensae.find( {} )
_ = list(map( pprint.pprint, user_list ))
Explanation: mongodb (pymongo) knows no columns, only documents, whose format is analogous to a JSON object.
This makes it extremely simple: no need to declare tables, or even databases...
End of explanation
if mongo:
result = mongo_db.table_for_ensae.group(['nom'],
None,
{'list': []}, # initial
'function(obj, prev) {prev.list.push(obj)}')
pprint.pprint( result )
Explanation: On the other hand, some syntaxes that are routine in SQL, here the groupby, are noticeably more complex to write in mongodb.
End of explanation
cursor_sqlite.execute("SELECT content FROM tw_users LIMIT 10000" )
with open("tw_users.json", 'w') as f:
for it_user in cursor_sqlite:
f.write(it_user[0])
f.write("\n")
with open("tw_users.json", 'r') as f:
nb_total_followers = 0
for it_user in f:
nb_total_followers += json.loads( it_user )["followers_count"]
print( nb_total_followers )
Explanation: My takeaway:
mongodb, despite its ease of use, can be very resource-hungry (disk and/or memory consumption 15 times higher than PostGreSql for the same data). I would advise against it for a personal application.
Sqlite3 is more rudimentary, but having the whole database contained in a single file is very convenient for some uses (deployment at a client's site or for students)
PostGreSql seems to me the most robust choice for a personal server.
What about flat files?
You can perfectly well use flat files.
They are very simple to use.
Their performance is potentially very close for a full read.
End of explanation
import pandas as pd
df = pd.read_sql( "SELECT id, screen_name from tw_users", conn_sqlite )
print( df.head() )
print( df.shape )
Explanation: What about pandas?
Pandas expects more or less structured data. You can load a table into pandas with the following syntax:
End of explanation |
8,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
Step1: Well, we accidentally misclassified one item, putting a thing 2 into the thing 3 pile. 98.7% accuracy seems pretty good to me, though.
2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
Step2: We still only missed one, making the exact same mistake. There must be a heavily overlapping area of these two species that confuses the model. Either way, we got the same number of errors with twice the training data, which seems pretty bad. I wonder how much we can cut it before it makes a difference in accuracy?
Step3: Weird. Small dataset, I guess?
3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here
Step4: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
Step5: Going to be honest, I barely have any idea what I'm looking at here. I believe we are trying to predict malignancy of cancer cells, but my understanding of the process to get here is such that I really just copied what we did in class and applied it here. I couldn't say that I used all 10 of the variables described by the dataset or that I just applied the diagnosis to itself as a predictor. This is about as clear as mud to me currently. Maybe a visual tree will help. | Python Code:
from sklearn import datasets
from sklearn import tree
from sklearn.cross_validation import train_test_split
from sklearn import metrics
import numpy as np
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confussion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print("Accuracy:{0:.3f}".format(metrics.accuracy_score(y, y_pred)),"\n")
if show_classification_report:
print("Classification report")
print(metrics.classification_report(y,y_pred),"\n")
if show_confussion_matrix:
print("Confusion matrix")
print(metrics.confusion_matrix(y,y_pred),"\n")
iris = datasets.load_iris()
x = iris.data[:,2:]
y = iris.target
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.50,train_size=0.50)
measure_performance(x_test,y_test,dt)
measure_performance(x_train,y_train,dt)
Explanation: We covered a lot of information today and I'd like you to practice developing classification trees on your own. For each exercise, work through the problem, determine the result, and provide the requested interpretation in comments along with the code. The point is to build classifiers, not necessarily good classifiers (that will hopefully come later)
1. Load the iris dataset and create a holdout set that is 50% of the data (50% in training and 50% in test). Output the results (don't worry about creating the tree visual unless you'd like to) and discuss them briefly (are they good or not?)
End of explanation
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
measure_performance(x_train,y_train,dt)
Explanation: Well, we accidentally misclassified one item, putting a thing 2 into the thing 3 pile. 98.7% accuracy seems pretty good to me, though.
2. Redo the model with a 75% - 25% training/test split and compare the results. Are they better or worse than before? Discuss why this may be.
End of explanation
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.95,train_size=0.05)
measure_performance(x_test,y_test, dt)
Explanation: We still only missed one, making the exact same mistake. There must be a heavily overlapping area of these two species that confuses the model. Either way, we got the same number of errors with twice the training data, which seems pretty bad. I wonder how much we can cut it before it makes a difference in accuracy?
End of explanation
bc = datasets.load_breast_cancer()
x = bc.data[:,2:]
y = bc.target
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
dt
Explanation: Weird. Small dataset, I guess?
3. Load the breast cancer dataset (datasets.load_breast_cancer()) and perform basic exploratory analysis. What attributes do we have? What are we trying to predict?
For context of the data, see the documentation here: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29
Attribute Information:
1) ID number
2) Diagnosis (M = malignant, B = benign)
3-32)
Ten real-valued features are computed for each cell nucleus:
a) radius (mean of distances from center to points on the perimeter)
b) texture (standard deviation of gray-scale values)
c) perimeter
d) area
e) smoothness (local variation in radius lengths)
f) compactness (perimeter^2 / area - 1.0)
g) concavity (severity of concave portions of the contour)
h) concave points (number of concave portions of the contour)
i) symmetry
j) fractal dimension ("coastline approximation" - 1)
End of explanation
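A minimal exploratory sketch (not part of the original answer) that surfaces the attributes and the prediction target described above:
import numpy as np
from sklearn import datasets

bc = datasets.load_breast_cancer()
print(bc.feature_names)        # 30 real-valued cell-nucleus measurements
print(bc.target_names)         # ['malignant' 'benign'] -- this is what we predict
print(bc.data.shape)           # (569, 30)
print(np.bincount(bc.target))  # class balance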
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.50,train_size=0.50)
measure_performance(x_train,y_train,dt)
measure_performance(x_test,y_test,dt)
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.25,train_size=0.75)
measure_performance(x_train,y_train,dt)
Explanation: 4. Using the breast cancer data, create a classifier to predict the diagnosis (malignant vs. benign). Perform the above hold out evaluation (50-50 and 75-25) and discuss the results.
End of explanation
from sklearn import tree
from sklearn.externals.six import StringIO
import pydotplus
dt = tree.DecisionTreeClassifier()
dt = dt.fit(x,y)
with open("bc.dot", 'w') as f:
f = tree.export_graphviz(dt, out_file=f)
import os
os.unlink('bc.dot')
dot_data = StringIO()
tree.export_graphviz(dt, out_file=dot_data) #brew install graphviz
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("bc.pdf")
from IPython.display import IFrame
IFrame("bc.pdf", width=800, height=800)
Explanation: Going to be honest, I barely have any idea what I'm looking at here. I believe we are trying to predict malignancy of cancer cells, but my understanding of the process to get here is such that I really just copied what we did in class and applied it here. I couldn't say that I used all 10 of the variables described by the dataset or that I just applied the diagnosis to itself as a predictor. This is about as clear as mud to me currently. Maybe a visual tree will help.
End of explanation |
8,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CHAPTER 3
Logging
As you are exploring and, later, using bestPy you might want to keep track (in a discreet way) of what happens under the hood. For that purpose, a convenient logging facility is built into bestPy that keeps you up to date.
Preliminaries
We only need this because the examples folder is a subdirectory of the bestPy package.
Step1: Import
We are not going to actually recommend anything in the present chapter. We just want to take a closer look at the warnings issued when reading transaction data from CSV file in the last chapter 2. To recreate these warnings, all we need to import is Transactions from bestPy.datastructures.
Step2: Read transaction data
Step3: There they are again! While it is maybe helpful to have the warnings pop up like this in a Jupyter notebook, it is not clear how to benefit from this feature when writing a standalone python program or service. Also, having a lot of them might mess up your tidy notebook layout.
In fact, these messages aren't intended to pop up in the Jupyter notebook at all! Rather, they are intended to be written to a logfile together with other information (as well as some warnings and errors while you are still experimenting with bestPy). We will make it best practice, then, to always enable bestPy's logging facilities before doing anything else. The logging function is conveniently accessible through the top-level package.
from bestPy import write_log_to
Tab completion reveals that the write_log_to() function has two arguments. The first is the path to and name of the logfile to be written and the second is the logging level, which can have the following (integer) values | Python Code:
import sys
sys.path.append('../..')
Explanation: CHAPTER 3
Logging
As you are exploring and, later, using bestPy you might want to keep track (in a discreet way) of what happens under the hood. For that purpose, a convenient logging facility is built into bestPy that keeps you up to date.
Preliminaries
We only need this because the examples folder is a subdirectory of the bestPy package.
End of explanation
from bestPy.datastructures import Transactions
Explanation: Import
We are not going to actually recommend anything in the present chapter. We just want to take a closer look at the warnings issued when reading transaction data from CSV file in the last chapter 2. To recreate these warnings, all we need to import is Transactions from bestPy.datastructures.
End of explanation
file = 'examples_data.csv'
data = Transactions.from_csv(file)
Explanation: Read transaction data
End of explanation
import sys
sys.path.append('../..')
from bestPy import write_log_to
from bestPy.datastructures import Transactions
logfile = 'logfile.txt'
write_log_to(logfile, log_level=20)
file = 'examples_data.csv'
data = Transactions.from_csv(file)
Explanation: There they are again! While it is maybe helpful to have the warnings pop up like this in a Jupyter notebook, it is not clear how to benefit from this feature when writing a standalone python program or service. Also, having a lot of them might mess up your tidy notebook layout.
In fact, these messages aren't intended to pop up in the Jupyter notebook at all! Rather, they are intended to be written to a logfile together with other information (as well as some warnings and errors while you are still experimenting with bestPy). We will make it best practice, then, to always enable bestPy's logging facilities before doing anything else. The logging function is conveniently accessible through the top-level package.
from bestPy import write_log_to
Tab completion reveals that the write_log_to() function has two arguments. The first is the path to and name of the logfile to be written and the second is the logging level, which can have the following (integer) values:
+ 10 ... debug
+ 20 ... info
+ 30 ... warning
+ 40 ... error
+ 50 ... critical
Any event with a logging level lower than the one specified will not appear in the logfile. You might want to start with 20 for info to learn which events are logged and then switch to 30 for warning later.
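For example (a minimal sketch using only the call shown above), raising the threshold to 30 keeps the CSV warnings in the logfile but drops the info-level events:
write_log_to('logfile.txt', log_level=30)  # warnings (30) still logged, info (20) suppressed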
To see how logging works in practice, you will first need to restart the Kernel of this Jupyter notebook (Menu: Kernel --> Restart). Then, we
+ make again sure we have bestPy in our PYTHONPATH
+ do our imports again
+ intialize logging
+ read transaction data again
End of explanation |
8,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EECS 445
Step1: Python Basics
Data Types
Containers
Functions
Classes
Basic data types
Numbers
Integers and floats work as you would expect from other languages
Step2: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
Step3: Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.)
Step4: Now we let's look at the operations
Step5: Strings
Step6: You can find a list of all string methods in the document.
Containers
Python includes several built-in container types
Step7: Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing
Step8: As usual, you can find all the gory details about lists in the documentation.
Loops
You can loop over the elements of a list like this
Step9: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step10: List comprehensions
Step11: You can make this code simpler using a list comprehension
Step12: List comprehensions can also contain conditions
Step13: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in C++. You can use it like this
Step14: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary
Step15: If you want access to keys and their corresponding values, use the items method
Step16: Dictionary comprehensions
Step17: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following
Step18: Loops
Step19: Set comprehensions
Step20: Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example
Step21: Functions
Python functions are defined using the def keyword. For example
Step22: We will often define functions to take optional keyword arguments, like this
Step23: Classes
The syntax for defining classes in Python is straightforward
Step24: Modules
import modules
numpy
matplotlib
scikit-learn
Step25: NumPy
NumPy arrays, dtype, and shape
Reshape and Update In-Place
Combine Arrays
Array Math
Inner Product
Matrixes
To use Numpy, we first need to import the numpy package
Step26: Numpy also provides many functions to create arrays
Step27: Array indexing
Numpy offers several ways to index into arrays.
Slicing
Step28: Reshape and Update In-Place
Step29: Combine Arrays
Step30: Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module
Step31: Broadcasting
Arrays with different dimensions can also perform above operations.
Step32: We can also get statistical results directly using sum, mean and std methods.
Step33: Inner Product
$$
(a_1, a_2, a_3, ..., a_n) \cdot (b_1, b_2, b_3, ..., b_n)^T = \sum_{i = 1}^{n}{a_ib_i}
$$
We use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step34: Matrix
Instead of arrays, we can also use matrix to simplify the code.
Step35: You can find more in the document.
Matplotlib
Plotting Lines
Plotting Multiple Lines
Scatter Plots
Legend, Titles, etc.
Subplots
Histogram
Step36: To make pylab work inside ipython
Step37: Subplots
You can plot different things in the same figure using the subplot function. Here is an example
Step38: Scikit-learn
This is a common machine learning package with lots of algorithms, you can find detailed usage here.
Here is an example of KMeans cluster algorithm | Python Code:
print ('Hello Python!')
Explanation: EECS 445: Python Tutorial
Presented by: Zhao Fu
September 12, 2016
References:
1. https://docs.python.org/3/tutorial/
2. https://docs.python.org/3/library/
3. http://cs231n.github.io/python-numpy-tutorial/
4. https://github.com/donnemartin/data-science-ipython-notebooks
Why Python?
Easy to learn
High-level data structures
Elegant syntax
Lots of useful packages for machine learning and data science
Install
https://www.continuum.io/downloads
Now we have python3 installed
numpy
scipy
scikit-learn
matplotlib
...
To install packages:
bash
conda install <PACKAGE_NAME>
bash
pip install <PACKAGE_NAME>
Let's run our slides first!
jupyter notebook
Want more fancy stuff? Just install RISE!
conda install -c damianavila82 rise
Play with your toys!
Here is an option to play with if you can't set up jupyter on your own computer: https://tmpnb.org.
End of explanation
x = 3
print (x, type(x))
print (x + 3) # Addition;
print (x - x) # Subtraction;
print (x * 2) # Multiplication;
print (x ** 3) # Exponentiation;
print (x)
x += 1
print (x)
x = x + 1
print (x) # Prints "4"
x *= 2
print (x) # Prints "8"
y = 2.5
print (type(y)) # Prints "<type 'float'>"
print (y, y + 1, y * 2, y ** 2) # Prints "2.5 3.5 5.0 6.25"
Explanation: Python Basics
Data Types
Containers
Functions
Classes
Basic data types
Numbers
Integers and floats work as you would expect from other languages:
End of explanation
print (17 / 3) # return float
print (17 // 3) # return integer
print (17 % 3) # Modulo operation
Explanation: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
End of explanation
t, f = True, False # Note the Captilzation!
print (type(t)) # Prints "<type 'bool'>"
Explanation: Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):
End of explanation
print (t and f) # Logical AND;
print (t or f) # Logical OR;
print (not t) # Logical NOT;
print (t != f) # Logical XOR;
Explanation: Now we let's look at the operations:
End of explanation
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter.
print (hello, len(hello))
hw = hello + ' ' + world # String concatenation
print (hw) # prints "hello world"
# sprintf style string formatting
hw12 = '%s %s %d' % (hello, world, 12)
# Recommended formatting style for Py3.0+ (https://pyformat.info)
new_py3_hw12 = '{:>15} {:1.1f} {}'.format('hello' + ' ' + 'world', 1, 2)
print (hw12)
print (new_py3_hw12)
s = "hello"
print (s.capitalize()) # Capitalize a string; prints "Hello"
print (s.upper()) # Convert a string to uppercase; prints "HELLO"
print (s.rjust(7)) # Right-justify a string, padding with spaces; prints " hello"
print (s.center(7)) # Center a string, padding with spaces; prints " hello "
print (s.replace('ll', '(ell)')) # Replace all instances of one substring with another;
# prints "he(ell)(ell)o"
print (' world '.strip()) # Strip leading and trailing whitespace; prints "world"
"You can type ' inside"
'You can type \' inside'
Explanation: Strings
End of explanation
x = [1, 2, 3, 'a', 'b', 'c'] + ['hello'] # list append with the + operator
print (x, x[2]) # access by index
print (x[0]) # index can be negative
x.append('element')
print (x)
print (x.pop(), x)
Explanation: You can find a list of all string methods in the document.
Containers
Python includes several built-in container types: lists, dictionaries, sets, and tuples.
Lists
A list is the Python equivalent of an array, but is resizeable and can contain elements of different types:
End of explanation
x = [1, 2, 3, 4, 5]
print (x[2:])
print (x[:3])
print (x[2:5])
x[0:3] = ['a', 'b', 'c'] # modify elements in list
print (x)
y = x[:] # copy list
y[2] = 100 # x won't change
print ('y:', y)
print ('x:', x)
Explanation: Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:
End of explanation
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print (animal)
Explanation: As usual, you can find all the gory details about lists in the documentation.
Loops
You can loop over the elements of a list like this:
End of explanation
animals = ['cat', 'dog', 'monkey']
print (enumerate(animals))
for idx, animal in enumerate(animals):
print ('#%d: %s' % (idx + 1, animal))
Explanation: If you want access to the index of each element within the body of a loop, use the built-in enumerate function:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
print (squares)
Explanation: List comprehensions:
When programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print (squares)
Explanation: You can make this code simpler using a list comprehension:
End of explanation
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
even_squares_alt = [i ** 2 for i in filter(lambda k: k % 2 == 0 , nums)]
print (even_squares_alt)
nums = [0, 1, 2, 3, 4]
even_squares_or_one = [x ** 2 if x % 2 == 0 else 1 for x in nums]
print (even_squares_or_one)
Explanation: List comprehensions can also contain conditions:
End of explanation
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print (d['cat']) # Get an entry from a dictionary; prints "cute"
print ('cute' in d) # Check if a dictionary has a given key; prints "True"
d['fish'] = 'wet' # Set an entry in a dictionary
print (d['fish']) # Prints "wet"
print (d['monkey']) # KeyError: 'monkey' not a key of d
print (d.get('monkey', 'N/A')) # Get an element with a default; prints "N/A"
print (d.get('fish', 'N/A')) # Get an element with a default; prints "wet"
del d['fish'] # Remove an element from a dictionary
print (d.get('fish', 'N/A')) # "fish" is no longer a key; prints "N/A"
Explanation: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in C++. You can use it like this:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal in d:
legs = d[animal]
print ('A %s has %d legs' % (animal, legs))
Explanation: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
print ('A %s has %d legs' % (animal, legs))
Explanation: If you want access to keys and their corresponding values, use the items method:
End of explanation
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print (even_num_to_square)
# Make a dictionary from two lists using zip
l1 = ['EECS445', 'EECS545']
l2 = ['Undergraduate ML', 'Graduate ML']
d = dict(zip(l1, l2))
print (d)
# Unroll dictionary into two tuples
k, v = list(d.keys()), list(d.values())
print (d.items())
print (k, v)
Explanation: Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:
End of explanation
animals = {'cat', 'dog'}
print ('cat' in animals) # Check if an element is in a set; prints "True"
print ('fish' in animals) # prints "False"
animals.add('fish') # Add an element to a set
print ('fish' in animals)
print (len(animals)) # Number of elements in a set;
animals.add('cat') # Adding an element that is already in the set does nothing
print (len(animals))
animals.remove('cat') # Remove an element from a set
print (len(animals))
Explanation: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following:
End of explanation
animals = {'dog', 'fish', 'cat'}
for idx, animal in enumerate(animals):
print ('#%d: %s' % (idx + 1, animal))
# Prints "#1: fish", "#2: dog", "#3: cat"
Explanation: Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:
End of explanation
from math import sqrt
print ({int(sqrt(x)) for x in range(30)})
Explanation: Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:
End of explanation
d = {(x, x + 1): x for x in range(0, 10, 2)} # Create a dictionary with tuple keys, note that range can use step args.
t = (0, 1) # Create a tuple
print (type(t))
print (d[t])
print (d[(2, 3)])
t[0] = 1
Explanation: Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:
End of explanation
def get_GPA(x):
if x >= 90:
return "A"
elif x >= 75:
return "B"
elif x >=60:
return "C"
else:
return "F"
for x in [59, 70, 91]:
print (get_GPA(x))
Explanation: Functions
Python functions are defined using the def keyword. For example:
End of explanation
def fib(n = 10):
a = 0
b = 1
while b < n:
print(b, end=',')
a, b = b, a + b
fib()
Explanation: We will often define functions to take optional keyword arguments, like this:
End of explanation
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print ('HELLO, %s!' % self.name.upper())
else:
print ('Hello, %s' % self.name)
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method; prints "Hello, Fred"
g.greet(loud=True) # Call an instance method; prints "HELLO, FRED!"
Explanation: Classes
The syntax for defining classes in Python is straightforward:
End of explanation
from modules import fibo
from modules.fibo import fib2
print (fib2(10))
print (fibo.fib2(10))
Explanation: Modules
import modules
numpy
matplotlib
scikit-learn
End of explanation
import numpy as np
a = np.array([1, 2, 3])
print(a)
print(a.shape)
print(a.dtype)
b = np.array([[0, 2, 4], [1, 3, 5]], dtype = np.float64)
print(b)
print(b.shape)
print(b.dtype)
Explanation: NumPy
NumPy arrays, dtype, and shape
Reshape and Update In-Place
Combine Arrays
Array Math
Inner Product
Matrixes
To use Numpy, we first need to import the numpy package:
End of explanation
np.zeros(5) # Create an array of all zeros
np.ones(shape=(3, 4), dtype = np.int32) # Create an array of all ones
np.full((2,2), 7, dtype = np.int32) # Create a constant array
np.eye(2) # Create a 2x2 identity matrix
np.random.random((2,2)) # Create an array filled with random values
Explanation: Numpy also provides many functions to create arrays:
End of explanation
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print (b)
print (a[0, 1])
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print (a[0, 1])
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print (a)
print (row_r1, row_r1.shape)
print (row_r2, row_r2.shape)
print (row_r3, row_r3.shape)
Explanation: Array indexing
Numpy offers several ways to index into arrays.
Slicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:
End of explanation
e = np.arange(12)
print(e)
# f is a view of contents of e
f = e.reshape(3, 4)
print(f)
# Set values of e from index 5 onwards to 0
e[7:] = 0
print (e)
# f is also updated
print (f)
# We can get transpose of array by T attribute
print (f.T)
Explanation: Reshape and Update In-Place
End of explanation
a = np.array([1, 2, 3])
print(np.concatenate([a, a, a]))
b = np.array([[1, 2, 3], [4, 5, 6]])
d = b / 2.0
# Use broadcasting when needed to do this automatically
print (np.vstack([a, b, d]))
# In machine learning, useful to enrich or
# add new/concatenate features with hstack
np.hstack([b, d])
print (np.concatenate([b, d], axis = 0))
Explanation: Combine Arrays
End of explanation
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum; both produce the array
print (x + y)
print (np.add(x, y))
# Elementwise difference; both produce the array
print (x - y)
print (np.subtract(x, y))
# Elementwise product; both produce the array
print (x * y)
print (np.multiply(x, y))
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print (x / y)
print (np.divide(x, y))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print (np.sqrt(x))
Explanation: Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:
End of explanation
# Multiply single number
print (x * 0.5)
a = np.array([1, 2, 3])
b = np.array([[1, 2, 3], [4, 5, 6]])
c = a + b
print(a.reshape(1, 3).shape, b.shape, c.shape)
print(c)
a.reshape((1, 1, 3)) + c.reshape((2, 1, 3))
Explanation: Broadcasting
Arrays with different dimensions can also perform above operations.
End of explanation
print (d)
print (d.sum())
print (d.sum(axis = 0))
print (d.mean())
print (d.mean(axis = 1))
print (d.std())
print (d.std(axis = 0))
Explanation: We can also get statistical results directly using sum, mean and std methods.
End of explanation
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print (v.dot(w))
print (np.dot(v, w))
# Matrix / vector product; both produce the rank 1 array [29 67]
print (x.dot(v))
print (np.dot(x, v))
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print (x.dot(y))
print (np.dot(x, y))
Explanation: Inner Product
$$
(a_1, a_2, a_3, ..., a_n) \cdot (b_1, b_2, b_3, ..., b_n)^T = \sum_{i = 1}^{n}{a_ib_i}
$$
We use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:
End of explanation
x = np.matrix('1, 2, 3; 4, 5, 6')
y = np.matrix(np.ones((3, 4)))
print(x.shape)
print(y.shape)
print(x * y)
print(y.T * x.T)
Explanation: Matrix
Instead of arrays, we can also use matrix to simplify the code.
End of explanation
import pylab as plt
Explanation: You can find more in the document.
Matplotlib
Plotting Lines
Plotting Multiple Lines
Scatter Plots
Legend, Titles, etc.
Subplots
Histogram
End of explanation
%matplotlib inline
plt.plot([1,2,3,4], 'o-')
plt.ylabel('some numbers')
plt.show()
x = np.linspace(0,1,100);
y1 = x ** 2;
y2 = np.sin(x);
plt.plot(x, y1, 'r-', label="parabola");
plt.plot(x, y2, 'g-', label="sine");
plt.legend();
plt.xlabel("x axis");
plt.show()
# Create sample data, add some noise
x = np.random.uniform(1, 100, 1000)
y = np.log(x) + np.random.normal(0, .3, 1000)
plt.scatter(x, y)
plt.show()
Explanation: To make pylab work inside ipython:
End of explanation
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# First plot
plt.subplot(2, 1, 1)
plt.plot(x, y_sin)
plt.title('Sine')
# Second plot
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
# the histogram of the data
n, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
plt.show()
Explanation: Subplots
You can plot different things in the same figure using the subplot function. Here is an example:
End of explanation
from sklearn.cluster import KMeans
mu1 = [5, 5]
mu2 = [0, 0]
cov1 = [[1, 0], [0, 1]]
cov2 = [[2, 1], [1, 3]]
x1 = np.random.multivariate_normal(mu1, cov1, 1000)
x2 = np.random.multivariate_normal(mu2, cov2, 1000)
print (x1.shape)
print (x2.shape)
plt.plot(x1[:, 0], x1[:, 1], 'r.')
plt.plot(x2[:, 0], x2[:, 1], 'b.')
plt.show()
x = np.vstack([x1, x2])
print (x.shape)
plt.plot(x[:, 0], x[:, 1], 'b.')
plt.show()
y_pred = KMeans(n_clusters=2).fit_predict(x)
x_pred1 = x[y_pred == 0, :]
x_pred2 = x[y_pred == 1, :]
print (x_pred1.shape)
print (x_pred2.shape)
plt.plot(x_pred1[:, 0], x_pred1[:, 1], 'b.')
plt.plot(x_pred2[:, 0], x_pred2[:, 1], 'r.')
plt.show()
Explanation: Scikit-learn
This is a common machine learning package with lots of algorithms, you can find detailed usage here.
Here is an example of KMeans cluster algorithm:
End of explanation |
8,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Apache Spark Streaming
http
Step1: Python Example
Step2: 2
You need this
Step3: Python Example | Python Code:
import org.apache.spark._
import org.apache.spark.streaming._
val conf = new SparkConf().setMaster("local[*]").setAppName("Example")
val ssc = new StreamingContext(conf, Seconds(1))
Explanation: Apache Spark Streaming
http://spark.apache.org/streaming/
Documentation URL:
http://spark.apache.org/docs/latest/streaming-programming-guide.html
Python reference:
http://spark.apache.org/docs/latest/api/python/index.html
Scala reference:
http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.package
1
You need this : a StreamingContext object to do any streaming task, similar to a SparkContext.
Scala Example :
End of explanation
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
sc = SparkContext("local[*]", "Example") # Created a SparkContext object and is being passed to the StreamingContext
ssc = StreamingContext(sc, batchDuration=1) # batchDuration accepts value in seconds
Explanation: Python Example:
End of explanation
// Create a DStream that will connect to hostname:port, like localhost:9999
val lines = ssc.socketTextStream("localhost", 9999)
Explanation: 2
You need this : a DStream object, its a sequence of RDDs.
http://spark.apache.org/docs/latest/streaming-programming-guide.html#discretized-streams-dstreams
Input Sources: The following examples, use a TCP Socket as an input sources. We can group the input types as,
Basic sources : Sockets, File systems
http://spark.apache.org/docs/latest/streaming-programming-guide.html#basic-sources
Advanced sources : Kafka, Flume, etc
http://spark.apache.org/docs/latest/streaming-programming-guide.html#advanced-sources
Scala Example :
End of explanation
# Create a DStream that will connect to hostname:port, like localhost:9999
lines = ssc.socketTextStream("localhost", 9999)
Explanation: Python Example:
End of explanation |
8,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A brief note about pseudo-random numbers
When carrying out simulations, it is typical to use random number generators. Most computers can not generate true random numbers -- instead we use algorithms that approximate the generation of random numbers (pseudo-random number generators). One important difference between a true random number generator and a pseudo-random number generator is that a series of pseudo-random numbers can be regenerated if you know the "seed" value that initialized the algorithm. We can specifically set this seed value, so that we can guarantee that two different people evaluating this notebook get the same results, even though we're using (pseudo)random numbers in our simulation.
Step1: Generating a population to sample from
We'll start by simulating our "population of interest" -- i.e. the population we want to make inferences about. We'll assume that our variable of interest (e.g. circulating stress hormone levels) is normally distributed with a mean of 10 nM and a standard deviation of 1 nM.
Step2: Take a random sample of the population of interest
We'll use the np.random.choice function to take a sample from our population of interest.
Step3: Take a second random sample of size 25
Step4: Compare the first and second samples
Step5: ## Generate a large number of samples of size 25
Every time we take a random sample from our population of interest we'll get a different estimate of the mean and standard deviation (or whatever other statistics we're interested in). To explore how well random samples of size 25 perform, generally, in terms of estimating the mean and standard deviation of the population of interest we need a large number of such samples.
It's tedious to take one sample at a time, so we'll generate 100 samples of size 25, and calculate the mean and standard deviation for each of those samples (storing the means and standard deviations in lists).
Step6: Relative Frequency Histogram
A relative frequency histogram is like a frequency histogram, except the bin heights are given in fractions of the total sample size (relative frequency) rather than absolute frequency. This is equivalent to adding the constraint that the total height of all the bars in the histogram will add to 1.0.
Step7: Density histogram
If instead of constraining the total height of the bars, we constrain the total area of the bars to sum to one, we call this a density histogram. When comparing histograms based on different numbers of samples, with different bin width, etc. you should usually use the density histogram.
The argument normed=True to the pyplot.hist function will this function calculate a density histogram instead of the default frequency histogram.
Step8: How does the spread of our estimates of the mean change as sample size increases?
What happens as we increase the size of our samples? Let's draw 100 random samples of size 50, 100, and 200 observations to compare.
Step9: Standard Error of the Mean
We see from the graph above that our estimates of the mean cluster more tightly about the true mean as our sample size increases. Let's quantify that by calculating the standard deviation of our mean estimates as a function of sample size.
The standard deviation of the sampling distribution of a statistic of interest is called the "Standard Error" of that statistic. Here, through simulation, we are estimating the "Standard Error of the Mean".
Step10: You can show mathematically for normally distributed data, that the expected Standard Error of the Mean as a function of sample size is
Step11: Standard Errors of the Standard Deviation
Above we explored how the spread in our estimates of the mean changed with sample size. We can similarly explore how our estimates of the standard deviation of the population change as we vary our sample size.
Step12: You can show mathematically for normally distributed data, that the expected Standard Error of the Standard Deviation is approximately
$$
\mbox{Standard Error of Standard Deviation} \approx \frac{\sigma}{\sqrt{2(n-1)}}
$$
where $\sigma$ is the population standard deviation, and $n$ is the sample size.
Let's compare that theoretical expectation to our simulated estimates. | Python Code:
# set the seed for the pseudo-random number generator
# the seed is any 32 bit integer
# different seeds will generate different results for the
# simulations that follow
np.random.seed(20160208)
Explanation: A brief note about pseudo-random numbers
When carrying out simulations, it is typical to use random number generators. Most computers can not generate true random numbers -- instead we use algorithms that approximate the generation of random numbers (pseudo-random number generators). One important difference between a true random number generator and a pseudo-random number generator is that a series of pseudo-random numbers can be regenerated if you know the "seed" value that initialized the algorithm. We can specifically set this seed value, so that we can guarantee that two different people evaluating this notebook get the same results, even though we're using (pseudo)random numbers in our simulation.
End of explanation
popn = np.random.normal(loc=10, scale=1, size=6500)
plt.hist(popn,bins=50)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
print("Mean glucorticoid concentration:", np.mean(popn))
print("Standard deviation of glucocorticoid concentration:", np.std(popn))
Explanation: Generating a population to sample from
We'll start by simulating our "population of interest" -- i.e. the population we want to make inferences about. We'll assume that our variable of interest (e.g. circulating stress hormone levels) is normally distributed with a mean of 10 nM and a standard deviation of 1 nM.
End of explanation
sample1 = np.random.choice(popn, size=25)
plt.hist(sample1)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
np.mean(sample1), np.std(sample1,ddof=1)
Explanation: Take a random sample of the population of interest
We'll use the np.random.choice function to take a sample from our population of interest.
End of explanation
sample2 = np.random.choice(popn, size=25)
np.mean(sample2), np.std(sample2,ddof=1)
Explanation: Take a second random sample of size 25
End of explanation
plt.hist(sample1)
plt.hist(sample2,alpha=0.5)
plt.xlabel("Glucorticoid concentration (nM)")
plt.ylabel("Frequency")
pass
Explanation: Compare the first and second samples
End of explanation
means25 = []
std25 = []
for i in range(100):
s = np.random.choice(popn, size=25)
means25.append(np.mean(s))
std25.append(np.std(s,ddof=1))
plt.hist(means25,bins=15)
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Frequency")
plt.title("Distribution of estimates of the\n mean glucocorticoid concentration\n for 100 samples of size 25")
plt.vlines(np.mean(popn), 0, 18, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
Explanation: ## Generate a large number of samples of size 25
Every time we take a random sample from our population of interest we'll get a different estimate of the mean and standard deviation (or whatever other statistics we're interested in). To explore how well random samples of size 25 perform, generally, in terms of estimating the mean and standard deviation of the population of interest we need a large number of such samples.
It's tedious to take one sample at a time, so we'll generate 100 samples of size 25, and calculate the mean and standard deviation for each of those samples (storing the means and standard deviations in lists).
End of explanation
# Relative Frequency Histogram
plt.hist(means25, bins=15, weights=np.ones_like(means25) * (1.0/len(means25)))
plt.xlabel("mean glucocorticoid concentration")
plt.ylabel("Relative Frequency")
plt.vlines(np.mean(popn), 0, 0.20, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
Explanation: Relative Frequency Histogram
A relative frequency histogram is like a frequency histogram, except the bin heights are given in fractions of the total sample size (relative frequency) rather than absolute frequency. This is equivalent to adding the constraint that the total height of all the bars in the histogram will add to 1.0.
End of explanation
plt.hist(means25,bins=15,normed=True)
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Density")
plt.vlines(np.mean(popn), 0, 2.5, linestyle='dashed', color='red',label="True Mean")
plt.legend(loc="upper right")
pass
Explanation: Density histogram
If instead of constraining the total height of the bars, we constrain the total area of the bars to sum to one, we call this a density histogram. When comparing histograms based on different numbers of samples, with different bin width, etc. you should usually use the density histogram.
The argument normed=True to the pyplot.hist function will this function calculate a density histogram instead of the default frequency histogram.
End of explanation
means50 = []
std50 = []
for i in range(100):
s = np.random.choice(popn, size=50)
means50.append(np.mean(s))
std50.append(np.std(s,ddof=1))
means100 = []
std100 = []
for i in range(100):
s = np.random.choice(popn, size=100)
means100.append(np.mean(s))
std100.append(np.std(s,ddof=1))
means200 = []
std200 = []
for i in range(100):
s = np.random.choice(popn, size=200)
means200.append(np.mean(s))
std200.append(np.std(s,ddof=1))
# the label arguments get used when we create a legend
plt.hist(means25, normed=True, alpha=0.75, histtype="stepfilled", label="n=25")
plt.hist(means50, normed=True, alpha=0.75, histtype="stepfilled", label="n=50")
plt.hist(means100, normed=True, alpha=0.75, histtype="stepfilled", label="n=100")
plt.hist(means200, normed=True, alpha=0.75, histtype="stepfilled", label="n=200")
plt.xlabel("Mean glucocorticoid concentration")
plt.ylabel("Density")
plt.vlines(np.mean(popn), 0, 7, linestyle='dashed', color='black',label="True Mean")
plt.legend()
pass
Explanation: How does the spread of our estimates of the mean change as sample size increases?
What happens as we increase the size of our samples? Let's draw 100 random samples of size 50, 100, and 200 observations to compare.
End of explanation
sm25 = np.std(means25,ddof=1)
sm50 = np.std(means50,ddof=1)
sm100 = np.std(means100,ddof=1)
sm200 = np.std(means200, ddof=1)
x = [25,50,100,200]
y = [sm25,sm50,sm100,sm200]
plt.scatter(x,y)
plt.xlabel("Sample size")
plt.ylabel("Std Dev of Mean Estimates")
pass
Explanation: Standard Error of the Mean
We see from the graph above that our estimates of the mean cluster more tightly about the true mean as our sample size increases. Let's quantify that by calculating the standard deviation of our mean estimates as a function of sample size.
The standard deviation of the sampling distribution of a statistic of interest is called the "Standard Error" of that statistic. Here, through simulation, we are estimating the "Standard Error of the Mean".
End of explanation
x = [25,50,100,200]
y = [sm25,sm50,sm100,sm200]
theory = [np.std(popn)/np.sqrt(i) for i in range(10,250)]
plt.scatter(x,y, label="Simulation estimates")
plt.plot(range(10,250), theory, color='red', label="Theoretical expectation")
plt.xlabel("Sample size")
plt.ylabel("Std Error of Mean")
plt.legend()
plt.xlim(0,300)
pass
Explanation: You can show mathematically for normally distributed data, that the expected Standard Error of the Mean as a function of sample size is:
$$
\mbox{Standard Error of Mean} = \frac{\sigma}{\sqrt{n}}
$$
where $\sigma$ is the population standard deviation, and $n$ is the sample size.
Let's compare that theoretical expectation to our simulated estimates.
End of explanation
# the label arguments get used when we create a legend
plt.hist(std25, normed=True, alpha=0.75, histtype="stepfilled", label="n=25")
plt.hist(std50, normed=True, alpha=0.75, histtype="stepfilled", label="n=50")
plt.hist(std100, normed=True, alpha=0.75, histtype="stepfilled", label="n=100")
plt.hist(std200, normed=True, alpha=0.75, histtype="stepfilled", label="n=200")
plt.xlabel("Standard Deviation of Glucocorticoid Concentration")
plt.ylabel("Density")
plt.vlines(np.std(popn), 0, 9, linestyle='dashed', color='black',label="True Standard Deviation")
#plt.legend()
pass
Explanation: Standard Errors of the Standard Deviation
Above we explored how the spread in our estimates of the mean changed with sample size. We can similarly explore how our estimates of the standard deviation of the population change as we vary our sample size.
End of explanation
x = [25,50,100,200]
y = [ss25,ss50,ss100,ss200]
plt.scatter(x,y, label="Simulation estimates")
plt.xlabel("Sample size")
plt.ylabel("Std Error of Std Dev")
theory = [np.std(popn)/(np.sqrt(2.0*(i-1))) for i in range(10,250)]
plt.plot(range(10,250), theory, color='red', label="Theoretical expectation")
plt.xlim(0,300)
plt.legend()
pass
Explanation: You can show mathematically for normally distributed data, that the expected Standard Error of the Standard Deviation is approximately
$$
\mbox{Standard Error of Standard Deviation} \approx \frac{\sigma}{\sqrt{2(n-1)}}
$$
where $\sigma$ is the population standard deviation, and $n$ is the sample size.
Let's compare that theoretical expectation to our simulated estimates.
End of explanation |
8,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Sparsity preserving clustering Keras example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Train a tf.keras model for MNIST to be pruned and clustered
Step3: Evaluate the baseline model and save it for later usage
Step4: Prune and fine-tune the model to 50% sparsity
Apply the prune_low_magnitude() API to prune the whole pre-trained model to achieve the model that is to be clustered in the next step. For how best to use the API to achieve the best compression rate while maintaining your target accuracy, refer to the pruning comprehensive guide.
Define the model and apply the sparsity API
Note that the pre-trained model is used.
Step5: Fine-tune the model, check sparsity, and evaluate the accuracy against baseline
Fine-tune the model with pruning for 3 epochs.
Step6: Define helper functions to calculate and print the sparsity of the model.
Step7: Check that the model kernels was correctly pruned. We need to strip the pruning wrapper first. We also create a deep copy of the model to be used in the next step.
Step8: Apply clustering and sparsity preserving clustering and check its effect on model sparsity in both cases
Next, we apply both clustering and sparsity preserving clustering on the pruned model and observe that the latter preserves sparsity on your pruned model. Note that we stripped pruning wrappers from the pruned model with tfmot.sparsity.keras.strip_pruning before applying the clustering API.
Step9: Check sparsity for both models.
Step10: Create 1.6x smaller models from clustering
Define helper function to get zipped model file.
Step11: Create a TFLite model from combining sparsity preserving weight clustering and post-training quantization
Strip clustering wrappers and convert to TFLite.
Step12: See the persistence of accuracy from TF to TFLite
Step13: You evaluate the model, which has been pruned, clustered and quantized, and then see that the accuracy from TensorFlow persists in the TFLite backend. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
import numpy as np
import tempfile
import zipfile
import os
Explanation: Sparsity preserving clustering Keras example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/combine/sparse_clustering_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/combine/sparse_clustering_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
This is an end to end example showing the usage of the sparsity preserving clustering API, part of the TensorFlow Model Optimization Toolkit's collaborative optimization pipeline.
Other pages
For an introduction to the pipeline and other available techniques, see the collaborative optimization overview page.
Contents
In the tutorial, you will:
Train a tf.keras model for the MNIST dataset from scratch.
Fine-tune the model with sparsity and see the accuracy and observe that the model was successfully pruned.
Apply weight clustering to the pruned model and observe the loss of sparsity.
Apply sparsity preserving clustering on the pruned model and observe that the sparsity applied earlier has been preserved.
Generate a TFLite model and check that the accuracy has been preserved in the pruned clustered model.
Compare the sizes of the different models to observe the compression benefits of applying sparsity followed by the collaborative optimization technique of sparsity preserving clustering.
Setup
You can run this Jupyter Notebook in your local virtualenv or colab. For details of setting up dependencies, please refer to the installation guide.
End of explanation
# Load MNIST dataset
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(28, 28)),
tf.keras.layers.Reshape(target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=12, kernel_size=(3, 3),
activation=tf.nn.relu),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
Explanation: Train a tf.keras model for MNIST to be pruned and clustered
End of explanation
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Explanation: Evaluate the baseline model and save it for later usage
End of explanation
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.ConstantSparsity(0.5, begin_step=0, frequency=100)
}
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep()
]
pruned_model = prune_low_magnitude(model, **pruning_params)
# Use smaller learning rate for fine-tuning
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
pruned_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
pruned_model.summary()
Explanation: Prune and fine-tune the model to 50% sparsity
Apply the prune_low_magnitude() API to prune the whole pre-trained model to achieve the model that is to be clustered in the next step. For how best to use the API to achieve the best compression rate while maintaining your target accuracy, refer to the pruning comprehensive guide.
Define the model and apply the sparsity API
Note that the pre-trained model is used.
End of explanation
# Fine-tune model
pruned_model.fit(
train_images,
train_labels,
epochs=3,
validation_split=0.1,
callbacks=callbacks)
Explanation: Fine-tune the model, check sparsity, and evaluate the accuracy against baseline
Fine-tune the model with pruning for 3 epochs.
End of explanation
def print_model_weights_sparsity(model):
for layer in model.layers:
if isinstance(layer, tf.keras.layers.Wrapper):
weights = layer.trainable_weights
else:
weights = layer.weights
for weight in weights:
if "kernel" not in weight.name or "centroid" in weight.name:
continue
weight_size = weight.numpy().size
zero_num = np.count_nonzero(weight == 0)
print(
f"{weight.name}: {zero_num/weight_size:.2%} sparsity ",
f"({zero_num}/{weight_size})",
)
Explanation: Define helper functions to calculate and print the sparsity of the model.
End of explanation
stripped_pruned_model = tfmot.sparsity.keras.strip_pruning(pruned_model)
print_model_weights_sparsity(stripped_pruned_model)
stripped_pruned_model_copy = tf.keras.models.clone_model(stripped_pruned_model)
stripped_pruned_model_copy.set_weights(stripped_pruned_model.get_weights())
Explanation: Check that the model kernels was correctly pruned. We need to strip the pruning wrapper first. We also create a deep copy of the model to be used in the next step.
End of explanation
# Clustering
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 8,
'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS
}
clustered_model = cluster_weights(stripped_pruned_model, **clustering_params)
clustered_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train clustering model:')
clustered_model.fit(train_images, train_labels,epochs=3, validation_split=0.1)
stripped_pruned_model.save("stripped_pruned_model_clustered.h5")
# Sparsity preserving clustering
from tensorflow_model_optimization.python.core.clustering.keras.experimental import (
cluster,
)
cluster_weights = cluster.cluster_weights
clustering_params = {
'number_of_clusters': 8,
'cluster_centroids_init': CentroidInitialization.KMEANS_PLUS_PLUS,
'preserve_sparsity': True
}
sparsity_clustered_model = cluster_weights(stripped_pruned_model_copy, **clustering_params)
sparsity_clustered_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
print('Train sparsity preserving clustering model:')
sparsity_clustered_model.fit(train_images, train_labels,epochs=3, validation_split=0.1)
Explanation: Apply clustering and sparsity preserving clustering and check its effect on model sparsity in both cases
Next, we apply both clustering and sparsity preserving clustering on the pruned model and observe that the latter preserves sparsity on your pruned model. Note that we stripped pruning wrappers from the pruned model with tfmot.sparsity.keras.strip_pruning before applying the clustering API.
End of explanation
print("Clustered Model sparsity:\n")
print_model_weights_sparsity(clustered_model)
print("\nSparsity preserved clustered Model sparsity:\n")
print_model_weights_sparsity(sparsity_clustered_model)
Explanation: Check sparsity for both models.
End of explanation
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in kilobytes.
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)/1000
# Clustered model
clustered_model_file = 'clustered_model.h5'
# Save the model.
clustered_model.save(clustered_model_file)
#Sparsity Preserve Clustered model
sparsity_clustered_model_file = 'sparsity_clustered_model.h5'
# Save the model.
sparsity_clustered_model.save(sparsity_clustered_model_file)
print("Clustered Model size: ", get_gzipped_model_size(clustered_model_file), ' KB')
print("Sparsity preserved clustered Model size: ", get_gzipped_model_size(sparsity_clustered_model_file), ' KB')
Explanation: Create 1.6x smaller models from clustering
Define helper function to get zipped model file.
End of explanation
stripped_sparsity_clustered_model = tfmot.clustering.keras.strip_clustering(sparsity_clustered_model)
converter = tf.lite.TFLiteConverter.from_keras_model(stripped_sparsity_clustered_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
sparsity_clustered_quant_model = converter.convert()
_, pruned_and_clustered_tflite_file = tempfile.mkstemp('.tflite')
with open(pruned_and_clustered_tflite_file, 'wb') as f:
f.write(sparsity_clustered_quant_model)
print("Sparsity preserved clustered Model size: ", get_gzipped_model_size(sparsity_clustered_model_file), ' KB')
print("Sparsity preserved clustered and quantized TFLite model size:",
get_gzipped_model_size(pruned_and_clustered_tflite_file), ' KB')
Explanation: Create a TFLite model from combining sparsity preserving weight clustering and post-training quantization
Strip clustering wrappers and convert to TFLite.
End of explanation
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print(f"Evaluated on {i} results so far.")
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
Explanation: See the persistence of accuracy from TF to TFLite
End of explanation
# Keras model evaluation
stripped_sparsity_clustered_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
_, sparsity_clustered_keras_accuracy = stripped_sparsity_clustered_model.evaluate(
test_images, test_labels, verbose=0)
# TFLite model evaluation
interpreter = tf.lite.Interpreter(pruned_and_clustered_tflite_file)
interpreter.allocate_tensors()
sparsity_clustered_tflite_accuracy = eval_model(interpreter)
print('Pruned, clustered and quantized Keras model accuracy:', sparsity_clustered_keras_accuracy)
print('Pruned, clustered and quantized TFLite model accuracy:', sparsity_clustered_tflite_accuracy)
Explanation: You evaluate the model, which has been pruned, clustered and quantized, and then see that the accuracy from TensorFlow persists in the TFLite backend.
End of explanation |
8,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example of DOV search methods for interpretations (informele stratigrafie)
Use cases explained below
Get 'informele stratigrafie' in a bounding box
Get 'informele stratigrafie' with specific properties
Get 'informele stratigrafie' in a bounding box based on specific properties
Select 'informele stratigrafie' in a municipality and return date
Get 'informele stratigrafie' based on fields not available in the standard output dataframe
Get 'informele stratigrafie' data, returning fields not available in the standard output dataframe
Step1: Get information about the datatype 'Informele stratigrafie'
Step2: A description is provided for the 'Informele stratigrafie' datatype
Step3: The different fields that are available for objects of the 'Informele stratigrafie' datatype can be requested with the get_fields() method
Step4: You can get more information of a field by requesting it from the fields dictionary
Step5: Example use cases
Get 'informele stratigrafie' in a bounding box
Get data for all the 'informele stratigrafie' interpretations that are geographically located within the bounds of the specified box.
The coordinates are in the Belgian Lambert72 (EPSG
Step6: The dataframe contains one 'informele stratigrafie' interpretation where three layers ('laag') were identified. The available data are flattened to represent unique attributes per row of the dataframe.
Using the pkey_interpretatie field one can request the details of this interpretation in a webbrowser
Step7: Get 'informele stratigrafie' with specific properties
Next to querying interpretations based on their geographic location within a bounding box, we can also search for interpretations matching a specific set of properties. For this we can build a query using a combination of the 'InformeleStratigrafie' fields and operators provided by the WFS protocol.
A list of possible operators can be found below
Step8: In this example we build a query using the PropertyIsEqualTo operator to find all interpretations that are within the community (gemeente) of 'Herstappe'
Step9: Once again we can use the pkey_interpretatie as a permanent link to the information of these interpretations
Step10: Get 'informele stratigrafie' in a bounding box based on specific properties
We can combine a query on attributes with a query on geographic location to get the interpretations within a bounding box that have specific properties.
The following example requests the interpretations of boreholes only, within the given bounding box.
(Note that the datatype of the literal parameter should be a string, regardless of the datatype of this field in the output dataframe.)
Step11: We can look at one of the interpretations in a webbrowser using its pkey_interpretatie
Step12: Select 'informele stratigrafie' in a municipality and return date
We can limit the columns in the output dataframe by specifying the return_fields parameter in our search.
In this example we query all the 'informele stratigrafie' interpretations in the city of Ghent and return their date
Step13: Get 'informele stratigrafie' based on fields not available in the standard output dataframe
To keep the output dataframe size acceptable, not all available WFS fields are included in the standard output. However, one can use this information to select interpretations as illustrated below.
For example, make a selection of the interpretations in municipality the of Antwerp, before 1/1/1900
Step14: Get 'informele stratigrafie' data, returning fields not available in the standard output dataframe
As denoted in the previous example, not all available fields are available in the default output frame to keep its size limited. However, you can request any available field by including it in the return_fields parameter of the search
Step15: Visualize results
Using Folium, we can display the results of our search on a map. | Python Code:
%matplotlib inline
import inspect, sys
# check pydov path
import pydov
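# print the location pydov was imported from, as a quick sanity check
# (completes the "check pydov path" step above; pydov.__path__ is standard package metadata)
print(pydov.__path__)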
Explanation: Example of DOV search methods for interpretations (informele stratigrafie)
Use cases explained below
Get 'informele stratigrafie' in a bounding box
Get 'informele stratigrafie' with specific properties
Get 'informele stratigrafie' in a bounding box based on specific properties
Select 'informele stratigrafie' in a municipality and return date
Get 'informele stratigrafie' based on fields not available in the standard output dataframe
Get 'informele stratigrafie' data, returning fields not available in the standard output dataframe
End of explanation
from pydov.search.interpretaties import InformeleStratigrafieSearch
itp = InformeleStratigrafieSearch()
Explanation: Get information about the datatype 'Informele stratigrafie'
End of explanation
itp.get_description()
Explanation: A description is provided for the 'Informele stratigrafie' datatype:
End of explanation
fields = itp.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
Explanation: The different fields that are available for objects of the 'Informele stratigrafie' datatype can be requested with the get_fields() method:
End of explanation
fields['Datum']
Explanation: You can get more information of a field by requesting it from the fields dictionary:
* name: name of the field
* definition: definition of this field
* cost: currently this is either 1 or 10, depending on the datasource of the field. It is an indication of the expected time it will take to retrieve this field in the output dataframe.
* notnull: whether the field is mandatory or not
* type: datatype of the values of this field
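A quick way to tabulate this metadata for all fields, using only the keys described above, is a small loop (illustrative sketch):
for name, info in fields.items():
    print(name, info['type'], info['cost'], info['notnull'])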
End of explanation
from pydov.util.location import Within, Box
df = itp.search(location=Within(Box(153145, 206930, 153150, 206935)))
df.head()
Explanation: Example use cases
Get 'informele stratigrafie' in a bounding box
Get data for all the 'informele stratigrafie' interpretations that are geographically located within the bounds of the specified box.
The coordinates are in the Belgian Lambert72 (EPSG:31370) coordinate system and are given in the order of lower left x, lower left y, upper right x, upper right y.
End of explanation
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
Explanation: The dataframe contains one 'informele stratigrafie' interpretation where three layers ('laag') were identified. The available data are flattened to represent unique attributes per row of the dataframe.
Using the pkey_interpretatie field one can request the details of this interpretation in a webbrowser:
End of explanation
[i for i,j in inspect.getmembers(sys.modules['owslib.fes'], inspect.isclass) if 'Property' in i]
Explanation: Get 'informele stratigrafie' with specific properties
Next to querying interpretations based on their geographic location within a bounding box, we can also search for interpretations matching a specific set of properties. For this we can build a query using a combination of the 'InformeleStratigrafie' fields and operators provided by the WFS protocol.
A list of possible operators can be found below:
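Any of these operators follows the same calling convention as PropertyIsEqualTo; purely as an illustrative sketch, selecting interpretations dated on or after 1 January 2000 could look like this (the 'Datum' field and date format mirror the examples further on):
from owslib.fes import PropertyIsGreaterThanOrEqualTo
query = PropertyIsGreaterThanOrEqualTo(propertyname='Datum', literal='2000-01-01')
df_recent = itp.search(query=query, return_fields=('pkey_interpretatie', 'Datum'))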
End of explanation
from owslib.fes import PropertyIsEqualTo
query = PropertyIsEqualTo(propertyname='gemeente',
literal='Herstappe')
df = itp.search(query=query)
df.head()
Explanation: In this example we build a query using the PropertyIsEqualTo operator to find all interpretations that are within the municipality (gemeente) of 'Herstappe':
End of explanation
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
Explanation: Once again we can use the pkey_interpretatie as a permanent link to the information of these interpretations:
End of explanation
from owslib.fes import PropertyIsEqualTo
query = PropertyIsEqualTo(
propertyname='Type_proef',
literal='Boring')
df = itp.search(
location=Within(Box(205000, 205000, 206000, 206000)),
query=query
)
df.head()
Explanation: Get 'informele stratigrafie' in a bounding box based on specific properties
We can combine a query on attributes with a query on geographic location to get the interpretations within a bounding box that have specific properties.
The following example requests the interpretations of boreholes only, within the given bounding box.
(Note that the datatype of the literal parameter should be a string, regardless of the datatype of this field in the output dataframe.)
End of explanation
for pkey_interpretatie in set(df.pkey_interpretatie):
print(pkey_interpretatie)
Explanation: We can look at one of the interpretations in a webbrowser using its pkey_interpretatie:
End of explanation
query = PropertyIsEqualTo(propertyname='gemeente',
literal='Gent')
df = itp.search(query=query,
return_fields=('Datum',))
df.head()
df.describe()
Explanation: Select 'informele stratigrafie' in a municipality and return date
We can limit the columns in the output dataframe by specifying the return_fields parameter in our search.
In this example we query all the 'informele stratigrafie' interpretations in the city of Ghent and return their date:
End of explanation
from owslib.fes import And, PropertyIsEqualTo, PropertyIsLessThan
query = And([PropertyIsEqualTo(propertyname='gemeente',
literal='Antwerpen'),
PropertyIsLessThan(propertyname='Datum',
literal='1900-01-01')]
)
df = itp.search(query=query,
return_fields=('pkey_interpretatie', 'Datum'))
df.head()
Explanation: Get 'informele stratigrafie' based on fields not available in the standard output dataframe
To keep the output dataframe size acceptable, not all available WFS fields are included in the standard output. However, one can use this information to select interpretations as illustrated below.
For example, make a selection of the interpretations in the municipality of Antwerp, before 1/1/1900:
End of explanation
query = PropertyIsEqualTo(
propertyname='gemeente',
literal='Herstappe')
df = itp.search(query=query,
return_fields=('pkey_interpretatie', 'pkey_boring', 'pkey_sondering',
'x', 'y', 'Z_mTAW', 'gemeente', 'Auteurs', 'Proefnummer'))
df.head()
Explanation: Get 'informele stratigrafie' data, returning fields not available in the standard output dataframe
As noted in the previous example, not all available fields are included in the default output dataframe to keep its size limited. However, you can request any available field by including it in the return_fields parameter of the search:
End of explanation
# import the necessary modules (not included in the requirements of pydov!)
import folium
from folium.plugins import MarkerCluster
from pyproj import Transformer
# convert the coordinates to lat/lon for folium
def convert_latlon(x1, y1):
transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True)
x2,y2 = transformer.transform(x1, y1)
return x2, y2
df['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y']))
# convert to list
loclist = df[['lat', 'lon']].values.tolist()
# initialize the Folium map on the centre of the selected locations, play with the zoom until ok
fmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=12)
marker_cluster = MarkerCluster().add_to(fmap)
for loc in range(0, len(loclist)):
folium.Marker(loclist[loc], popup=df['Proefnummer'][loc]).add_to(marker_cluster)
fmap
Explanation: Visualize results
Using Folium, we can display the results of our search on a map.
End of explanation |
8,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Goal
Step1: T2 relaxation
Step2: Fit CEST for each slices and mouse | Python Code:
# Import Python Modules
import numpy as np
#import seaborn as sn
import matplotlib.pyplot as plt
%matplotlib inline
from pylab import *
import pandas as pd
# Import LOCAL functions written by me
from mylocal_functions import *
Explanation: Goal: Differentiate Infections, sterile inflammation, and healthy tissue using MRI
The following methods were used in this study:
1. T2 relaxation of the tissue without a contrast agent
2. Dynamic contrast-enhanced (DCE) MRI using Maltose as a T2-ex contrast agent
3. Chemical Exchange Saturation Transfer (CEST) MRI without a contrast agent
Author: Julio Cárdenas-Rodríguez, Ph.D.
email: [email protected]
Description of the data
A total of XX mice were used in this study. Each mouse was infected as follows:
- Right thigh: with approximately 100 uL of a solution of XX CFU/mL of E. coli.
- Left thigh: same dose but using a solution that contained heat-inactivated E. coli.
Both thighs can be seen in each image, and a total of five imaging slices were collected around the center of infection. The average signal for the following regions of interest (ROIs) was collected for all slices:
Infected Site
Apparently Healthy Tissue on the right thigh
Sterile inflammation on the left thigh
Apparently Healthy Tissue on the left thigh
End of explanation
# Make list of all T2.txt files
T2_list = get_ipython().getoutput('ls ../Study_03_CBA/*T2.txt')
# Allocate variables needed for analysis
T2DF=pd.DataFrame()
TR=np.linspace(.012,.012*12,12)
# Fit T2 for all ROIs, slices and mice. construct dataframe
for names in T2_list:
#Convert txt file to array
YDataMatrix=txt_2_array(names)
#Estimate T2
T2time=fitT2(TR,YDataMatrix)
#convert to data frame
df_T2=pd.DataFrame(T2time.T,columns=["Infected","Healthy_Right","Sterile_Inflammation","Healthy_Left"])
#df_T2=pd.DataFrame(T2time.T,columns=["ROI-1","ROI-2","ROI-3","ROI-4"])
df_info=name_2_df(names)
df_final=pd.concat([df_T2,df_info], axis=1)
T2DF=T2DF.append(df_final,ignore_index=True)
# Plot distribution of estimated T2 for each slice
#T2DF[T2DF.Slice==1].iloc[:,:4].plot.density(); title("Slice 01"); xlim((0.025,.15))
#T2DF[T2DF.Slice==2].iloc[:,:4].plot.density(); title("Slice 02"); xlim((0.025,.15))
#T2DF[T2DF.Slice==3].iloc[:,:4].plot.density(); title("Slice 03"); xlim((0.025,.15))
#T2DF[T2DF.Slice==4].iloc[:,:4].plot.density(); title("Slice 04"); xlim((0.025,.15))
T2DF[T2DF.Slice==5].iloc[:,:4].plot.density(); title("Slice 05"); xlim((0.025,.15))
Explanation: T2 relaxation
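The fitT2 helper comes from the local mylocal_functions module and is not shown here; as a minimal, illustrative sketch (assuming a mono-exponential decay S(TE) = S0*exp(-TE/T2), which is the standard model for this kind of fit), a single-ROI fit could be done directly with scipy on synthetic data:
from scipy.optimize import curve_fit
def monoexp(TE, S0, T2):
    return S0 * np.exp(-TE / T2)
TE = np.linspace(.012, .012*12, 12)        # the same 12 time points stored in TR above
signal = 1.0 * np.exp(-TE / 0.06)          # synthetic single-ROI decay, T2 = 60 ms
popt, _ = curve_fit(monoexp, TE, signal, p0=(1.0, 0.05))
T2_estimate = popt[1]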
End of explanation
# list of files
CEST_list=get_ipython().getoutput('ls ../Study_03_CBA/*CEST.txt')
CEST_DF=pd.DataFrame()
Z=np.zeros((4,110))
def normalize_data(DataMatrix):
rows,cols = DataMatrix.shape
newData = np.zeros_like(DataMatrix)
for row in range(rows):
newData[row,:]=DataMatrix[row,:]/DataMatrix[row,8]
return newData
for names in CEST_list:
#Convert txt file to array
D=txt_2_array(names);
Zn=normalize_data(D.T)
Z=np.concatenate((Z,Zn))
Z=Z[4::,9::]
# define offsets in ppm
a1=np.linspace(-55,-50,9)
ppm=np.linspace(-8,8,101)
full_ppm = np.concatenate((a1, ppm))
# Fit data
from scipy.optimize import curve_fit
import seaborn as sn
from mylocal_functions import *
def Lorentzian(sat_offset,Amp,Width,Center):
Width = Width**2; Width=Width/4
xdata = (sat_offset-Center)**2
return (Amp*Width) / (Width +xdata )
def Lorentzian2(sat_offset,a1,w1,c1,a2,w2,c2):
return Lorentzian(sat_offset,a1,w1,c1) + Lorentzian(sat_offset,a2,w2,c2)
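# NOTE (illustrative sketch): the fit below calls `Lscale`, which is imported from the local
# `mylocal_functions` module via the star import above and is not defined in this notebook.
# Judging from its 7 parameters (two Lorentzian pools plus a `scale` term), one plausible
# form is a two-Lorentzian model with a constant baseline; this stand-in is NOT used by the fit.
def Lscale_sketch(sat_offset, a1, w1, c1, a2, w2, c2, scale):
    return Lorentzian2(sat_offset, a1, w1, c1, a2, w2, c2) + scale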
#
Signal=1-Z[12,:]
# fix xdata
xdata=ppm-ppm[Signal.argmax()]
# allocate fitting based on this
A10, W10, C10 = 0.90, 1, 0
A20, W20, C20 = .1, 1, -4
A1L, W1L, C1L = 0.5, .1, -.1
A2L, W2L, C2L = 0, .1, -6
A1U, W1U, C1U = 1.0, 5, +.1
A2U, W2U, C2U = 1.0, 5, -1.0
scale0, scaleL, scaleU = 0, -1, +1
initial_guess = [A10, W10, C10, A20, W20, C20, scale0]
lb = [A1L, W1L, C1L, A2L, W2L, C2L, scaleL]
ub = [A1U, W1U, C1U, A2U, W2U, C2U, scaleU]
p, cov = curve_fit(Lscale, xdata, Signal,p0=initial_guess,bounds=(lb, ub))
print(p)  # best-fit parameters returned by curve_fit
Yhat=Lscale(xdata,p[0],p[1],p[2],p[3],p[4],p[5],p[6]);
plt.figure(figsize=(10,5))
plt.plot(xdata,Signal,'o',label='Signal');
plt.plot(xdata,Yhat,'-',label='Signal');
import mylocal_functions  # imported as a module so its helpers can be inspected below
mylocal_functions.fit_L2_scale?
plt.plot(ppm,Lscale(ppm,A10, W10, C10, A20, W20, C20, scale0));
initial_guess = [A10, W10, C10, A20, W20, C20, scale0];
lb = [A1L, W1L, C1L, A2L, W2L, C2L, scaleL];
ub = [A1U, W1U, C1U, A2U, W2U, C2U, scaleU];
A=[[initial_guess],[initial_guess]]
array(A).shape
ppm[Signal.argmax()]
L= Lorentzian(ppm,1,1,1); plt.plot(L)
plt.plot(ppm,Z.T,'.'); plt.xlim(-10,10)
len(CEST_list)
Z=np.zeros?
Z=np.zeros
plt.plot(ppm,Z,'--'); plt.xlim(-10,10)
#Estimate T2
T2time=fitT2(TR,YDataMatrix)
#convert to data frame
df_T2=pd.DataFrame(T2time.T,columns=["Infected","Healthy_Right","Sterile_Inflammation","Healthy_Left"])
#df_T2=pd.DataFrame(T2time.T,columns=["ROI-1","ROI-2","ROI-3","ROI-4"])
df_info=name_2_df(names)
df_final=pd.concat([df_T2,df_info], axis=1)
T2DF=T2DF.append(df_final,ignore_index=True)
df_info=name_2_df(names)
df_info
# Make list of all T2.txt files
CEST_list=get_ipython().getoutput('ls ../Study_03_CBA/*T2.txt')
for names in CEST_list:
Ydata=txt_2_array(names)
print(Ydata)
df_info=name_2_df(names)
def scale(y,index):
return y/y[index]
for names in CEST_list:
print(names)
Ydata=txt_2_array(names)
rows, cols = Ydata.shape
for i in range(cols):
ydata=Ydata[:,i]; ydata=ydata/ydata[9]; ydata=ydata[9:]
integral=np.sum(ydata)
# Fit T2 for all ROIs, slices and mice. construct dataframe
for names in T2_list:
#Convert txt file to array
YDataMatrix=txt_2_array(names)
#Estimate T2
T2time=fitT2(TR,YDataMatrix)
#convert to data frame
df_T2=pd.DataFrame(T2time.T,columns=["Infected","Healthy_Right","Sterile_Inflammation","Healthy_Left"])
#df_T2=pd.DataFrame(T2time.T,columns=["ROI-1","ROI-2","ROI-3","ROI-4"])
df_info=name_2_df(names)
df_final=pd.concat([df_T2,df_info], axis=1)
T2DF=T2DF.append(df_final,ignore_index=True)
Explanation: Fit CEST for each slice and mouse
End of explanation |
8,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anscombe's quartet
Step1: Load the Anscombe's quartet dataset
Step2: And df is... a pandas dataframe
Step3: that we can print, plot, ...
Step4: Print just first dataset
Step5: Basic statistical parameters
Let's compare the basic statistical parameters of each dataset
Step6: Let's compare the correlation coefficient for each dataset
Step7: Plot
Plot datasets
Step8: Linear regression
Show the results of a linear regression within each dataset
Step9: It's the same line for all datasets
Let's plot with its 95% confidence interval region.
Step10: Key message
Visualize your data beforehand
Nonlinear regression? outliers?
One can fit a polynomial regression model to explore simple kinds of nonlinear trends in the dataset
Step11: In the presence of outliers, it can be useful to fit a robust regression, which uses a different loss function to downweight relatively large residuals | Python Code:
#!conda install -y numpy pandas matplotlib seaborn statsmodels
%matplotlib inline
import seaborn as sns
import pandas as pd
sns.set(style="ticks")
Explanation: Anscombe's quartet
End of explanation
df = sns.load_dataset("anscombe")
Explanation: Load the Anscombe's quartet dataset
End of explanation
type(df)
Explanation: And df is... a pandas dataframe
End of explanation
df.head()
Explanation: that we can print, plot, ...
End of explanation
df[df.dataset == 'I']
Explanation: Print just first dataset
End of explanation
groups = ['I', 'II', 'III', 'IV']
for group in groups:
print(group)
print(df[df.dataset == group].describe())
print()
Explanation: Basic statistical parameters
Let's compare the basic statistical parameters of each dataset
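The same comparison can also be written as a single pandas expression:
df.groupby('dataset')[['x', 'y']].agg(['mean', 'std'])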
End of explanation
for g in groups:
print(df[df.dataset == g]['x'].corr(df[df.dataset == g]['y']))
Explanation: Let's compare the correlation coefficient for each dataset
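Equivalently, pandas can return the per-dataset correlation matrices in one call:
df.groupby('dataset')[['x', 'y']].corr()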
End of explanation
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4,
scatter_kws={"s": 50, "alpha": 1}, fit_reg=False)
Explanation: Plot
Plot datasets
End of explanation
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=None, palette="muted", size=4)
Explanation: Linear regression
Show the results of a linear regression within each dataset
End of explanation
sns.lmplot(x="x", y="y", col="dataset", hue="dataset", data=df,
col_wrap=2, ci=95, palette="muted", size=4)
Explanation: It's the same line for all datasets
Let's plot with its 95% confidence interval region.
End of explanation
sns.lmplot(x="x", y="y", data=df[df.dataset == 'II'],
order=2, ci=95, scatter_kws={"s": 80});
Explanation: Key message
Visualize your data beforehand
Nonlinear regression? outliers?
One can fit a polynomial regression model to explore simple kinds of nonlinear trends in the dataset
End of explanation
sns.lmplot(x="x", y="y", data=df[df.dataset == 'III'],
robust=True, ci=None, scatter_kws={"s": 80});
Explanation: In the presence of outliers, it can be useful to fit a robust regression, which uses a different loss function to downweight relatively large residuals:
End of explanation |
8,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'bnu-esm-1-1', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: BNU
Source ID: BNU-ESM-1-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:41
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
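Purely to illustrate the call pattern used throughout this notebook (the text below is a placeholder, not a description of this model):
# DOC.set_value("<one-paragraph overview of the land surface component>")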
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
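As an illustration only, a multi-valued (cardinality 0.N) property is presumably filled by repeating the call once per selected value, each value taken verbatim from the valid choices listed above:
# DOC.set_value("water")
# DOC.set_value("energy")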
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
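Illustrative only — an INTEGER property takes a bare number rather than a string:
# DOC.set_value(1800)   # e.g. a 30-minute land time step; placeholder value, not documented for this model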
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
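Illustrative only — BOOLEAN properties take Python True/False:
# DOC.set_value(True)   # placeholder, not the documented value for this model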
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General describe how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
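# NOTE (illustrative only; it also assumes that repeated DOC.set_value calls
# accumulate values for a 1.N property, which should be checked against the ES-DOC
# conventions):
#     DOC.set_value("light")
#     DOC.set_value("water availability")
# Each value must come from the valid-choice list shown in the cell above.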
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
8,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
decision trees
Step1: Decision trees are directed graphs beginning with one node and branching to many. They are a hierarchical data structure that represent data by implementing a divide-and-conquer strategy. There are two main types of decision tree
Step2: Now, predict the class of some example collection of features.
Step3: The probability of each class can be predicted too, which is the fraction of training samples of the same class in a leaf.
Step4: We can look at the tree in Graphviz format.
Step5: more detailed example of decision tree classifier using the iris dataset
Get the iris dataset.
Step6: The top bit of the dataset looks like this
Step7: Make a decision tree and then fit it using the features ("data") and class labels ("target") of the iris dataset.
Step8: Ok, let's look at the tree, but we'll fancy it up this time with colors and shit.
Step9: Right, so now let's make some predictions.
Step10: How accurate is it? Well, here is what it should have got
Step11: Boom, it's awesome. Well done, decision tree.
Step12: Aait, let's create and fit a decision tree with a depth of like 2 nodes.
Step13: Ok, let's make some predictions and see how it does.
Step14: Damn, that shit is woke!
Step15: Ok, now let's try a tree with greater depth, like 5 nodes.
Step16: Yeah ok, naw.
It turns out that learning a tree that classifies or models data perfectly may not lead to a tree with good generalization performance. There could be noise in the data (as there was in this example) or the algorithm might be making decisions based on low statistics (very little data). | Python Code:
import graphviz
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import sklearn.datasets
import sklearn.tree
plt.rcParams["figure.figsize"] = [17, 10]
Explanation: decision trees
End of explanation
# features
X = [
[0, 0],
[1, 1]
]
# targets
Y = [
0,
1
]
classifier = sklearn.tree.DecisionTreeClassifier()
classifier = classifier.fit(X, Y)
Explanation: Decision trees are directed graphs beginning with one node and branching to many. They are a hierarchical data structure that represent data by implementing a divide-and-conquer strategy. There are two main types of decision tree: classification and regression, and both are used to make predictions based on data. Classification trees output a discrete category/class/target while regression trees output real values. Regression tree algorithms were introduced in 1963 (reference).
Moving through a decision tree, each node splits up the input data. Each node is a sort of cluster of cases that is to be split by further branches in the tree. Often trees are binary, wherein each node is split into two subsamples, but they don't have to be binary.
So, imagine there are some colored shapes that can be classified as A, B or C.
A classification decision tree for the colored shapes could look like this:
Decision trees can be seen as a compact way to represent a lot of data. A usual goal in defining a decision tree is to search for one that is as small as possible.
super simple example of decision tree classifier
scikit-learn provides a DecisionTreeClassifier. It takes as input two arrays, an array of data features and an array of class labels for each collection of features.
Create some data. There are features and there are classifications for each collection of features.
End of explanation
classifier.predict([[2, 2]])
Explanation: Now, predict the class of some example collection of features.
End of explanation
classifier.predict_proba([[2, 2]])
Explanation: The probability of each class can be predicted too, which is the fraction of training samples of the same class in a leaf.
End of explanation
graph = graphviz.Source(sklearn.tree.export_graphviz(classifier, out_file=None))
graph;
Explanation: We can look at the tree in Graphviz format.
End of explanation
iris = sklearn.datasets.load_iris()
Explanation: more detailed example of decision tree classifier using the iris dataset
Get the iris dataset.
End of explanation
pd.DataFrame(
data = np.c_[iris["data"], iris["target"]],
columns = iris["feature_names"] + ["target"]
).head()
Explanation: The top bit of the dataset looks like this:
End of explanation
classifier = sklearn.tree.DecisionTreeClassifier()
classifier = classifier.fit(iris.data, iris.target)
Explanation: Make a decision tree and then fit it using the features ("data") and class labels ("target") of the iris dataset.
End of explanation
graph = graphviz.Source(
sklearn.tree.export_graphviz(
classifier,
out_file = None,
feature_names = iris.feature_names,
class_names = iris.target_names,
filled = True,
rounded = False,
special_characters = True,
proportion = True,
)
)
graph.render('iris_DT')
graph
sklearn.tree.export_graphviz(
classifier,
out_file = "tree_1.svg",
feature_names = iris.feature_names,
class_names = iris.target_names,
filled = True,
rounded = False,
special_characters = True,
proportion = True,
)
Explanation: Ok, let's look at the tree, but we'll fancy it up this time with colors and shit.
End of explanation
classifier.predict(iris.data)
Explanation: Right, so now let's make some predictions.
End of explanation
iris.target
Explanation: How accurate is it? Well, here is what it should have got:
End of explanation
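# A quick numeric check (not in the original notebook): instead of eyeballing the two
# arrays above, compare the tree's predictions against the true labels directly.
(classifier.predict(iris.data) == iris.target).mean()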
rng = np.random.RandomState(1)
X = np.sort(5*rng.rand(80, 1), axis=0)
y = np.sin(X).ravel()
y[::5] += 3*(0.5-rng.rand(16))
plt.scatter(X, y, s=30, edgecolor="black", c="red", label="data")
plt.title("a fuck off noisy sine curve")
plt.xlabel("data")
plt.ylabel("target")
plt.show();
Explanation: Boom, it's awesome. Well done, decision tree. :)
decision tree regressor
Now, let's take a glance at a decision tree for regression, or modelling something. Here, let's model a slightly noisy sine curve.
End of explanation
regressor = sklearn.tree.DecisionTreeRegressor(max_depth=2)
regressor.fit(X, y);
Explanation: Aait, let's create and fit a decision tree with a depth of like 2 nodes.
End of explanation
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_prediction = regressor.predict(X_test)
plt.scatter(X, y, s=30, edgecolor="black", c = "red", label="data")
plt.plot(X_test, y_prediction, color="cornflowerblue", label="max_depth = 2", linewidth=2)
plt.title("just fittin' a noisy sine curve, it's fine")
plt.xlabel("data")
plt.ylabel("target")
plt.legend()
plt.show();
Explanation: Ok, let's make some predictions and see how it does.
End of explanation
graph = graphviz.Source(
sklearn.tree.export_graphviz(
regressor,
out_file = None,
filled = True,
rounded = False
)
)
graph;
Explanation: Damn, that shit is woke!
End of explanation
regressor = sklearn.tree.DecisionTreeRegressor(max_depth=5)
regressor.fit(X, y);
X_test = np.arange(0.0, 5.0, 0.01)[:, np.newaxis]
y_prediction = regressor.predict(X_test)
plt.scatter(X, y, s=30, edgecolor="black", c="red", label="data")
plt.plot(X_test, y_prediction, color="cornflowerblue", label="max_depth = 5", linewidth=2)
plt.title("just fittin' a noisy sine curve, but what the Bjork?")
plt.xlabel("data")
plt.ylabel("target")
plt.legend()
plt.show();
Explanation: Ok, now let's try a tree with greater depth, like 5 nodes.
End of explanation
graph = graphviz.Source(
sklearn.tree.export_graphviz(
regressor,
out_file = None,
filled = True,
rounded = False,
special_characters = True,
proportion = True,
)
)
graph.render('iris_DT')
graph
Explanation: Yeah ok, naw.
It turns out that learning a tree that classifies or models data perfectly may not lead to a tree with good generalization performance. There could be noise in the data (as there was in this example) or the algorithm might be making decisions based on low statistics (very little data).
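A common guard against this (sketched below; it is not part of the original notebook and assumes a scikit-learn version that ships sklearn.model_selection) is to cap the tree depth and let cross-validation choose it:
import sklearn.model_selection
for depth in [2, 3, 5, 8]:
    candidate = sklearn.tree.DecisionTreeRegressor(max_depth=depth)
    scores = sklearn.model_selection.cross_val_score(candidate, X, y, cv=5)
    print(depth, scores.mean())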
End of explanation |
8,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Network Tour of Data Science
Michaël Defferrard, PhD student, Pierre Vandergheynst, Full Professor, EPFL LTS2.
Assignment 4
Step1: Design a technique to construct smooth scalar signals $x \in \mathbb{R}^N$ over the graph $\mathcal{G}$.
Hint
Step2: 2 Graph Signal Inpainting
Let $y$ be a signal obtained by observing $n$ out of the $N$ entries of a smooth signal $x$. Design and implement a procedure to infer the missing values and test its average accuracy $\| x^\ast - x \|_2^2$ as a function of $n/N$ on a test set of signals created using the technique developed above.
First complete the equations below, then do the implementation.
Observation | Python Code:
import numpy as np
import scipy.io
import matplotlib.pyplot as plt
%matplotlib inline
import os.path
X = scipy.io.mmread(os.path.join('datasets', 'graph_inpainting', 'embedding.mtx'))
W = scipy.io.mmread(os.path.join('datasets', 'graph_inpainting', 'graph.mtx'))
N = W.shape[0]
print('N = |V| = {}, k|V| < |E| = {}'.format(N, W.nnz))
plt.spy(W, markersize=2, color='black');
Explanation: A Network Tour of Data Science
Michaël Defferrard, PhD student, Pierre Vandergheynst, Full Professor, EPFL LTS2.
Assignment 4: Transductive Learning using Graphs
Transduction is reasoning from observed, specific (training) cases to specific (test) cases. For this assignment, the task is to infer missing values in some dataset, while the training and testing cases are available to construct a graph. The exercise consists of two parts: (1) construct some artificial data and (2) retrieve the missing values and measure performance.
1 Smooth graph signal
Let $\mathcal{G} = (\mathcal{V}, W)$ be a graph of vertex set $\mathcal{V}$ and weighted adjacency matrix $W$.
End of explanation
# Fourier basis.
D = W.sum(axis=0)
D = scipy.sparse.diags(D.A.squeeze(), 0)
L = D - W
lamb, U = np.linalg.eigh(L.toarray())
# Low-pass filters.
def f1(u, a=4):
y = np.zeros(u.shape)
y[:a] = 1
return y
def f2(u, m=4):
return np.maximum(1 - m * u / u[-1], 0)
def f3(u, a=0.8):
return np.exp(-u / a)
# Random signal.
x = np.random.uniform(-1, 1, size=W.shape[0])
xhat = U.T.dot(x)
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].plot(lamb, xhat, '.-')
ax[0].set_title('Random signal spectrum')
ax[1].scatter(X[:, 0], X[:, 1], c=x, s=40, linewidths=0)
ax[1].set_title('Random signal')
# Smooth signal through filtering.
xhat *= f3(lamb)
x = U.dot(xhat)
M = x.T.dot(L.dot(x))
print('M = x^T L x = {}'.format(M))
fig, ax = plt.subplots(1, 2, figsize=(15, 5))
ax[0].set_title('Smooth signal spectrum')
ax[0].plot(lamb, abs(xhat), '.-', label='spectrum |U^T x|')
#ax[0].plot(lamb, np.sqrt(M/lamb))
ax[0].plot(lamb[1:], np.sqrt(M/lamb[1:]), label='Decay associated with smoothness M')
ax[0].legend()
ax[1].scatter(X[:, 0], X[:, 1], c=x, s=40, linewidths=0)
ax[1].set_title('Smooth signal');
Explanation: Design a technique to construct smooth scalar signals $x \in \mathbb{R}^N$ over the graph $\mathcal{G}$.
Hint:
* This part is related to our last exercise.
* There are multiple ways to do this; another is to filter random signals.
End of explanation
tau = 1e5 # Balance between fidelity and smoothness prior.
num = 100 # Number of signals and masks to generate.
# Percentage of values to keep.
probs = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0, 0.1, 0.2, 0.3]
errors = []
for p in probs:
mse = 0
for _ in range(num):
# Smooth signal.
x = np.random.uniform(-1, 1, size=W.shape[0])
xhat = U.T.dot(x) * f3(lamb)
x = U.dot(xhat)
# Observation.
A = np.diag(np.random.uniform(size=N) < p)
y = A.dot(x)
# Reconstruction.
x_sol = np.linalg.solve(tau * A + L, tau * y)
mse += np.linalg.norm(x - x_sol)**2
errors.append(mse / num)
# Show one example.
fig, ax = plt.subplots(1, 3, figsize=(15, 5))
param = dict(s=40, vmin=min(x), vmax=max(x), linewidths=0)
ax[0].scatter(X[:, 0], X[:, 1], c=x, **param)
ax[1].scatter(X[:, 0], X[:, 1], c=y, **param)
ax[2].scatter(X[:, 0], X[:, 1], c=x_sol, **param)
ax[0].set_title('Ground truth')
ax[1].set_title('Observed signal (missing values set to 0)')
ax[2].set_title('Inpainted signal')
print('|x-y|_2^2 = {:5f}'.format(np.linalg.norm(x - y)**2))
print('|x-x*|_2^2 = {:5f}'.format(np.linalg.norm(x - x_sol)**2))
# Show reconstruction error w.r.t. percentage of observed values.
plt.figure(figsize=(15, 5))
plt.semilogy(probs, errors, '.', markersize=10)
plt.xlabel('Percentage of observed values n/N')
plt.ylabel('Reconstruction error |x* - x|_2^2');
Explanation: 2 Graph Signal Inpainting
Let $y$ be a signal obtained by observing $n$ out of the $N$ entries of a smooth signal $x$. Design and implement a procedure to infer the missing values and test its average accuracy $\| x^\ast - x \|_2^2$ as a function of $n/N$ on a test set of signals created using the technique developed above.
First complete the equations below, then do the implementation.
Observation:
$$y = Ax$$
where $A$ is a diagonal masking matrix with $\operatorname{diag}(A) \in \{0,1\}^N$.
Optimization problem:
$$x^\ast = \operatorname{arg\,min}_x \frac{\tau}{2} \|Ax - y\|_2^2 + \frac12 x^T L x$$
where $\|Ax - y\|_2^2$ is the fidelity term and
$x^T L x = \sum_{u \sim v} w(u,v) (x(u) - x(v))^2$ is the smoothness prior.
Optimal solution (setting the derivative to zero and using $A^\top A = A$ and $A y = y$, which hold for the binary diagonal mask):
$$\tau Ax^\ast - \tau y + L x^\ast = 0
\hspace{0.3cm} \rightarrow \hspace{0.3cm}
x^\ast = (\tau A + L)^{-1} \tau y$$
Hint: in the end the solution should be a linear system of equations, to be solved with np.linalg.solve().
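For larger graphs the same system can also be solved sparsely; the sketch below is an addition to the assignment (it assumes the mask is rebuilt as a sparse diagonal and reuses N, p, tau, L and x from the cell above):
import scipy.sparse
import scipy.sparse.linalg
A_sparse = scipy.sparse.diags((np.random.uniform(size=N) < p).astype(float))
x_sol_sparse = scipy.sparse.linalg.spsolve(scipy.sparse.csc_matrix(tau * A_sparse + L), tau * A_sparse.dot(x))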
End of explanation |
8,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GeosPy
Step1: Basic Example
The standard use case of GeosPy is trying to find a user's location given various data.
To use GeosPy, the first thing we need to do is see what models are available.
Step2: Now that we know the available models, we'll make the choice to use Backstrom model (see Jurgens et al. (2015) for the pros and cons of various models).
We can then select our model by calling the set_model method.
Step3: Now that we have our GeosPy class instantiated with an appropriate model, we need to gather our data (this is where you come in!) and put it in the appropriate format, a dictionary of objects mapping to location (in latitude and longitude).
In this example we'll use people mapped to their home locations.
Step4: Now this user_location_dict only gives us so much information. To make a meaningful inference, we need to provide the model with another piece of information, a dictionary of objects mapping to a list of other objects, where the objects are contained in the first dictionary.
For this example, we happen to have a dictionary user_friend_dict that maps OffTheGrid to a list of his friends.
Step5: Finally with our instantiated and set class, and our two dictionaries we can run the model! | Python Code:
from GeosPy import Geos
geosPy = Geos()
Explanation: GeosPy: Geolocation Inference Made Easy
GeosPy is a Python 3 library written to make geolocation inference easy. Geolocation inference is the identification of the real-world geographic location of an object on Earth based on available data. GeosPy currently only supports network-based inference methods.
End of explanation
print(geosPy.models)
Explanation: Basic Example
The standard use case of GeosPy is trying to find a user's location given various data.
To use GeosPy, the first thing we need to do is see what models are available.
End of explanation
backstrom = geosPy.set_model('backstrom')
Explanation: Now that we know the available models, we'll make the choice to use Backstrom model (see Jurgens et al. (2015) for the pros and cons of various models).
We can then select our model by calling the set_model method.
End of explanation
user_location_dict = {'Tyler': (44, -71.5), 'Tim': (45.5, -73.5), 'Gwyn': (44.5, -89.5),'Conor':(55.0, -106.0),
'Sam': (25.7, -80.2), 'OffTheGrid': None}
Explanation: Now that we have our GeosPy class instantiated with an appropriate model, we need to gather our data (this is where you come in!) and put it in the appropriate format, a dictionary of objects mapping to location (in latitude and longitude).
In this example we'll use people mapped to their home locations.
End of explanation
user_friend_dict = {'OffTheGrid': ['Tyler', 'Sam', 'Gwyn', 'Conor', 'Tim']}
Explanation: Now this user_location_dict only gives us so much information. To make a meaningful inference, we need to provide the model with another piece of information, a dictionary of objects mapping to a list of other objects, where the objects are contained in the first dictionary.
For this example, we happen to have a dictionary user_friend_dict that maps OffTheGrid to a list of his friends.
End of explanation
print(geosPy.locate(user_location_dict, user_friend_dict))
Explanation: Finally with our instantiated and set class, and our two dictionaries we can run the model!
End of explanation |
8,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Validation and Model Selection
Credits
Step1: Validating Models
One of the most important pieces of machine learning is model validation
Step2: Let's fit a K-neighbors classifier
Step3: Now we'll use this classifier to predict labels for the data
Step4: Finally, we can check how well our prediction did
Step5: It seems we have a perfect classifier!
Question
Step6: Now we train on the training data, and validate on the test data
Step7: This gives us a more reliable estimate of how our model is doing.
The metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine
Step8: This can also be computed directly from the model.score method
Step9: Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors
Step10: We see that in this case, a small number of neighbors seems to be the best option.
Cross-Validation
One problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use 2-fold cross-validation, where we split the sample in half and perform the validation twice
Step11: Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help
Step12: K-fold Cross-Validation
Here we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.
We can do this by changing the cv parameter above. Let's do 10-fold cross-validation
Step13: This gives us an even better idea of how well our model is doing.
Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question
Step14: Now let's create a realization of this dataset
Step15: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit
Step16: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this
Step17: Now we'll use this to fit a quadratic curve to the data.
Step18: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
Step19: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively
Step20: Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.
Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset
Step21: Now let's plot the validation curves
Step22: Notice the trend here, which is common for this type of plot.
For a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data
Step23: Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points
Step24: Let's see what the learning curves look like for a linear model
Step25: This shows a typical learning curve
Step26: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex? | Python Code:
from __future__ import print_function, division
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
# Use seaborn for plotting defaults
import seaborn as sns; sns.set()
Explanation: Validation and Model Selection
Credits: Forked from PyCon 2015 Scikit-learn Tutorial by Jake VanderPlas
In this section, we'll look at model evaluation and the tuning of hyperparameters, which are parameters that define the model.
End of explanation
from sklearn.datasets import load_digits
digits = load_digits()
X = digits.data
y = digits.target
Explanation: Validating Models
One of the most important pieces of machine learning is model validation: that is, checking how well your model fits a given dataset. But there are some pitfalls you need to watch out for.
Consider the digits example we've been looking at previously. How might we check how well our model fits the data?
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
Explanation: Let's fit a K-neighbors classifier
End of explanation
y_pred = knn.predict(X)
Explanation: Now we'll use this classifier to predict labels for the data
End of explanation
print("{0} / {1} correct".format(np.sum(y == y_pred), len(y)))
Explanation: Finally, we can check how well our prediction did:
End of explanation
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
X_train.shape, X_test.shape
Explanation: It seems we have a perfect classifier!
Question: what's wrong with this?
Validation Sets
Above we made the mistake of testing our data on the same set of data that was used for training. This is not generally a good idea. If we optimize our estimator this way, we will tend to over-fit the data: that is, we learn the noise.
A better way to test a model is to use a hold-out set which doesn't enter the training. We've seen this before using scikit-learn's train/test split utility:
End of explanation
knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
print("{0} / {1} correct".format(np.sum(y_test == y_pred), len(y_test)))
Explanation: Now we train on the training data, and validate on the test data:
End of explanation
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_pred)
Explanation: This gives us a more reliable estimate of how our model is doing.
The metric we're using here, comparing the number of matches to the total number of samples, is known as the accuracy score, and can be computed using the following routine:
End of explanation
knn.score(X_test, y_test)
Explanation: This can also be computed directly from the model.score method:
End of explanation
for n_neighbors in [1, 5, 10, 20, 30]:
knn = KNeighborsClassifier(n_neighbors)
knn.fit(X_train, y_train)
print(n_neighbors, knn.score(X_test, y_test))
Explanation: Using this, we can ask how this changes as we change the model parameters, in this case the number of neighbors:
End of explanation
X1, X2, y1, y2 = train_test_split(X, y, test_size=0.5, random_state=0)
X1.shape, X2.shape
print(KNeighborsClassifier(1).fit(X2, y2).score(X1, y1))
print(KNeighborsClassifier(1).fit(X1, y1).score(X2, y2))
Explanation: We see that in this case, a small number of neighbors seems to be the best option.
Cross-Validation
One problem with validation sets is that you "lose" some of the data. Above, we've only used 3/4 of the data for the training, and used 1/4 for the validation. Another option is to use 2-fold cross-validation, where we split the sample in half and perform the validation twice:
End of explanation
from sklearn.cross_validation import cross_val_score
cross_val_score(KNeighborsClassifier(1), X, y, cv=2)
Explanation: Thus a two-fold cross-validation gives us two estimates of the score for that parameter.
Because this is a bit of a pain to do by hand, scikit-learn has a utility routine to help:
End of explanation
cross_val_score(KNeighborsClassifier(1), X, y, cv=10)
Explanation: K-fold Cross-Validation
Here we've used 2-fold cross-validation. This is just one specialization of $K$-fold cross-validation, where we split the data into $K$ chunks and perform $K$ fits, where each chunk gets a turn as the validation set.
We can do this by changing the cv parameter above. Let's do 10-fold cross-validation:
End of explanation
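# Sketch (not part of the original tutorial): the idea behind cross_val_score written
# out by hand; the indices are shuffled once and split into K folds, and each fold
# takes a turn as the validation set.
def manual_kfold_scores(model_factory, X, y, K=10):
    idx = np.random.RandomState(0).permutation(len(X))
    folds = np.array_split(idx, K)
    scores = []
    for k in range(K):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        scores.append(model.score(X[test_idx], y[test_idx]))
    return np.array(scores)
manual_kfold_scores(lambda: KNeighborsClassifier(1), X, y)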
def test_func(x, err=0.5):
y = 10 - 1. / (x + 0.1)
if err > 0:
y = np.random.normal(y, err)
return y
Explanation: This gives us an even better idea of how well our model is doing.
Overfitting, Underfitting and Model Selection
Now that we've gone over the basics of validation, and cross-validation, it's time to go into even more depth regarding model selection.
The issues associated with validation and
cross-validation are some of the most important
aspects of the practice of machine learning. Selecting the optimal model
for your data is vital, and is a piece of the problem that is not often
appreciated by machine learning practitioners.
Of core importance is the following question:
If our estimator is underperforming, how should we move forward?
Use a simpler or more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a
more complicated model will give worse results. Also, sometimes adding
training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine
learning practitioners from the unsuccessful.
Illustration of the Bias-Variance Tradeoff
For this section, we'll work with a simple 1D regression problem. This will help us to
easily visualize the data and the model, and the results generalize easily to higher-dimensional
datasets. We'll explore a simple linear regression problem.
This can be accomplished within scikit-learn with the sklearn.linear_model module.
We'll create a simple nonlinear function that we'd like to fit
End of explanation
def make_data(N=40, error=1.0, random_seed=1):
# randomly sample the data
np.random.seed(1)
X = np.random.random(N)[:, np.newaxis]
y = test_func(X.ravel(), error)
return X, y
X, y = make_data(40, error=1)
plt.scatter(X.ravel(), y);
Explanation: Now let's create a realization of this dataset:
End of explanation
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
model = LinearRegression()
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
Explanation: Now say we want to perform a regression on this data. Let's use the built-in linear regression function to compute a fit:
End of explanation
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
Explanation: We have fit a straight line to the data, but clearly this model is not a good choice. We say that this model is biased, or that it under-fits the data.
Let's try to improve this by creating a more complicated model. We can do this by adding degrees of freedom, and computing a polynomial regression over the inputs. Scikit-learn makes this easy with the PolynomialFeatures preprocessor, which can be pipelined with a linear regression.
Let's make a convenience routine to do this:
End of explanation
model = PolynomialRegression(2)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)));
Explanation: Now we'll use this to fit a quadratic curve to the data.
End of explanation
model = PolynomialRegression(30)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.title("mean squared error: {0:.3g}".format(mean_squared_error(model.predict(X), y)))
plt.ylim(-4, 14);
Explanation: This reduces the mean squared error, and makes a much better fit. What happens if we use an even higher-degree polynomial?
End of explanation
from IPython.html.widgets import interact
def plot_fit(degree=1, Npts=50):
X, y = make_data(Npts, error=1)
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
model = PolynomialRegression(degree=degree)
model.fit(X, y)
y_test = model.predict(X_test)
plt.scatter(X.ravel(), y)
plt.plot(X_test.ravel(), y_test)
plt.ylim(-4, 14)
plt.title("mean squared error: {0:.2f}".format(mean_squared_error(model.predict(X), y)))
interact(plot_fit, degree=[1, 30], Npts=[2, 100]);
Explanation: When we increase the degree to this extent, it's clear that the resulting fit is no longer reflecting the true underlying distribution, but is more sensitive to the noise in the training data. For this reason, we call it a high-variance model, and we say that it over-fits the data.
Just for fun, let's use IPython's interact capability (only in IPython 2.0+) to explore this interactively:
End of explanation
X, y = make_data(120, error=1.0)
plt.scatter(X, y);
from sklearn.learning_curve import validation_curve
def rms_error(model, X, y):
y_pred = model.predict(X)
return np.sqrt(np.mean((y - y_pred) ** 2))
degree = np.arange(0, 18)
val_train, val_test = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=7,
scoring=rms_error)
Explanation: Detecting Over-fitting with Validation Curves
Clearly, computing the error on the training data is not enough (we saw this previously). As above, we can use cross-validation to get a better handle on how the model fit is working.
Let's do this here, again using the validation_curve utility. To make things more clear, we'll use a slightly larger dataset:
End of explanation
def plot_with_err(x, data, **kwargs):
mu, std = data.mean(1), data.std(1)
lines = plt.plot(x, mu, '-', **kwargs)
plt.fill_between(x, mu - std, mu + std, edgecolor='none',
facecolor=lines[0].get_color(), alpha=0.2)
plot_with_err(degree, val_train, label='training scores')
plot_with_err(degree, val_test, label='validation scores')
plt.xlabel('degree'); plt.ylabel('rms error')
plt.legend();
Explanation: Now let's plot the validation curves:
End of explanation
model = PolynomialRegression(4).fit(X, y)
plt.scatter(X, y)
plt.plot(X_test, model.predict(X_test));
Explanation: Notice the trend here, which is common for this type of plot.
For a small model complexity, the training error and validation error are very similar. This indicates that the model is under-fitting the data: it doesn't have enough complexity to represent the data. Another way of putting it is that this is a high-bias model.
As the model complexity grows, the training and validation scores diverge. This indicates that the model is over-fitting the data: it has so much flexibility, that it fits the noise rather than the underlying trend. Another way of putting it is that this is a high-variance model.
Note that the training score (nearly) always improves with model complexity. This is because a more complicated model can fit the noise better, so the model improves. The validation data generally has a sweet spot, which here is around 5 terms.
Here's our best-fit model according to the cross-validation:
End of explanation
from sklearn.learning_curve import learning_curve
def plot_learning_curve(degree=3):
train_sizes = np.linspace(0.05, 1, 20)
N_train, val_train, val_test = learning_curve(PolynomialRegression(degree),
X, y, train_sizes, cv=5,
scoring=rms_error)
plot_with_err(N_train, val_train, label='training scores')
plot_with_err(N_train, val_test, label='validation scores')
plt.xlabel('Training Set Size'); plt.ylabel('rms error')
plt.ylim(0, 3)
plt.xlim(5, 80)
plt.legend()
Explanation: Detecting Data Sufficiency with Learning Curves
As you might guess, the exact turning-point of the tradeoff between bias and variance is highly dependent on the number of training points used. Here we'll illustrate the use of learning curves, which display this property.
The idea is to plot the mean-squared-error for the training and test set as a function of Number of Training Points
End of explanation
plot_learning_curve(1)
Explanation: Let's see what the learning curves look like for a linear model:
End of explanation
plot_learning_curve(3)
Explanation: This shows a typical learning curve: for very few training points, there is a large separation between the training and test error, which indicates over-fitting. Given the same model, for a large number of training points, the training and testing errors converge, which indicates potential under-fitting.
As you add more data points, the training error will never increase, and the testing error will never decrease (why do you think this is?)
It is easy to see that, in this plot, if you'd like to reduce the MSE down to the nominal value of 1.0 (which is the magnitude of the scatter we put in when constructing the data), then adding more samples will never get you there. For $d=1$, the two curves have converged and cannot move lower. What about for a larger value of $d$?
End of explanation
plot_learning_curve(10)
Explanation: Here we see that by adding more model complexity, we've managed to lower the level of convergence to an rms error of 1.0!
What if we get even more complex?
End of explanation |
8,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Integration Exercise 2
Imports
Step1: Definite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps
Step2: Integral 1
Here is an integral from the hyperbolic subsection
Step3: Integral 2
Here is an integral from the exponential functions subsection
Step4: Integral 3
Here is an integral from the trigonometric functions subsection
Step5: Integral 4
Here is an integral from the logarithmic functions subsection
Step6: Integral 5
Here is an integral from the rational and irrational functions subsection | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
Explanation: Integration Exercise 2
Imports
End of explanation
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
Explanation: Definite integrals
Here is a table of definite integrals. Many of these integrals have a number of parameters $a$, $b$, etc.
Find five of these integrals and perform the following steps:
Typeset the integral using LaTeX in a Markdown cell.
Define an integrand function that computes the value of the integrand.
Define an integral_approx function that uses scipy.integrate.quad to perform the integral.
Define an integral_exact function that computes the exact value of the integral.
Call and print the return value of integral_approx and integral_exact for one set of parameters.
Here is an example to show what your solutions should look like:
Example
Here is the integral I am performing:
$$ I_1 = \int_0^\infty \frac{dx}{x^2 + a^2} = \frac{\pi}{2a} $$
End of explanation
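As an optional cross-check (not part of the assignment), the same example integral can also be evaluated symbolically with SymPy; this is only a sketch and assumes sympy is available:
# hedged sketch: symbolic check of the example integral I_1 with SymPy
import sympy
x_s, a_s = sympy.symbols('x a', positive=True)
sympy.integrate(1 / (x_s**2 + a_s**2), (x_s, 0, sympy.oo))  # returns pi/(2*a)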
def integrand(x,a,b):
return np.sin(a*x)/np.sinh(b*x)
def integrate_approx(a,b):
I,e=integrate.quad(integrand,0,np.inf, args=(a,b))
return I
def integrate_exact(a,b):
return np.pi/(2*b)*np.tanh(a*np.pi/(2*b))
print('Numerical:', integrate_approx(1.0,2.0))
print('Exact:', integrate_exact(1.0,2.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 1
Here is an integral from the hyperbolic subsection:
\begin{equation}
\int_{0}^{\infty} \frac{\sin ax}{\sinh bx} dx = \frac{\pi}{2b}\tanh \frac{a\pi}{2b}
\end{equation}
End of explanation
def integrand(x,a,b):
return np.exp(-a*x)*np.cos(b*x)
def integrate_approx(a,b):
I,e=integrate.quad(integrand,0,np.inf, args=(a,b))
return I
def integrate_exact(a,b):
return a/(a**2+b**2)
print('Numerical:', integrate_approx(1.0,2.0))
print('Exact:', integrate_exact(1.0,2.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 2
Here is an integral from the exponential functions subsection:
\begin{equation}
\int_{0}^{\infty} e^{-ax} \cos bx \space dx = \frac{a}{a^{2}+b^{2}}
\end{equation}
End of explanation
def integrand(x,p):
return (1-np.cos(p*x))/x**2
def integrate_approx(p):
I,e=integrate.quad(integrand,0,np.inf, args=(p))
return I
def integrate_exact(p):
return p*np.pi/2
print('Numerical:', integrate_approx(4.0))
print('Exact:', integrate_exact(4.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 3
Here is an integral from the trigonometric functions subsection:
\begin{equation}
\int_{0}^{\infty} \frac{1-\cos px}{x^{2}} dx = \frac{\pi p}{2}
\end{equation}
End of explanation
def integrand(x,a,b):
return np.log(a**2+x**2)/(b**2+x**2)
def integrate_approx(a,b):
I,e=integrate.quad(integrand,0,np.inf, args=(a,b))
return I
def integrate_exact(a,b):
return np.pi/b*np.log(a+b)
print('Numerical:', integrate_approx(3.0,4.0))
print('Exact:', integrate_exact(3.0,4.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 4
Here is an integral from the logarithmic functions subsection:
\begin{equation}
\int_{0}^{\infty} \frac{\ln (a^{2}+x^{2})}{b^{2}+x^{2}} dx = \frac{\pi}{b}\ln(a+b) \qquad a,b>0
\end{equation}
End of explanation
def integrand(x,a,b):
return np.sqrt(a**2-x**2)
def integrate_approx(a,b):
I,e=integrate.quad(integrand,0,a, args=(a,b))
return I
def integrate_exact(a,b):
return np.pi*a**2/4
print('Numerical:', integrate_approx(1.0,2.0))
print('Exact:', integrate_exact(1.0,2.0))
assert True # leave this cell to grade the above integral
Explanation: Integral 5
Here is an integral from the rational and irrational functions subsection:
\begin{equation}
\int_{0}^{a} \sqrt{a^{2}-x^{2}} dx = \frac{\pi a^{2}}{4}
\end{equation}
End of explanation |
8,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: IPSL
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:45
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
8,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Topic Networks
In this notebook, we will learn how to visualize a topic model using network graphs. Networks can be a great way to explore topic models. We can use them to see how topics belonging to one context may relate to topics in another context and discover common factors between them. We can use them to find communities of similar topics and pinpoint the most influential topic with a large number of connections, or perform any number of other workflows designed for network analysis.
Step1: Train Model
We'll use the fake news dataset from Kaggle for this notebook. The first step is to preprocess the data and train our topic model using LDA. You can also refer to this notebook for tips and suggestions on pre-processing the text data and on how to train an LDA model to get good results.
Step2: Visualize topic network
Firstly, a distance matrix is calculated to store the distance between every topic pair. The nodes of the network graph will represent topics and the edges between them will be created based on the distance between the two connecting nodes/topics.
Step3: To draw the edges, we can use different types of distance metrics available in gensim for calculating the distance between every topic pair. Next, we have to define a distance threshold such that topic pairs with a distance above it do not get connected.
Step4: Now that we have our edges, let's plot the annotated network graph. On hovering over the nodes, we'll see the topic_id along with its top words, and on hovering over the edges, we'll see the intersecting/different words of the two topics that it connects.
Step5: For the above graph, we just used the 20th percentile of all the distance values. But we can also experiment with a few different values so that the graph doesn't become too crowded or too sparse and we get an optimal amount of information about similar topics or any interesting relations between different topics.
Or we can also get an idea of threshold from the dendrogram (with ‘single’ linkage function). You can refer to this notebook for more details on topic dendrogram visualization. The y-values in the dendrogram represent the metric distances and if we choose a certain y-value then only those topics which are clustered below it would be connected. So let's plot the dendrogram now to see the sequential clustering process with increasing distance values. | Python Code:
!pip install "plotly>=2.0.16"  # 2.0.16 needed to support the 'hovertext' argument of create_dendrogram
from gensim.models.ldamodel import LdaModel
from gensim.corpora import Dictionary
import pandas as pd
import re
from gensim.parsing.preprocessing import remove_stopwords, strip_punctuation
import numpy as np
Explanation: Topic Networks
In this notebook, we will learn how to visualize a topic model using network graphs. Networks can be a great way to explore topic models. We can use them to see how topics belonging to one context may relate to topics in another context and discover common factors between them. We can use them to find communities of similar topics and pinpoint the most influential topic with a large number of connections, or perform any number of other workflows designed for network analysis.
End of explanation
!wget https://www.kaggle.com/mrisdal/fake-news/downloads/fake-news.zip/1 -O fake.news.zip
!unzip fake.news.zip
df_fake = pd.read_csv('fake.csv')
df_fake[['title', 'text', 'language']].head()
df_fake = df_fake.loc[(pd.notnull(df_fake.text)) & (df_fake.language=='english')]
# remove stopwords and punctuations
def preprocess(row):
return strip_punctuation(remove_stopwords(row.lower()))
df_fake['text'] = df_fake['text'].apply(preprocess)
# Convert data to required input format by LDA
texts = []
for line in df_fake.text:
lowered = line.lower()
words = re.findall(r'\w+', lowered, flags=re.UNICODE|re.LOCALE)
texts.append(words)
# Create a dictionary representation of the documents.
dictionary = Dictionary(texts)
# Filter out words that occur in less than 2 documents, or in more than 40% of the documents.
dictionary.filter_extremes(no_below=2, no_above=0.4)
# Bag-of-words representation of the documents.
corpus_fake = [dictionary.doc2bow(text) for text in texts]
lda_fake = LdaModel(corpus=corpus_fake, id2word=dictionary, num_topics=35, chunksize=1500, iterations=200, alpha='auto')
lda_fake.save('lda_35')
lda_fake = LdaModel.load('lda_35')
Explanation: Train Model
We'll use the fake news dataset from Kaggle for this notebook. The first step is to preprocess the data and train our topic model using LDA. You can also refer to this notebook for tips and suggestions on pre-processing the text data and on how to train an LDA model to get good results.
End of explanation
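As an optional sanity check on the trained model (an extra step, not part of the original flow), gensim's CoherenceModel can score the topics; this is a sketch and assumes the texts and dictionary objects defined above:
# hedged sketch: topic coherence as a rough quality check for lda_fake
from gensim.models import CoherenceModel
cm = CoherenceModel(model=lda_fake, texts=texts, dictionary=dictionary, coherence='c_v')
print(cm.get_coherence())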
# get topic distributions
topic_dist = lda_fake.state.get_lambda()
# get topic terms
num_words = 50
topic_terms = [{w for (w, _) in lda_fake.show_topic(topic, topn=num_words)} for topic in range(topic_dist.shape[0])]
Explanation: Visualize topic network
Firstly, a distance matrix is calculated to store the distance between every topic pair. The nodes of the network graph will represent topics and the edges between them will be created based on the distance between the two connecting nodes/topics.
End of explanation
from scipy.spatial.distance import pdist, squareform
from gensim.matutils import jensen_shannon
import networkx as nx
import itertools as itt
# calculate distance matrix using the input distance metric
def distance(X, dist_metric):
return squareform(pdist(X, lambda u, v: dist_metric(u, v)))
topic_distance = distance(topic_dist, jensen_shannon)
# store edges b/w every topic pair along with their distance
edges = [(i, j, {'weight': topic_distance[i, j]})
for i, j in itt.combinations(range(topic_dist.shape[0]), 2)]
# keep edges with distance below the threshold value
k = np.percentile(np.array([e[2]['weight'] for e in edges]), 20)
edges = [e for e in edges if e[2]['weight'] < k]
Explanation: To draw the edges, we can use different types of distance metrics available in gensim for calculating the distance between every topic pair. Next, we have to define a distance threshold such that topic pairs with a distance above it do not get connected.
End of explanation
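For example, another metric from gensim.matutils can be swapped into the same distance() helper defined above (a sketch; Hellinger is one of the distances gensim provides):
# hedged sketch: Hellinger distance instead of Jensen-Shannon, reusing the helper above
from gensim.matutils import hellinger
topic_distance_hellinger = distance(topic_dist, hellinger)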
import plotly.offline as py
from plotly.graph_objs import *
import plotly.figure_factory as ff
py.init_notebook_mode()
# add nodes and edges to graph layout
G = nx.Graph()
G.add_nodes_from(range(topic_dist.shape[0]))
G.add_edges_from(edges)
graph_pos = nx.spring_layout(G)
# initialize traces for drawing nodes and edges
node_trace = Scatter(
x=[],
y=[],
text=[],
mode='markers',
hoverinfo='text',
marker=Marker(
showscale=True,
colorscale='YIGnBu',
reversescale=True,
color=[],
size=10,
colorbar=dict(
thickness=15,
xanchor='left'
),
line=dict(width=2)))
edge_trace = Scatter(
x=[],
y=[],
text=[],
line=Line(width=0.5, color='#888'),
hoverinfo='text',
mode='lines')
# no. of terms to display in annotation
n_ann_terms = 10
# add edge trace with annotations
for edge in G.edges():
x0, y0 = graph_pos[edge[0]]
x1, y1 = graph_pos[edge[1]]
pos_tokens = topic_terms[edge[0]] & topic_terms[edge[1]]
neg_tokens = topic_terms[edge[0]].symmetric_difference(topic_terms[edge[1]])
pos_tokens = list(pos_tokens)[:min(len(pos_tokens), n_ann_terms)]
neg_tokens = list(neg_tokens)[:min(len(neg_tokens), n_ann_terms)]
annotation = "<br>".join((": ".join(("+++", str(pos_tokens))), ": ".join(("---", str(neg_tokens)))))
x_trace = list(np.linspace(x0, x1, 10))
y_trace = list(np.linspace(y0, y1, 10))
text_annotation = [annotation] * 10
x_trace.append(None)
y_trace.append(None)
text_annotation.append(None)
edge_trace['x'] += x_trace
edge_trace['y'] += y_trace
edge_trace['text'] += text_annotation
# add node trace with annotations
for node in G.nodes():
x, y = graph_pos[node]
node_trace['x'].append(x)
node_trace['y'].append(y)
node_info = ''.join((str(node+1), ': ', str(list(topic_terms[node])[:n_ann_terms])))
node_trace['text'].append(node_info)
# color node according to no. of connections
for node, adjacencies in enumerate(G.adjacency()):
node_trace['marker']['color'].append(len(adjacencies))
fig = Figure(data=Data([edge_trace, node_trace]),
layout=Layout(showlegend=False,
hovermode='closest',
xaxis=XAxis(showgrid=True, zeroline=False, showticklabels=True),
yaxis=YAxis(showgrid=True, zeroline=False, showticklabels=True)))
py.iplot(fig)
Explanation: Now that we have our edges, let's plot the annotated network graph. On hovering over the nodes, we'll see the topic_id along with its top words, and on hovering over the edges, we'll see the intersecting/different words of the two topics that it connects.
End of explanation
from gensim.matutils import jensen_shannon
import scipy as scp
from scipy.cluster import hierarchy as sch
from scipy import spatial as scs
# get topic distributions
topic_dist = lda_fake.state.get_lambda()
# get topic terms
num_words = 300
topic_terms = [{w for (w, _) in lda_fake.show_topic(topic, topn=num_words)} for topic in range(topic_dist.shape[0])]
# no. of terms to display in annotation
n_ann_terms = 10
# use Jenson-Shannon distance metric in dendrogram
def js_dist(X):
return pdist(X, lambda u, v: jensen_shannon(u, v))
# define method for distance calculation in clusters
linkagefun=lambda x: sch.linkage(x, 'single')
# calculate text annotations
def text_annotation(topic_dist, topic_terms, n_ann_terms, linkagefun):
# get dendrogram hierarchy data
d = js_dist(topic_dist)
Z = linkagefun(d)
P = sch.dendrogram(Z, orientation="bottom", no_plot=True)
# store topic no.(leaves) corresponding to the x-ticks in dendrogram
x_ticks = np.arange(5, len(P['leaves']) * 10 + 5, 10)
x_topic = dict(zip(P['leaves'], x_ticks))
# store {topic no.:topic terms}
topic_vals = dict()
for key, val in x_topic.items():
topic_vals[val] = (topic_terms[key], topic_terms[key])
text_annotations = []
# loop through every trace (scatter plot) in dendrogram
for trace in P['icoord']:
fst_topic = topic_vals[trace[0]]
scnd_topic = topic_vals[trace[2]]
# annotation for two ends of current trace
pos_tokens_t1 = list(fst_topic[0])[:min(len(fst_topic[0]), n_ann_terms)]
neg_tokens_t1 = list(fst_topic[1])[:min(len(fst_topic[1]), n_ann_terms)]
pos_tokens_t4 = list(scnd_topic[0])[:min(len(scnd_topic[0]), n_ann_terms)]
neg_tokens_t4 = list(scnd_topic[1])[:min(len(scnd_topic[1]), n_ann_terms)]
t1 = "<br>".join((": ".join(("+++", str(pos_tokens_t1))), ": ".join(("---", str(neg_tokens_t1)))))
t2 = t3 = ()
t4 = "<br>".join((": ".join(("+++", str(pos_tokens_t4))), ": ".join(("---", str(neg_tokens_t4)))))
# show topic terms in leaves
if trace[0] in x_ticks:
t1 = str(list(topic_vals[trace[0]][0])[:n_ann_terms])
if trace[2] in x_ticks:
t4 = str(list(topic_vals[trace[2]][0])[:n_ann_terms])
text_annotations.append([t1, t2, t3, t4])
# calculate intersecting/diff for upper level
intersecting = fst_topic[0] & scnd_topic[0]
different = fst_topic[0].symmetric_difference(scnd_topic[0])
center = (trace[0] + trace[2]) / 2
topic_vals[center] = (intersecting, different)
# remove trace value after it is annotated
topic_vals.pop(trace[0], None)
topic_vals.pop(trace[2], None)
return text_annotations
# get text annotations
annotation = text_annotation(topic_dist, topic_terms, n_ann_terms, linkagefun)
# Plot dendrogram
dendro = ff.create_dendrogram(topic_dist, distfun=js_dist, labels=range(1, 36), linkagefun=linkagefun, hovertext=annotation)
dendro['layout'].update({'width': 1000, 'height': 600})
py.iplot(dendro)
Explanation: For the above graph, we just used the 20th percentile of all the distance values. But we can also experiment with a few different values so that the graph doesn't become too crowded or too sparse and we get an optimal amount of information about similar topics or any interesting relations between different topics.
We can also get an idea of the threshold from the dendrogram (with the 'single' linkage function). You can refer to this notebook for more details on topic dendrogram visualization. The y-values in the dendrogram represent the metric distances, and if we choose a certain y-value then only those topics which are clustered below it would be connected. So let's plot the dendrogram now to see the sequential clustering process with increasing distance values.
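As a sketch of turning a chosen y-value into actual topic groups (t below is a hypothetical threshold read off the dendrogram's y-axis; it reuses js_dist, linkagefun and sch from the cells above):
# hedged sketch: cut the 'single'-linkage tree at a chosen distance threshold
Z = linkagefun(js_dist(topic_dist))
t = 0.3  # hypothetical threshold taken from the dendrogram
topic_groups = sch.fcluster(Z, t, criterion='distance')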
End of explanation |
8,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
← Back to Index
Symbolic Representations
Symbolic music representations comprise any kind of score representation with an explicit encoding of notes or other musical events. These include machine-readable data formats such as MIDI. Any kind of digital data format may be regarded as symbolic since it is based on a finite alphabet of letters or symbols.
Piano-Roll Representations
Around the late 19th and early 20th centuries, self-playing pianos called player pianos became popular. The input for these pianos is a continuous roll of paper with holes punched into it. This paper roll is called a piano roll. Performances by famous musicians such as Gustav Mahler, Edvard Grieg, Scott Joplin and George Gershwin have been recorded onto piano rolls.
The pianola is closely related to the player piano, but rather than being a self-contained instrument it sits in front of a regular piano and plays its keys. Here is a pianola in action
Step1: Today, a piano-roll representation generically refers to any visualization of note information resembling a piano roll. See below for examples of piano-roll representations by Stephen Malinowski. Here, the horizontal axis represents time, and the vertical axis represents pitch. | Python Code:
ipd.display( ipd.YouTubeVideo("2A6ZXZwl3nA", start=106) )
Explanation: ← Back to Index
Symbolic Representations
Symbolic music representations comprise any kind of score representation with an explicit encoding of notes or other musical events. These include machine-readable data formats such as MIDI. Any kind of digital data format may be regarded as symbolic since it is based on a finite alphabet of letters or symbols.
Piano-Roll Representations
Around the late 19th and early 20th centuries, self-playing pianos called player pianos became popular. The input for these pianos is a continuous roll of paper with holes punched into it. This paper roll is called a piano roll. Performances by famous musicians such as Gustav Mahler, Edvard Grieg, Scott Joplin and George Gershwin have been recorded onto piano rolls.
The pianola is closely related to the player piano, but rather than being a self-contained instrument it sits in front of a regular piano and plays its keys. Here is a pianola in action:
End of explanation
ipd.display( ipd.YouTubeVideo("LlvUepMa31o", start=15) )
ipd.display( ipd.YouTubeVideo("Kri2jWr08S4", start=11) )
Explanation: Today, a piano-roll representation generically refers to any visualization of note information resembling a piano roll. See below for examples of piano-roll representations by Stephen Malinowski. Here, the horizontal axis represents time, and the vertical axis represents pitch.
End of explanation |
8,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Downloading the data and first processing
Step1: Darksky weather
I use the DarkSky API (https
Step2: These are hourly data. To match the measured data and the solar flux more precisely, the sampling frequency is artificially increased to 15 min
Step3: Solar irradiation
Routine used to obtain
Step4: Projection onto the glazed surfaces
Step5: Note
Step6: Measured indoor temperature
Download from EmonCMS
Step7: Saving the DataFrame with Pickle
Step8: Graph | Python Code:
coords_grenoble = (45.1973288, 5.7139923)
startday = pd.to_datetime('12/07/2017', format='%d/%m/%Y').tz_localize('Europe/Paris')
lastday = pd.to_datetime('now').tz_localize('Europe/Paris')
Explanation: Downloading the data and first processing
End of explanation
# routine that automatically builds a pandas dataframe:
# downloads the data online
weatherdata = wf.buildmultidayDF(startday, lastday, coords_grenoble )
Explanation: Darksky weather
I use the DarkSky API (https://darksky.net).
End of explanation
weatherdata = weatherdata.resample('15min').interpolate('linear')
weatherdata['temperature'].plot(figsize=(14, 3), color='r' ); plt.ylabel('°C');
Explanation: These are hourly data. To match the measured data and the solar flux more precisely, the sampling frequency is artificially increased to 15 min:
End of explanation
sundata = sun.buildmultidayDF( coords_grenoble, weatherdata.index, cloudCover = weatherdata['cloudCover'] )
sundata['I0'].plot(figsize=(14, 3)); plt.ylabel('W/m2');
Explanation: Solar irradiation
Routine used to obtain:
The flux (W/m2) on a surface perpendicular to the sun's rays, taking the cloud cover into account.
The position of the sun (altitude and azimuth), used afterwards to project onto the windows.
Diffuse radiation is not included, but the cloud cover is taken into account (roughly $0.75 \cdot c^{3.4}$ ...)
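For reference, a minimal sketch of such a projection onto a tilted window follows; this is not the actual sun.projectDF implementation, and the 'altitude'/'azimuth' column names as well as the tilt/azimuth angles are assumptions:
import numpy as np
alt = np.radians(sundata['altitude'])            # sun altitude (deg -> rad), assumed column name
az = np.radians(sundata['azimuth'])              # sun azimuth (deg -> rad), assumed column name
sigma, waz = np.radians(37.0), np.radians(50.0)  # window tilt and azimuth, illustrative values
cos_inc = np.cos(alt) * np.sin(sigma) * np.cos(az - waz) + np.sin(alt) * np.cos(sigma)
flux_window = sundata['I0'] * np.clip(cos_inc, 0.0, None)   # keep only the sun-facing contribution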
End of explanation
# surface (m2), sigma (deg), azimuth (deg)
windows = { 'bastille':(1.2*0.8, 37, 50) ,
'cuisine':(0.3*0.72 *2, 90, 50 ),
'chambre':(0.3*0.72 *2, 90, 180+50),
'vercors':(0.6*0.8 * 2, 37, 180+50) }
sunFlux_dict = {}
for k, values in windows.items():
sunFlux_dict['flux_'+k] = values[0] * sun.projectDF( values[1], values[2], sundata )
flux_tot = pd.DataFrame( sunFlux_dict )
flux_tot.plot(figsize=(14, 3)); plt.ylabel('W');
# Sum over all windows
weatherdata['flux_tot'] = flux_tot.sum(axis=1)
weatherdata['flux_tot'].plot( figsize=(14, 3) ); plt.ylabel('W');
Explanation: Projection onto the glazed surfaces
End of explanation
# Wind speed
weatherdata['windSpeed'].plot(figsize=(14, 2)); plt.ylabel('m/s');
# Rain
weatherdata['precipIntensity'].plot(figsize=(14, 2)); plt.ylabel('mm / h');
Explanation: Note: this is the 'raw' flux received by the surfaces. Reflection, absorption and re-emission still have to be taken into account ... this is described by a multiplicative factor between 0 and 1, called facteur_g (solar gain factor, see Wikipedia).
At a minimum, $facteur_g = 0.76$ for double glazing. There is also absorption by the frame (and the curtains), transmission by conduction, and re-emission.
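A minimal sketch of applying such a factor (the constant and the new column name are assumptions):
facteur_g = 0.76                                                  # double glazing, lower bound
weatherdata['flux_transmitted'] = weatherdata['flux_tot'] * facteur_g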
End of explanation
dataframefreq = '15min'
feeds = { 'T_int': 3 }
Tmesure = getfeeds.builddataframe( feeds, dataframefreq , startdate=startday )
Tmesure.plot( figsize=(14, 3) ); plt.ylabel('°C');
# Remove some data
mask_start = pd.to_datetime( '28/06/2017 22:00' ).tz_localize('Europe/Paris')
mask_end = pd.to_datetime( '29/06/2017 10:00' ) .tz_localize('Europe/Paris')
mask = (Tmesure.index > mask_start) & (Tmesure.index < mask_end )
Tmesure['T_int'].loc[mask] = np.nan
# Resample
Tmesure = Tmesure.resample('15min').mean()
Tmesure.plot( figsize=(14, 3) ); plt.ylabel('°C');
# Merge
weatherdata['T_int'] = Tmesure['T_int']
Explanation: Measured indoor temperature
Download from EmonCMS
End of explanation
weatherdata.to_pickle( 'weatherdata.pck' )
Explanation: Saving the DataFrame with Pickle
End of explanation
plt.figure( figsize=(14, 5) )
plt.subplot( 2, 1, 1 )
plt.plot(weatherdata['T_int'] , ':k')
plt.plot(weatherdata['temperature'], alpha=0.4);
plt.subplot( 2, 1, 2 )
plt.plot(weatherdata['flux_tot'] , 'r');
Explanation: Graph
End of explanation |
8,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Wrangling - Adding Latitudes and Longitudes using Google Maps Geocoding API, to create some neat visualisations
Most Data Scientists will tell you that they spend most of their time Data Wrangling.
<b><u>Data Wrangling</u>
Step1: Declaring the important helper functions and global variables
You have to set up your <b>Geocoding API Key</b> before you proceed.
client_key
Step2: Connecting to the Mongo DB client running on the same machine.
Must change if the Mongo DB is running on a separate machine. Check MongoDB docs
Step3: Helper Function to send request (to url- address_field) and append to the MongoDB collection
Step4: Extracting latitude and longitude data of the aircrash location, and appending to the MongoDB collection
There are a lot of try and except blocks as the location strings do not follow a nice format.
Some special cases have to be handled,
<b>Example
Step5: Extracting latitude and longitude data of the source and destination, and appending to the MongoDB collection
There are a few try and except blocks as the location strings do not follow a nice format.
Some special cases have to be handled,
Open to change. Please email me at [email protected] if you can think of a more elegant solution to handle special cases with the geocoding api
After running the code block below, the following fields should have been added to the MongoDB collection
Step6: Code for churning out XML files, that are used for visualisation purposes on the web app.
Please visit our website to view the final product (www.sykdesigns.com/GE2324)
Basically, the XML file(s) created using the code below were used for creating some neat looking Google Maps visualisations of the aircrashes/accidents.
Step7: Generating XML File with the following Schema
Step8: This is what the XML should look like | Python Code:
__author__ = 'shivam_gaur'
import requests
from bs4 import BeautifulSoup
import re
from pymongo import MongoClient
Explanation: Data Wrangling - Adding Latitudes and Longitudes using Google Maps Geocoding API, to create some neat visualisations
Most Data Scientists will tell you that they spend most of their time Data Wrangling.
<b><u>Data Wrangling</u>:</b> When the data collected from a source is not sufficient enough to provide valuable insights, then data wrangling must be performed. Data Wrangling refers to the process of adding supplementary data to the existing dataset.
Adding latitude and longitude data for each air crash
We have crawled the following 'fields' for each air crash:
<table width="318">
<tbody>
<tr>
<td colspan="2" width="318">
<p><strong>Fields for each crash</strong></p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Date:</strong></p>
</td>
<td width="264">
<p><strong> </strong>Date of accident, in the format - January 01, 2001</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Time:</strong></p>
</td>
<td width="264">
<p><strong> </strong>Local time, in 24 hr. format unless otherwise specified</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Location:</strong></p>
</td>
<td width="264">
<p><strong> </strong>Location of accident</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Airline/Op:</strong></p>
</td>
<td width="264">
<p> Airline or operator of the aircraft</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Flight #:</strong></p>
</td>
<td width="264">
<p> Flight number assigned by the aircraft operator</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Route:</strong></p>
</td>
<td width="264">
<p> Complete or partial route flown prior to the accident</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>AC Type:</strong></p>
</td>
<td width="264">
<p> Aircraft type</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Reg</strong><strong>:</strong></p>
</td>
<td width="264">
<p> ICAO registration of the aircraft</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>cn</strong><strong> / ln:</strong></p>
</td>
<td width="264">
<p> Construction or serial number / Line or fuselage number</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Aboard:</strong></p>
</td>
<td width="264">
<p> Total aboard (passengers / crew)</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Fatalities:</strong></p>
</td>
<td width="264">
<p> Total fatalities aboard (passengers / crew)</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Ground:</strong></p>
</td>
<td width="264">
<p> Total killed on the ground</p>
</td>
</tr>
<tr>
<td width="55">
<p><strong>Summary:</strong></p>
</td>
<td width="264">
<p> Brief description of the accident and cause if known</p>
</td>
</tr>
</tbody>
</table>
We can add the following fields by using the existing fields and the Google Geocoding API.
For the following two fields, we can use the 'location' field of the dataset
* <b>geolat:</b> the latitude of the aircrash site
* <b>geolong:</b> the longitude of the aircrash site
For the following four fields, we can use the 'Route' field of the dataset. It is of the format <-source->-<-stop1->-<-stop2->-<-destination->
* <b>src_lat:</b> the latitude of the starting point (source) of the aircraft
* <b>src_long:</b> the longitude of the starting point (source) of the aircraft
* <b>dest_lat:</b> the latitude of the crash location ("destination") of the aircraft
* <b>dest_long:</b> the longitude of the crash location ("destination") of the aircraft
<b>The Google Maps Geocoding API</b> enables us to convert location strings to latitude and longitude data, which we can further visualise using plugins and Google Maps.
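For illustration, a single request to the same XML endpoint used below could look like this (a sketch; the address and the key placeholder are illustrative only):
import requests
from bs4 import BeautifulSoup
resp = requests.get('https://maps.googleapis.com/maps/api/geocode/xml?address=Lima,+Peru&key=<your_api_key>')
soup = BeautifulSoup(resp.text, 'lxml')
print (soup.find('lat').string + ', ' + soup.find('lng').string)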
Importing the required libraries
End of explanation
# Global Config Variables
client_key = '&key=<insert_your_39_character_api_key_here>'
_URL_ = 'https://maps.googleapis.com/maps/api/geocode/xml?address='
count = 0
# same helper function as the Flight Crash Data Crawler
def makeBeautifulSoupObject(url):
# Use a `Session` instance to customize how `requests` handles making HTTP requests.
session = requests.Session()
# `mount` a custom adapter that retries failed connections for HTTP and HTTPS requests, in this case- 5 times
session.mount("http://", requests.adapters.HTTPAdapter(max_retries=5))
session.mount("https://", requests.adapters.HTTPAdapter(max_retries=5))
source_code = session.get(url=url)
plain_text = source_code.text.encode('utf8')
soup = BeautifulSoup(plain_text, "lxml")
return soup
Explanation: Declaring the important helper functions and global variables
You have to set up your <b>Geocoding API Key</b> before you proceed.
client_key: <b>insert your Google Maps Geocoding API client_key as specified below</b>. Check the Geocoding API docs for help.
_URL_: The Google Maps Geocoding API url constant. <b>Must not be changed.</b>
End of explanation
# Connecting to Mongo instance
client = MongoClient()
# specify the name of the db in brackets
db = client['aircrashdb']
# specify the name of the collection in brackets
collection = db['crawled_data']
Explanation: Connecting to the Mongo DB client running on the same machine.
Must change if the Mongo DB is running on a separate machine. Check MongoDB docs
End of explanation
def Request_and_append(address_field):
print (address_field)
print ('\n')
finalurl = _URL_ + address_field + client_key
soup = makeBeautifulSoupObject(finalurl)
lat_ = soup.find_all('lat')
long_ = soup.findAll('lng')
collection.find_one_and_update({'_id':cur["_id"]},{'$set':{'geolat':lat_[0].string}})
collection.find_one_and_update({'_id':cur["_id"]},{'$set':{'geolong':long_[0].string}})
print (lat_[0].string + ' & ' + long_[0].string + ' - DONE. \n')
Explanation: Helper Function to send request (to url- address_field) and append to the MongoDB collection
End of explanation
# for all the records in the collection
cursor = collection.find()
for cur in cursor:
print(cur["loc"])
if not cur["loc"] =='NULL':
# if the latitude and logitude of aircrash location do not exist
if not "geolat" in cur or not "geolong" in cur:
try:
if not cur['loc'] == 'NULL':
address_field = '+'.join(cur['loc'].split(' '))
Request_and_append(address_field)
count = count + 1
else:
print ("NULL- No Route Field")
except:
print ("COULD NOT PROCESS " + cur['loc'].encode('utf-8'))
new_attempt1 = cur['loc'].encode('utf-8').rpartition(',')[-1]
print ('trying : ' + new_attempt1)
try:
address_field = '+'.join(new_attempt1.encode('utf-8').strip().split(' '))
Request_and_append(address_field)
except:
print ('New attempt has failed as well')
new_attempt2 = cur['loc'].encode('utf-8')
new_attempt2 = re.sub('[^0-9a-zA-Z ]+', '', new_attempt2)
arr = new_attempt2.split()
try:
i=0
for s in arr:
if (s.lower() == 'coast'):
new_attempt_final = (arr [i-1] + ' ' + arr[i]).encode('utf-8')
address_field = '+'.join(new_attempt_final.encode('utf-8').strip().split(' '))
Request_and_append(address_field)
break
elif (s.lower() == 'ocean'):
new_attempt_final = (arr [i-1] + ' ' + arr[i]).encode('utf-8')
address_field = '+'.join(new_attempt_final.encode('utf-8').strip().split(' '))
Request_and_append(address_field)
break
elif (s.lower() == 'sea'):
new_attempt_final = (arr [i-1] + ' ' + arr[i]).encode('utf-8')
address_field = '+'.join(new_attempt_final.encode('utf-8').strip().split(' '))
Request_and_append(address_field)
break
elif (s.lower() == 'off'):
new_attempt_final = (' '.join(arr [i+1:])).encode('utf-8')
address_field = '+'.join(new_attempt_final.encode('utf-8').strip().split(' '))
Request_and_append(address_field)
break
elif (s.lower() == 'persian'): # For persian gulf
new_attempt_final = (arr [i] + ' ' + arr[i+1]).encode('utf-8')
address_field = '+'.join(new_attempt_final.encode('utf-8').strip().split(' '))
Request_and_append(address_field)
break
elif (s.lower() == 'gulf'):
new_attempt_final = (arr [i] + ' ' + arr[i+1]+ ' ' + arr[i+2]).encode('utf-8')
address_field = '+'.join(new_attempt_final.encode('utf-8').strip().split(' '))
Request_and_append(address_field)
break
else:
new_attempt_final = arr [-1]
address_field = '+'.join(new_attempt_final.encode('utf-8').strip().split(' '))
Request_and_append(address_field)
i = i+1
i=0
except:
print ("I AM SORRY, THIS LOCATION CANNOT BE PROCESSED")
else:
# if the latitude and logitude of aircrash location ALREADY EXIST. This is in case this code block is run multiple times.
count = count + 1
print (cur['loc'].encode('utf-8')+' - ALREADY PROCESSED')
else:
print("ROUTE ===== NULL")
print (" TOTAL RECORDS THAT HAVE LATS AND LONGS: " + str(count))
Explanation: Extracting latitude and longitude data of the aircrash location, and appending to the MongoDB collection
There are a lot of try and except blocks as the location strings do not follow a nice format.
Some special cases have to be handled,
<b>Example:</b> if 'Off the coast of Peru' is sent to the geocoding API, it will return an error. Instead, just 'Peru' should be sent. We won't get the exact location, but the best possible approximation. You could explore the dataset and find out why this is a problem.
Open to change. Please email me at [email protected] if you can think of a more elegant solution to handle special cases with the geocoding api
After running the code block below, the following fields should have been added to the MongoDB collection:
* <b>geolat:</b> the latitude of the aircrash site
* <b>geolong:</b> the longitude of the aircrash site
End of explanation
counter = 0
cursor = collection.find()  # re-create the cursor; the previous loop consumed it
for cur in cursor:
print(cur["route"])
if not cur["route"]=='NULL':
if not "srclat" in cur and not "srclong" in cur or not "deslat" in cur and not "deslong" in cur:
try:
if not cur['route'] == 'NULL':
source_dest = cur["route"].split('-')
source_dest[0] = source_dest[0].strip()
source_dest[-1] = source_dest[-1].strip()
address_field1 = ' '.join(source_dest[0].split(' '))
print (address_field1)
address_field2 = ' '.join(source_dest[-1].split(' '))
print (address_field2)
print ('\n')
finalurl1 = _URL_ + address_field1 + client_key
finalurl2 = _URL_ + address_field2 + client_key
soup1 = makeBeautifulSoupObject(finalurl1)
soup2 = makeBeautifulSoupObject(finalurl2)
srclat = soup1.find_all('lat')
srclong = soup1.findAll('lng')
deslat = soup2.find_all('lat')
deslong = soup2.find_all('lng')
collection.find_one_and_update({'_id':cur["_id"]},{'$set':{'srclat':srclat[0].string}})
collection.find_one_and_update({'_id':cur["_id"]},{'$set':{'srclong':srclong[0].string}})
collection.find_one_and_update({'_id':cur["_id"]},{'$set':{'deslat':deslat[0].string}})
collection.find_one_and_update({'_id':cur["_id"]},{'$set':{'deslong':deslong[0].string}})
print (srclat[0].string)
print (srclong[0].string)
print (deslat[0].string)
print (deslong[0].string)
counter = counter +1
else:
print ("NULL- No Route Field")
except:
print ("COULD NOT PROCESS " + cur['route'].encode('utf-8'))
else:
print ("ALREADY PROCESSED: " + cur['route'].encode('utf-8'))
counter = counter +1
else:
print("ROUTE == NULL")
print ('TOTAL COUNTER: ' + str(counter))
Explanation: Extracting latitude and longitude data of the source and destination, and appending to the MongoDB collection
There are a few try and except blocks as the location strings do not follow a nice format.
Some special cases have to be handled,
Open to change. Please email me at [email protected] if you can think of a more elegant solution to handle special cases with the geocoding api
After running the code block below, the following fields should've be added to MongoDB Collection:
* <b>src_lat:</b> the latitude of the starting point (source) of the aircraft
* <b>src_long:</b> the longitude of the starting point (source) of the aircraft
* <b>dest_lat:</b> the latitude of the crash location ("destination") of the aircraft
* <b>dest_long:</b> the longitude of the crash location ("destination") of the aircraft
End of explanation
# Importing the required libraries
from xml.etree.ElementTree import ElementTree
from xml.etree.ElementTree import Element
import xml.etree.ElementTree as etree
import xml.dom.minidom
Explanation: Code for churning out XML files, that are used for visualisation purposes on the web app.
Please visit our website to view the final product (www.sykdesigns.com/GE2324)
Basically, the XML file(s) created using the code below were used for creating some neat looking Google Maps visualisations of the aircrashes/accidents.
End of explanation
root = Element('root')
tree = ElementTree(root)
cursor = collection.find()  # re-create the cursor before iterating over all records again
for cur in cursor:
if "geolat" in cur and "geolong" in cur:
element = Element('element')
root.append(element)
date = Element('date')
date.text= str(cur['date'])
element.append(date)
lat = Element('lat')
lat.text= cur['geolat']
element.append(lat)
long = Element('long')
long.text= cur['geolong']
element.append(long)
fatal = Element('fatal')
if not cur['fatalities_total'] == 'NULL' and not cur['ground'] == 'NULL':
total_fatalities = int(cur['fatalities_total']) + int(cur['ground'])
fatal.text= str(total_fatalities)
elif cur['fatalities_total'] == 'NULL':
fatal.text= cur['ground']
elif cur['ground'] == 'NULL':
fatal.text= cur['fatalities_total']
else:
fatal.text= cur['fatalities_total']
element.append(fatal)
xml_doc = xml.dom.minidom.parseString(etree.tostring(root))  # avoid shadowing the xml module
pretty_xml_as_string = xml_doc.toprettyxml()
print (pretty_xml_as_string)
with open(r'C:\Users\admin\Desktop\GE2324\crash_location_data_with_total_fatal.xml', "wb") as f:
f.write(pretty_xml_as_string.encode('utf-8'))
Explanation: Generating XML File with the following Schema:
-root- <br />
* -element-
* -date- -/date-
* -lat- -/lat-
* -long- -/long-
* -fatal- -/fatal-
* -/element-
-/root-
End of explanation
cursor = collection.find()
root = Element('root')
tree = ElementTree(root)
for cur in cursor:
if "srclat" in cur and "srclong" in cur and "deslat" in cur and "deslong" in cur:
element = Element('element')
root.append(element)
srclat = Element('srclat')
srclat.text= cur['srclat']
element.append(srclat)
srclong = Element('srclong')
srclong.text= cur['srclong']
element.append(srclong)
deslat = Element('deslat')
deslat.text= cur['deslat']
element.append(deslat)
deslong = Element('deslong')
deslong.text= cur['deslong']
element.append(deslong)
xml_doc = xml.dom.minidom.parseString(etree.tostring(root))  # avoid shadowing the xml module
pretty_xml_as_string = xml_doc.toprettyxml()
print (pretty_xml_as_string)
with open('route_data.xml', "wb") as f:
f.write(pretty_xml_as_string.encode('utf-8'))
Explanation: This is what the XML should look like:
* here, <b>element:</b> represents each aircrash
* <b>fatal:</b> is the number of fatalities due to each aircraft
* others: self explanatory
<img src = 'crash_location_data_with_total_fatal_.PNG'>
When visualised on our website, this is what it looks like:
<img src = 'aircrash_route_viz.PNG'>
Generating XML File with the following Schema:
-root-
* -element-
* -srclat- -/srclat-
* -srclong- -/srclong-
* -deslat- -/deslat-
* -deslong- -/deslong-
* -/element-
-/root-
End of explanation |
8,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example 2 - Plot PSDs for a station
The intent of this series of Jupyter Notebooks is to demonstrate how metrics can be retrieved from an ISPAQ sqlite database and provide some ideas on how to use or plot those metrics.
This example plots station PSDs. It requires that we have the PSD values already calculated for the target for the requested
days, and those values should live in the example database, named ispaq_example.db. If you have not already, you can run this command in your ISPAQ conda environment
to have the values generated for the target-days in this example (it will take several minutes to run)
Step1: Now we need to set some variables.
Step2: The first step is to create a query that will be used to retrieve the psds.
Step3: Create a connection to the database and run the query, loading it into a pandas dataframe
Step4: At this point, we have created a query to retrieve the metrics from the SQLite database, used sqlite3 to connect to the database, retreieved the metrics, closed the connection, and then ensured that the start times are in a datetime format for plotting purposes.
This is what the dataframe looks like
Step5: The PSD plot will be power vs. frequency, so we are going to group them together by start time
Step6: We will take the information from the dataframe that we loaded and rearrange it for plotting.
Step7: Now that we have the dataframe in this arrangement, we can start plotting it up.
First, a colorful version of the plot
Step8: Then, just for the sake of variety, a black and white version of the version of the plot | Python Code:
import sqlite3
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
import matplotlib.dates as mdates
import datetime
Explanation: Example 2 - Plot PSDs for a station
The intent of this series of Jupyter Notebooks is to demonstrate how metrics can be retrieved from an ISPAQ sqlite database and provide some ideas on how to use or plot those metrics.
This example plots station PSDs. It requires that we have the PSD values already calculated for the target for the requested
days, and those values should live in the example database, named ispaq_example.db. If you have not already, you can run this command in your ISPAQ conda environment
to have the values generated for the target-days in this example (it will take several minutes to run):
python3 run_ispaq.py -M psd_corrected -S ANMO --starttime 2020-10-01 --endtime 2020-10-16 --output db --db_name ispaq_example.db
This example will assume that the above command has already been run and the PSDs already exist.
To begin, we need to import the necessary modules:
End of explanation
filename = 'PSD.png'
db_name = '../ispaq_example.db'
metric = 'psd_corrected'
startDate = '2020-10-01'
endDate = '2020-10-15'
target = 'IU.ANMO.00.BH1.M'
filename = f'example2_{target}_{startDate}_{endDate}_PSD.png'
filename2 = f'example2_{target}_{startDate}_{endDate}_PSD_bw.png'
Explanation: Now we need to set some variables.
End of explanation
SQLcommand = "SELECT * FROM " + metric + \
" WHERE start >= '" + startDate + "' " \
"and start < '" + endDate + "' " \
"and (target like '" + target + "');"
print('\nThis is the query used to retrieve the PSDs from the database:')
print(SQLcommand)
Explanation: The first step is to create a query that will be used to retrieve the psds.
End of explanation
try:
conn = sqlite3.connect(db_name)
DF = pd.read_sql_query(SQLcommand, conn, parse_dates=['start','end'])
conn.close()
except:
print(f"Unable to connect to or find the {metric} table in the database {db_name}")
Explanation: Create a connection to the database and run the query, loading it into a pandas dataframe
End of explanation
print(DF)
Explanation: At this point, we have created a query to retrieve the metrics from the SQLite database, used sqlite3 to connect to the database, retrieved the metrics, closed the connection, and then ensured that the start times are in a datetime format for plotting purposes.
This is what the dataframe looks like:
End of explanation
DF = DF[['frequency','power','start']]
DFgrouped = DF.groupby(['start'])
print(DFgrouped)
Explanation: The PSD plot will be power vs. frequency, so we are going to group them together by start time:
End of explanation
plotDF = pd.DataFrame()
for name, group in DFgrouped:
tmpDF = pd.DataFrame()
tmpDF[name] = group['power']
tmpDF.set_axis(group['frequency'], axis='index', inplace=True)
plotDF = pd.concat([plotDF, tmpDF], axis=1, sort=False)
print("\nThis is the dataframe that will be used to plot:")
print(plotDF)
Explanation: We will take the information from the dataframe that we loaded and rearrange it for plotting.
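The same rearrangement can also be done in a single step with pandas' pivot (a sketch that relies on the frequency/power/start columns selected above):
# one-line alternative to the groupby/concat loop
plotDF_alt = DF.pivot(index='frequency', columns='start', values='power')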
End of explanation
ax = plotDF.plot(legend=False, alpha=1, title=f'{target}\n{startDate} through {endDate}')
ax.set_xscale('log')
ax.invert_xaxis()
ax.set_ylabel('power')
plt.grid(True)
plt.savefig(filename)
Explanation: Now that we have the dataframe in this arrangement, we can start plotting it up.
First, a colorful version of the plot:
End of explanation
ax2 = plotDF.plot(legend=False, alpha=.02, color='k', title=f'{target}\n{startDate} through {endDate}')
ax2.set_xscale('log')
ax2.invert_xaxis()
ax2.set_ylabel('power')
plt.grid(True)
plt.savefig(filename2)
Explanation: Then, just for the sake of variety, a black and white version of the version of the plot:
End of explanation |
8,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spark + Python = PySpark
This notebook introduces the basic concepts of Spark through its Python interface. As a first application we will build the classic word count example. With this example it is possible to understand the functional programming logic used for the various distributed data exploration tasks.
To do this we will use the book Complete Works of William Shakespeare obtained from Project Gutenberg. We will see that the same algorithm can be applied to texts of any size.
This notebook contains
Step2: (1b) Plural
Let's create a function that turns a word into its plural by appending the letter 's' to the end of the string. Next we will use the map() function to apply the transformation to each word in the RDD.
In Python (and many other languages) string concatenation is costly. A better alternative is to create a new string using str.format().
Note
Step3: (1c) Applying the function to the RDD
Transform each word of our RDD into its plural using map()
Next, we will use the collect() command, which returns the RDD as a Python list.
Step4: Note
Step5: (1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. Use collect() to store the result as a list in the target variable.
Step6: (1f) Pair RDDs and tuples
To count the frequency of each word in a distributed way, we first have to assign a value to each word in the RDD. This will generate a (key, value) dataset. That way we can group the data by key, computing the sum of the assigned values. In our case, we will assign the value 1 to each word.
An RDD containing the key-value tuple structure (k,v) is called a tuple RDD or pair RDD.
Let's create our pair RDD using the map() transformation with a lambda() function.
Step7: Part 2
Step8: (2b) Computing the counts
After groupByKey() our RDD contains elements composed of the word, as key, and an iterator containing all the values corresponding to that key.
Using the map() transformation and the sum() function, build a new RDD consisting of (key, sum) tuples.
Step9: (2c) reduceByKey
A more interesting command for the counting is reduceByKey(), which creates a new RDD of tuples.
This transformation applies the reduce() transformation seen in the previous class to the values of each key. This way, the transformation function can be applied within each local partition before the data is sent for partition redistribution, reducing the total amount of data being moved and avoiding keeping large lists in memory.
Step10: (2d) Chaining the commands
The most usual way to perform this task, starting from our palavrasRDD, is to chain the map and reduceByKey commands in a single command line.
Step11: Part 3
Step12: (3b) Computing the average word count
Find the average word frequency using the contagem RDD.
Note that the reduce() function is applied to each tuple of the RDD. To sum the counts, it is first necessary to map the RDD to an RDD containing only the frequency values (without the keys).
Step14: Part 4
Step16: (4b) Normalizing the text
When we work with real data, we usually need to standardize the attributes so that subtle differences due to measurement error or differing conventions are ignored. For the next step we will standardize the text to
Step17: (4c) Loading a text file
For the next part we will use the book Complete Works of William Shakespeare from Project Gutenberg.
To convert a text into an RDD, we use the textFile() function, which takes as input the name of the text file we want to use and the number of partitions.
The text file name can refer to a local file or a distributed file URI (e.g.
Step18: (4d) Extracting the words
Before we can use our contaPalavras() function, we still have to work on our RDD
Step19: As you may have noticed, using the map() function generates one list per line, creating an RDD containing a list of lists.
To solve this problem, Spark has an analogous function called flatMap(), which applies the map() transformation but flattens the returned lists into a single one-dimensional list.
Step20: (4e) Removing empty lines
For the next step we will filter out the empty lines with the filter() command. An empty line is a string with no content.
Step21: (4f) Word count
Now that our RDD contains a list of words, we can apply our contaPalavras() function.
Apply the function to our RDD and use the takeOrdered function to print the 15 most frequent words.
takeOrdered() can take a second parameter that tells Spark how to order the elements. E.g.
Step23: Part 5
Step27: (5b) Categorical values
When our objects are represented by categorical attributes, they have no spatial similarity. To compute the similarity between them we can first transform our attribute vector into a binary vector indicating, for each possible value of each attribute, whether the object has that attribute or not.
With the binary vector we can use the Hamming distance defined by | Python Code:
ListaPalavras = ['gato', 'elefante', 'rato', 'rato', 'gato']
palavrasRDD = sc.parallelize(ListaPalavras, 4)
print type(palavrasRDD)
print palavrasRDD.collect()
Explanation: Spark + Python = PySpark
This notebook introduces the basic concepts of Spark through its Python interface. As a first application we will build the classic word count example. With this example it is possible to understand the functional programming logic used for the various distributed data exploration tasks.
To do this we will use the book Complete Works of William Shakespeare obtained from Project Gutenberg. We will see that the same algorithm can be applied to texts of any size.
This notebook contains:
Part 1: Creating a base RDD and pair RDDs
Part 2: Manipulating pair RDDs
Part 3: Finding unique words and computing averages
Part 4: Applying the word count to a file
Part 5: Similarity between objects
For the exercises it is advisable to consult the PySpark API documentation
Part 1: Creating and manipulating RDDs
In this part of the notebook we will create a base RDD from a list with the parallelize command.
(1a) Creating a base RDD
We can create a base RDD from several Python types and sources with the command sc.parallelize(fonte, particoes), where fonte is a variable holding the data (e.g. a list) and particoes is the number of partitions to work on in parallel.
End of explanation
# EXERCICIO
def Plural(palavra):
Adds an 's' to `palavra`.
Args:
palavra (str): A string.
Returns:
str: A string with 's' added to it.
return '{0}s'.format(palavra)
print Plural('gato')
help(Plural)
assert Plural('rato')=='ratos', 'resultado incorreto!'
print 'OK'
Explanation: (1b) Plural
Let's create a function that turns a word into its plural by appending the letter 's' to the end of the string. Next we will use the map() function to apply the transformation to each word in the RDD.
In Python (and many other languages) string concatenation is costly. A better alternative is to create a new string using str.format().
Note: the string between the sets of triple quotes is the function's documentation. This documentation is displayed with the help() command. We will follow the documentation convention suggested for Python and keep this documentation in English.
End of explanation
# EXERCICIO
pluralRDD = palavrasRDD.map(lambda x: x+'s')
print pluralRDD.collect()
assert pluralRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
Explanation: (1c) Applying the function to the RDD
Transform each word of our RDD into its plural using map()
Next, we will use the collect() command, which returns the RDD as a Python list.
End of explanation
# EXERCICIO
pluralLambdaRDD = palavrasRDD.map(lambda x: x+'s')
print pluralLambdaRDD.collect()
assert pluralLambdaRDD.collect()==['gatos','elefantes','ratos','ratos','gatos'], 'valores incorretos!'
print 'OK'
Explanation: Note: use the collect() command only when you are sure the resulting list fits in memory. To write results back to a text file or database we will use a different command.
(1d) Using a lambda function
Repeat the creation of an RDD of plurals, but this time using a lambda function.
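Following up on the note above about collect(): a large RDD would instead be written out in parallel, e.g. with saveAsTextFile (shown commented out; the output path is illustrative and must not already exist):
# pluralRDD.saveAsTextFile('output/plural_words')   # writes one part-xxxxx file per partition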
End of explanation
# EXERCICIO
pluralTamanho = (pluralRDD
.map(lambda x: len(x))
.collect()
)
print pluralTamanho
assert pluralTamanho==[5,9,5,5,5], 'valores incorretos'
print "OK"
Explanation: (1e) Length of each word
Now use map() and a lambda function to return the number of characters in each word. Use collect() to store the result as a list in the target variable.
End of explanation
# EXERCICIO
palavraPar = palavrasRDD.map(lambda x: (x, 1))
print palavraPar.collect()
assert palavraPar.collect() == [('gato',1),('elefante',1),('rato',1),('rato',1),('gato',1)], 'valores incorretos!'
print "OK"
Explanation: (1f) Pair RDDs and tuples
To count the frequency of each word in a distributed way, we first have to assign a value to each word in the RDD. This will generate a (key, value) dataset. That way we can group the data by key, computing the sum of the assigned values. In our case, we will assign the value 1 to each word.
An RDD containing the key-value tuple structure (k,v) is called a tuple RDD or pair RDD.
Let's create our pair RDD using the map() transformation with a lambda() function.
End of explanation
# EXERCICIO
palavrasGrupo = palavraPar.groupByKey()
for chave, valor in palavrasGrupo.collect():
print '{0}: {1}'.format(chave, list(valor))
print palavrasGrupo.mapValues(lambda x: list(x)).collect()
assert sorted(palavrasGrupo.mapValues(lambda x: list(x)).collect()) == [('elefante', [1]), ('gato', [1, 1]), ('rato', [1, 1])], 'Valores incorretos!'
print "OK"
Explanation: Part 2: Manipulating pair RDDs
Let's manipulate our RDD to count the words of the text.
(2a) The groupByKey() function
The groupByKey() function groups all the values of an RDD by key (the first element of the tuple), aggregating the values into a list.
This approach has a weak point because:
The operation requires the distributed data to be shuffled in bulk so that each record ends up in the correct partition.
The lists can become very large. Imagine counting every word of Wikipedia: common terms such as "a" and "the" will form a huge list of values that may not fit in the worker process's memory.
End of explanation
# EXERCICIO
contagemGroup = palavrasGrupo.mapValues(lambda x: sum(list(x)))
print contagemGroup.collect()
assert sorted(contagemGroup.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: (2b) Computing the counts
After groupByKey() our RDD contains elements composed of the word, as key, and an iterator containing all the values corresponding to that key.
Using the map() transformation and the sum() function, build a new RDD consisting of (key, sum) tuples.
End of explanation
# EXERCICIO
contagem = palavraPar.reduceByKey(lambda a, b : a+b)
print contagem.collect()
assert sorted(contagem.collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: (2c) reduceByKey
A more interesting command for the counting is reduceByKey(), which creates a new RDD of tuples.
This transformation applies the reduce() transformation seen in the previous class to the values of each key. This way, the transformation function can be applied within each local partition before the data is sent for partition redistribution, reducing the total amount of data being moved and avoiding keeping large lists in memory.
End of explanation
# EXERCICIO
contagemFinal = (palavrasRDD
.map(lambda x : (x, 1))
.reduceByKey(lambda a, b: a+b)
)
print contagemFinal.collect()
assert sorted(contagemFinal.collect())==[('elefante', 1), ('gato', 2), ('rato', 2)], 'valores incorretos!'
print "OK"
Explanation: (2d) Chaining the commands
The most usual way to perform this task, starting from our palavrasRDD, is to chain the map and reduceByKey commands in a single command line.
End of explanation
# EXERCICIO
palavrasUnicas = (palavrasRDD
.map(lambda x : (x, 1))
.reduceByKey(lambda a, b: a+b)
).count()
print palavrasUnicas
assert palavrasUnicas==3, 'valor incorreto!'
print "OK"
print contagemFinal.collect()
Explanation: Part 3: Finding the unique words and computing the average count
(3a) Unique words
Compute the number of unique words in the RDD. Use RDD commands from the PySpark API and any of the RDDs generated in the previous exercises.
End of explanation
# EXERCICIO
# add é equivalente a lambda x,y: x+y
from operator import add
total = (contagemFinal
.map(lambda (x,y) : y)
.reduce(add)
)
media = total / float(palavrasUnicas)
print total
print round(media, 2)
assert round(media, 2)==1.67, 'valores incorretos!'
print "OK"
palavrasRDD.collect()
Explanation: (3b) Computing the average word count
Find the average word frequency using the contagem RDD.
Note that the reduce() function is applied to each tuple of the RDD. To sum the counts, it is first necessary to map the RDD to an RDD containing only the frequency values (without the keys).
End of explanation
# EXERCICIO
def contaPalavras(chavesRDD):
Creates a pair RDD with word counts from an RDD of words.
Args:
chavesRDD (RDD of str): An RDD consisting of words.
Returns:
RDD of (str, int): An RDD consisting of (word, count) tuples.
return (chavesRDD
.map(lambda x: (x, 1))
.reduceByKey(lambda x,y: x+y)
)
print contaPalavras(palavrasRDD).collect()
assert sorted(contaPalavras(palavrasRDD).collect())==[('elefante',1), ('gato',2), ('rato',2)], 'valores incorretos!'
print "OK"
Explanation: Part 4: Applying our algorithm to a file
(4a) The contaPalavras function
So that we can apply our algorithm generically to several RDDs, let's first create a function that applies it to any data source. This function takes as input an RDD containing a list of keys (words) and returns an RDD of tuples with the keys and their counts in that RDD.
End of explanation
# EXERCICIO
import re
def removerPontuacao(texto):
Removes punctuation, changes to lower case, and strips leading and trailing spaces.
Note:
Only spaces, letters, and numbers should be retained. Other characters should be
eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after
punctuation is removed.
Args:
texto (str): A string.
Returns:
str: The cleaned up string.
return re.sub(r'[^A-Za-z0-9 ]', '', texto).strip().lower()
print removerPontuacao('Ola, quem esta ai??!')
print removerPontuacao(' Sem espaco e_sublinhado!')
assert removerPontuacao(' O uso de virgulas, embora permitido, nao deve contar. ')=='o uso de virgulas embora permitido nao deve contar', 'string incorreta!'
print "OK"
Explanation: (4b) Normalizing the text
When we work with real data, we usually need to standardize the attributes so that subtle differences due to measurement error or differing conventions are ignored. For the next step we will standardize the text to:
Standardize the capitalization of the words (all upper case or all lower case).
Remove punctuation.
Remove leading and trailing spaces from each word.
Create a removerPontuacao function that converts all the text to lower case, removes any punctuation and strips leading or trailing whitespace. To do this, use the re library to remove every character that is not a letter, a number or a space, chaining it with the string functions to strip whitespace and convert to lower case (see Strings).
End of explanation
# Apenas execute a célula
import os.path
import urllib
url = 'http://www.gutenberg.org/cache/epub/100/pg100.txt' # url do livro
arquivo = os.path.join('Data','Aula02','pg100.txt') # local de destino: 'Data/Aula02/shakespeare.txt'
if os.path.isfile(arquivo): # verifica se já fizemos download do arquivo
print 'Arquivo já existe!'
else:
try:
urllib.urlretrieve(url, arquivo) # salva conteúdo da url em arquivo
except IOError:
print 'Impossível fazer o download: {0}'.format(url)
# lê o arquivo com textFile e aplica a função removerPontuacao
shakesRDD = (sc
.textFile(arquivo, 8)
.map(removerPontuacao)
)
# zipWithIndex gera tuplas (conteudo, indice) onde indice é a posição do conteudo na lista sequencial
# Ex.: sc.parallelize(['gato','cachorro','boi']).zipWithIndex() ==> [('gato',0), ('cachorro',1), ('boi',2)]
# sep.join() junta as strings de uma lista através do separador sep. Ex.: ','.join(['a','b','c']) ==> 'a,b,c'
print '\n'.join(shakesRDD
.zipWithIndex()
.map(lambda (linha, num): '{0}: {1}'.format(num,linha))
.take(15)
)
Explanation: (4c) Loading a text file
For the next part we will use the book Complete Works of William Shakespeare from Project Gutenberg.
To convert a text into an RDD, we use the textFile() function, which takes as input the name of the text file we want to use and the number of partitions.
The text file name can refer to a local file or a distributed file URI (e.g. hdfs://).
We will also apply the removerPontuacao() function to normalize the text and check the first 15 lines with the take() command.
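For example, the same call against a distributed file system would only change the URI (shown commented out; the HDFS host and path are purely illustrative):
# shakesHdfsRDD = sc.textFile('hdfs://namenode:8020/data/pg100.txt', 8).map(removerPontuacao)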
End of explanation
# EXERCICIO
shakesPalavrasRDD = shakesRDD.map(lambda x: x.split())
total = shakesPalavrasRDD.count()
print shakesPalavrasRDD.take(5)
print total
Explanation: (4d) Extracting the words
Before we can use our contaPalavras() function, we still have to work on our RDD:
We need to generate lists of words instead of lists of sentences.
Eliminate empty lines.
Python strings have the split() method, which splits a string by a separator. In our case, we want to split the strings by spaces.
Use the map() function to generate a new RDD as a list of words.
End of explanation
# EXERCICIO
shakesPalavrasRDD = shakesRDD.flatMap(lambda x: x.split())
total = shakesPalavrasRDD.count()
print shakesPalavrasRDD.top(5)
print total
assert total==927631 or total == 928908, "valor incorreto de palavras!"
print "OK"
assert shakesPalavrasRDD.top(5)==[u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],'lista incorreta de palavras'
print "OK"
Explanation: As you may have noticed, using the map() function generates one list per line, creating an RDD containing a list of lists.
To solve this problem, Spark has an analogous function called flatMap(), which applies the map() transformation but flattens the returned lists into a single one-dimensional list.
End of explanation
# EXERCICIO
shakesLimpoRDD = shakesPalavrasRDD.filter(lambda x: len(x) > 0)
total = shakesLimpoRDD.count()
print total
assert total==882996, 'valor incorreto!'
print "OK"
Explanation: (4e) Removing empty lines
For the next step we will filter out the empty lines with the filter() command. An empty line is a string with no content.
End of explanation
# EXERCICIO
top15 = contaPalavras(shakesLimpoRDD).takeOrdered(15, key=lambda x: -x[1])
print '\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15))
print top15
assert top15 == [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),
(u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),
(u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],'valores incorretos!'
print "OK"
Explanation: (4f) Word count
Now that our RDD contains a list of words, we can apply our contaPalavras() function.
Apply the function to our RDD and use the takeOrdered function to print the 15 most frequent words.
takeOrdered() can take a second parameter that tells Spark how to order the elements. E.g.:
takeOrdered(15, key=lambda x: -x): descending order of the values of x
End of explanation
import numpy as np
# Vamos criar uma função pNorm que recebe como parâmetro p e retorna uma função que calcula a pNorma
def pNorm(p):
Generates a function to calculate the p-Norm between two points.
Args:
p (int): The integer p.
Returns:
Dist: A function that calculates the p-Norm.
def Dist(x,y):
return np.power(np.power(np.abs(x-y),p).sum(),1/float(p))
return Dist
# Vamos criar uma RDD com valores numéricos
numPointsRDD = sc.parallelize(enumerate(np.random.random(size=(10,100))))
# EXERCICIO
# Procure dentre os comandos do PySpark, um que consiga fazer o produto cartesiano da base com ela mesma
cartPointsRDD = numPointsRDD.cartesian(numPointsRDD)
# Aplique um mapa para transformar nossa RDD em uma RDD de tuplas ((id1,id2), (vetor1,vetor2))
# DICA: primeiro utilize o comando take(1) e imprima o resultado para verificar o formato atual da RDD
cartPointsParesRDD = cartPointsRDD.map(lambda vetor: ((vetor[0][0],vetor[1][0]), (vetor[0][1],vetor[1][1])))
#print cartPointsParesRDD.take(1)
# Aplique um mapa para calcular a Distância Euclidiana entre os pares
Euclid = pNorm(2)
distRDD = cartPointsParesRDD.map(lambda vetor: (vetor[0], Euclid(vetor[1][0],vetor[1][1])))
# Encontre a distância máxima, mínima e média, aplicando um mapa que transforma (chave,valor) --> valor
# e utilizando os comandos internos do pyspark para o cálculo da min, max, mean
statRDD = distRDD.map(lambda vetor: vetor[1])
minv, maxv, meanv = statRDD.min(), statRDD.max(), statRDD.mean()
print minv.round(2), maxv.round(2), meanv.round(2)
assert (minv.round(2), maxv.round(2), meanv.round(2))==(0.0, 4.70, 3.65), 'Valores incorretos'
print "OK"
Explanation: Part 5: Similarity between objects
In this part of the lab we will learn how to compute the distance between numerical, categorical and textual attributes.
(5a) Vectors in Euclidean space
When our objects are represented in Euclidean space, we measure the similarity between them through the p-norm defined by:
$$d(x,y,p) = (\sum_{i=1}^{n}{|x_i - y_i|^p})^{1/p}$$
The most commonly used norms are $p=1,2,\infty$, which reduce to the absolute distance, the Euclidean distance and the maximum distance:
$$d(x,y,1) = \sum_{i=1}^{n}{|x_i - y_i|}$$
$$d(x,y,2) = (\sum_{i=1}^{n}{|x_i - y_i|^2})^{1/2}$$
$$d(x,y,\infty) = \max(|x_1 - y_1|,|x_2 - y_2|, ..., |x_n - y_n|)$$
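A quick local sanity check of the three special cases (plain NumPy, no Spark needed):
import numpy as np
x, y = np.array([1., 2., 3.]), np.array([2., 4., 6.])
print np.abs(x - y).sum()              # p = 1        -> 6.0
print np.sqrt(((x - y) ** 2).sum())    # p = 2        -> ~3.74
print np.abs(x - y).max()              # p = infinity -> 3.0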
End of explanation
# Vamos criar uma função para calcular a distância de Hamming
def Hamming(x,y):
Calculates the Hamming distance between two binary vectors.
Args:
x, y (np.array): Array of binary integers x and y.
Returns:
H (int): The Hamming distance between x and y.
return (x!=y).sum()
# Vamos criar uma função para calcular a distância de Jaccard
def Jaccard(x,y):
Calculates the Jaccard distance between two binary vectors.
Args:
x, y (np.array): Array of binary integers x and y.
Returns:
J (int): The Jaccard distance between x and y.
return (x==y).sum()/float( np.maximum(x,y).sum() )
# Vamos criar uma RDD com valores categóricos
catPointsRDD = sc.parallelize(enumerate([['alto', 'caro', 'azul'],
['medio', 'caro', 'verde'],
['alto', 'barato', 'azul'],
['medio', 'caro', 'vermelho'],
['baixo', 'barato', 'verde'],
]))
#print catPointsRDD.collect()
#print catPointsRDD.flatMap(lambda x: map(lambda z: (z, 1), x[1])).collect()
#print catPointsRDD.flatMap(lambda x: map(lambda z: (z, 1), x[1])).reduceByKey(lambda x,z: x).collect()
print catPointsRDD.flatMap(lambda x: map(lambda z: (z, 1), x[1])).reduceByKey(lambda x,z: x).map(lambda x: x[0]).collect()
# EXERCICIO
# Crie um RDD de chaves únicas utilizando flatMap
#print catPointsRDD.collect()
#print catPointsRDD.flatMap(lambda x: map(lambda z: (z, 1), x[1])).collect()
#print catPointsRDD.flatMap(lambda x: map(lambda z: (z, 1), x[1])).reduceByKey(lambda x,z: x).collect()
#print catPointsRDD.flatMap(lambda x: map(lambda z: (z, 1), x[1])).reduceByKey(lambda x,z: x).map(lambda x: x[0]).collect()
chavesRDD = (catPointsRDD
.flatMap(lambda x: map(lambda z: (z, 1), x[1]))
.reduceByKey(lambda x, y: x)
.map(lambda x: x[0])
)
chaves = dict((v,k) for k,v in enumerate(chavesRDD.collect()))
nchaves = len(chaves)
print chaves, nchaves
assert chaves=={'alto': 0, 'medio': 7, 'baixo': 5, 'barato': 2, 'azul': 4, 'verde': 6, 'caro': 3, 'vermelho': 1}, 'valores incorretos!'
print "OK"
assert nchaves==8, 'número de chaves incorreta'
print "OK"
def CreateNP(atributos,chaves):
Binarize the categorical vector using a dictionary of keys.
Args:
atributos (list): List of attributes of a given object.
chaves (dict): dictionary with the relation attribute -> index
Returns:
array (np.array): Binary array of attributes.
array = np.zeros(len(chaves))
for atr in atributos:
array[ chaves[atr] ] = 1
return array
# Converte o RDD para o formato binário, utilizando o dict chaves
#print catPointsRDD.collect()
binRDD = catPointsRDD.map(lambda rec: (rec[0],CreateNP(rec[1], chaves)))
binRDD.collect()
# EXERCICIO
# Procure dentre os comandos do PySpark, um que consiga fazer o produto cartesiano da base com ela mesma
cartBinRDD = binRDD.cartesian(binRDD)
# Aplique um mapa para transformar nossa RDD em uma RDD de tuplas ((id1,id2), (vetor1,vetor2))
# DICA: primeiro utilize o comando take(1) e imprima o resultado para verificar o formato atual da RDD
cartBinParesRDD = cartBinRDD.map(lambda matrix: ((matrix[0][0], matrix[1][0]), (matrix[0][1], matrix[1][1])))
# Aplique um mapa para calcular a Distância de Hamming e Jaccard entre os pares
hamRDD = cartBinParesRDD.map(lambda matrix: (matrix[0], Hamming(matrix[1][0],matrix[1][1])))
jacRDD = cartBinParesRDD.map(lambda matrix: (matrix[0], Jaccard(matrix[1][0],matrix[1][1])))
# Encontre a distância máxima, mínima e média, aplicando um mapa que transforma (chave,valor) --> valor
# e utilizando os comandos internos do pyspark para o cálculo da min, max, mean
statHRDD = hamRDD.map(lambda matrix: matrix[1])
statJRDD = jacRDD.map(lambda matrix: matrix[1])
Hmin, Hmax, Hmean = statHRDD.min(), statHRDD.max(), statHRDD.mean()
Jmin, Jmax, Jmean = statJRDD.min(), statJRDD.max(), statJRDD.mean()
print "\t\tMin\tMax\tMean"
print "Hamming:\t{:.2f}\t{:.2f}\t{:.2f}".format(Hmin, Hmax, Hmean )
print "Jaccard:\t{:.2f}\t{:.2f}\t{:.2f}".format( Jmin, Jmax, Jmean )
assert (Hmin.round(2), Hmax.round(2), Hmean.round(2)) == (0.00,6.00,3.52), 'valores incorretos'
print "OK"
assert (Jmin.round(2), Jmax.round(2), Jmean.round(2)) == (0.33,2.67,1.14), 'valores incorretos'
print "OK"
Explanation: (5b) Categorical values
When our objects are represented by categorical attributes, they have no spatial similarity. To compute the similarity between them we can first transform our attribute vector into a binary vector indicating, for each possible value of each attribute, whether the object has that attribute or not.
With the binary vector we can use the Hamming distance, defined by:
$$ H(x,y) = \sum_{i=1}^{n}{[x_i \neq y_i]} $$
It is also possible to define the Jaccard distance as:
$$ J(x,y) = \frac{\sum_{i=1}^{n}{[x_i = y_i]}}{\sum_{i=1}^{n}{\max(x_i, y_i)}} $$
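A small numeric example of both definitions on two binary vectors (local sketch):
import numpy as np
a, b = np.array([1, 0, 1, 0]), np.array([1, 1, 0, 0])
print (a != b).sum()                                    # Hamming distance -> 2
print (a == b).sum() / float(np.maximum(a, b).sum())    # Jaccard as defined above -> 0.67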
End of explanation |
8,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probability tutorial
Problems by Peter Komar
18 Jul 2016
Sample problems from Peter Komar; after trying to analytically solve everything, Monte Carlo and see if I'm right.
Step1: Forward probability
Question 1
Q1
Step2: Q2
Step3: Question 2
Q1
Step4: Q2
Step5: Question 3
Q
Step6: Question 4
Q1
Step7: Q2
Step8: Question 5
Q
Step9: Question 6
Q
Step10: Question 7
Q
Step11: Question 8
Q1
Step12: Q2
Step13: Q3 | Python Code:
def compare(analytic,N,f):
errval = err(f,N)
successes = sum(f)
print "Analytic prediction: {:.0f}%.".format(analytic*100.)
print "Monte Carlo: {:.0f} +- {:.0f}%.".format(successes/float(N)*100.,errval*100.)
def err(fx,N):
# http://www.northeastern.edu/afeiguin/phys5870/phys5870/node71.html
f2 = [x*x for x in fx]
return np.sqrt((1./N * sum(f2) - (1./N * sum(fx))**2)/float(N))
Explanation: Probability tutorial
Problems by Peter Komar
18 Jul 2016
Sample problems from Peter Komar; after trying to analytically solve everything, Monte Carlo and see if I'm right.
End of explanation
import numpy as np
from numpy.random import binomial
# Default is 1000 trials each
N = 1000
p_rain_sat = 0.5
p_rain_sun = 0.2
p_light_sat = 0.9
p_heavy_sat = 0.1
p_light_sun = 1.0
p_heavy_sun = 0.0
f = []
for i in range(N):
# Light rain on Saturday?
rain_sat = binomial(1,p_rain_sat)
if rain_sat:
light_sat = binomial(1,p_light_sat)
else:
light_sat = 0
# Light rain on Sunday?
rain_sun = binomial(1,p_rain_sun)
if rain_sun:
light_sun = binomial(1,p_light_sun)
else:
light_sun = 0
if light_sat and light_sun:
f.append(1)
else:
f.append(0)
compare(9/100.,N,f)
Explanation: Forward probability
Question 1
Q1: What is the probability of light rain on both days?
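For reference, the analytic value passed to compare above follows directly from the parameters defined in the code: $P = P(\text{rain Sat})\cdot P(\text{light}\mid\text{rain Sat})\cdot P(\text{rain Sun})\cdot P(\text{light}\mid\text{rain Sun}) = 0.5 \cdot 0.9 \cdot 0.2 \cdot 1.0 = 0.09$.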
End of explanation
f = []
for i in range(N):
# Light rain on either day?
rain_sat = binomial(1,p_rain_sat)
rain_sun = binomial(1,p_rain_sun)
if rain_sat or rain_sun:
f.append(1)
else:
f.append(0)
compare(60/100.,N,f)
Explanation: Q2: What is the probability of rain during the weekend?
End of explanation
from random import randint
f = []
for i in range(N):
# Draw candy from bag 1
r1 = randint(0,6)
if r1 < 3:
candy1 = "taffy"
else:
candy1 = "caramel"
# Draw candy from bag 2
r2 = randint(0,5)
if r2 == 0:
candy2 = "taffy"
else:
candy2 = "caramel"
if candy1 is not candy2:
f.append(1)
else:
f.append(0)
compare(19/42.,N,f)
Explanation: Question 2
Q1: With what probability are the two drawn pieces of candy different?
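Reading the bag contents off the sampling code above (bag 1: 3 taffy, 4 caramel; bag 2: 1 taffy, 5 caramel), the analytic value is $P = \frac{3}{7}\cdot\frac{5}{6} + \frac{4}{7}\cdot\frac{1}{6} = \frac{19}{42}$.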
End of explanation
f = []
for i in range(N):
# Choose the bag
bag = binomial(1,0.5)
if bag:
# Bag 1
# First draw
r1 = randint(0,6)
if r1 < 3:
candy1 = "taffy"
else:
candy1 = "caramel"
# Second draw
r2 = randint(0,5)
if candy1 is "taffy":
if r2 < 2:
candy2 = "taffy"
else:
candy2 = "caramel"
else:
if r2 < 3:
candy2 = "taffy"
else:
candy2 = "caramel"
else:
# Bag 2
# First draw
r1 = randint(0,5)
if r1 < 2:
candy1 = "taffy"
else:
candy1 = "caramel"
# Second draw
r2 = randint(0,4)
if candy1 is "caramel":
if r2 < 4:
candy2 = "caramel"
else:
candy2 = "taffy"
else:
candy2 = "caramel"
if candy1 is not candy2:
f.append(1)
else:
f.append(0)
compare(23/42.,N,f)
Explanation: Q2: With what probability are the two drawn pieces of candy different if they are drawn from the same (but randomly chosen) bag?
End of explanation
p_H = 0.5
f = []
for i in range(N):
# Flip coin 1
c1 = binomial(1,p_H)
# Flip coin 2
c2 = binomial(1,p_H)
# Flip coin 3
c3 = binomial(1,p_H)
total_heads = c1 + c2 + c3
# Three heads
if total_heads == 3:
reward = 100
if total_heads == 2:
reward = 40
if total_heads == 1:
reward = 0
if total_heads == 0:
reward = -200
f.append(reward)
print "Analytic: {:.2f} +- {:.0f}".format(20/8.,82)
print "Monte Carlo: {:.2f} +- {:.0f}".format(np.mean(f),np.std(f))
Explanation: Question 3
Q: What is the expectation value and standard deviation of the reward?
End of explanation
n = 10
f = []
for i in range(N):
line = range(n)
np.random.shuffle(line)
# Assume Potter, Granger, Weasley correspond to 0, 1, and 2
indices = [line.index(person) for person in (0,1,2)]
if max(indices) - min(indices) == 2:
f.append(1)
compare(1/15.,N,f)
Explanation: Question 4
Q1: What is the probability that Potter, Granger, and Weasley are standing next to each other?
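As a worked check of the analytic value: treating the block of three as one unit gives $8 \cdot 3! \cdot 7!$ favorable orderings out of $10!$, i.e. $\frac{8 \cdot 6}{10 \cdot 9 \cdot 8} = \frac{1}{15}$.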
End of explanation
f = []
for i in range(N):
line = range(n)
np.random.shuffle(line)
# Assume Potter, Granger, Weasley correspond to 0, 1, and 2
indices = [line.index(person) for person in (0,1,2)]
if max(indices) - min(indices) == 2:
f.append(1)
else:
# Shift line halfway around and check again
line = list(np.roll(line,n//2))
indices = [line.index(person) for person in (0,1,2)]
if max(indices) - min(indices) == 2:
f.append(1)
compare(1/12.,N,f)
Explanation: Q2: What is the probability that Potter, Granger, and Weasley are standing next to each other if the line is a circle?
End of explanation
f = []
for i in range(N):
guys = ['a','b','c','d','e']
gals = ['alpha','beta','gamma','delta','epsilon']
np.random.shuffle(guys)
np.random.shuffle(gals)
if guys.index('c') == gals.index('gamma'):
f.append(1)
compare(1./5,N,f)
Explanation: Question 5
Q: What is the probability that c dances with gamma?
End of explanation
f = []
for i in range(N):
fellows = range(21)
np.random.shuffle(fellows)
# Derrick = 0, Gaurav = 1
group_derrick = fellows.index(0)//7
group_gaurav = fellows.index(1)//7
if group_derrick == group_gaurav:
f.append(1)
compare(0.30,N,f)
Explanation: Question 6
Q: What is the probability that Derrick and Gaurav end up in the same group?
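As a worked check: condition on Derrick's seat; Gaurav then lands uniformly in one of the remaining 20 seats, 6 of which are in Derrick's group of 7, giving $6/20 = 0.30$.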
End of explanation
f = []
for i in range(N):
a,b,c,d = 0,0,0,0
for candy in range(10):
selection = randint(0,3)
if selection == 0:
a += 1
if selection == 1:
b += 1
if selection == 2:
c += 1
if selection == 3:
d += 1
if a == 0:
f.append(1)
compare(0.75**10,N,f)
Explanation: Question 7
Q: What is the probability that stocking A gets no candy?
End of explanation
n = 20
f = []
for i in range(N):
throws = np.random.randint(1,11,n)
counts = np.bincount(throws)
if counts[1] == 2:
f.append(1)
analytic = 10**(np.log10(190) + 18*np.log10(9) - 20)
compare(analytic,N,f)
Explanation: Question 8
Q1: What is the probability that we get two 1s in the first twenty throws?
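As a worked check, the analytic expression in the code is the binomial probability of exactly two 1s on a 10-sided die: $\binom{20}{2}\left(\tfrac{1}{10}\right)^2\left(\tfrac{9}{10}\right)^{18} = 190 \cdot 9^{18}/10^{20}$, which is what the log10 form computes.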
End of explanation
n = 10
f = []
for i in range(N):
throws = np.random.randint(1,11,n)
counts = np.bincount(throws)
if counts[1] == 1 and throws[-1] == 1:
f.append(1)
analytic = 0.9**9 * 0.1
compare(analytic,N,f)
Explanation: Q2: What is the probability that we get the first 1 in the tenth throw?
End of explanation
n = 30
f = []
for i in range(N):
throws = np.random.randint(1,11,n)
counts = np.bincount(throws)
if counts[1] == 3 and throws[-1] == 1:
f.append(1)
analytic = (29*28/2. * 0.9**27 * 0.1**2) * 0.1
compare(analytic,N,f)
Explanation: Q3: What is the probability that we get the third 1 on the thirtieth throw?
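As a worked check, this is a negative-binomial event: exactly two 1s somewhere in the first 29 throws and a 1 on the 30th throw, i.e. $\binom{29}{2}(0.1)^2(0.9)^{27} \cdot 0.1$, matching the analytic expression in the code.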
End of explanation |
8,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Keras Tutorial
http
Step1: 2. Building DFNs with Keras
Reshaping MNIST data
Step2: 3. Building CNNs with Keras
Reshaping MNIST data
Step3: Compiling and fitting the CNN
Step4: Comparing results
Step5: Let's look at some misclassified examples | Python Code:
import util
import numpy as np
import keras
from keras.utils import np_utils
X_train, y_train, X_test, y_test = util.load_mnist_dataset()
y_train_labels = np.array(util.get_label_names(y_train))
# Convert labels to one-hot encoding for training
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
# Show a few example images
examples = np.random.randint(0, X_train.shape[0] - 9, 9)
image_shape = (X_train.shape[2], X_train.shape[3])
util.plot9images(X_train[examples], y_train_labels[examples], image_shape)
Explanation: Keras Tutorial
http://keras.io
This tutorial is a simplified version of the tutorial available at: https://github.com/MLIME/Frameworks/tree/master/Keras
What is Keras?
Keras is a high-level neural networks API, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
This tutorial is divided into three parts
Basic Keras workflow
Deep Feedforward Network example
Convolutional Neural Network example
1. Basic Keras workflow (a short sketch combining these building blocks follows the lists below)
Backends
Theano or TensorFlow (CPU or GPU)
Layer types
Core layers: Dense, Activation, Dropout, Flatten
Convolutional layers: ConvXD, CroppingXD, UpSamplingXD
Pooling Layers: MaxPoolingXD, AveragePoolingXD
Custom layers can be created
Loss functions
categorical_crossentropy
sparse_categorical_crossentropy
binary_crossentropy
mean_squared_error
mean_absolute_error
Optimizers
SGD
RMSprop
Adagrad
Adadelta
Adam
Adamax
Activations
softmax
elu
relu
tanh
sigmoid
hard_sigmoid
linear
Initializers
Zeros
RandomNormal
RandomUniform
TruncatedNormal
VarianceScaling
Orthogonal
Identity
lecun_uniform
glorot_normal
glorot_uniform
he_normal
he_uniform
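As a quick sketch of how these building blocks plug together (the input size of 100 features below is arbitrary and not part of this tutorial's data), a layer can take an explicit initializer and compile can take a configured optimizer, loss and metrics:
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import RMSprop
sketch = Sequential()
# hypothetical input size; only here to illustrate the API
sketch.add(Dense(64, input_shape=(100,), activation='relu', kernel_initializer='he_uniform'))
sketch.add(Dense(10, activation='softmax'))
# loss, optimizer (with a custom learning rate) and metrics are chosen at compile time
sketch.compile(loss='categorical_crossentropy', optimizer=RMSprop(lr=0.001), metrics=['accuracy'])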
Initialization
We import the libraries and load the data
End of explanation
# Flatten each image into a vector
X_train = X_train.reshape(X_train.shape[0], np.prod(X_train.shape[1:]))
X_test = X_test.reshape(X_test.shape[0], np.prod(X_test.shape[1:]))
# Sequential is the API that lets us build a model by adding layers incrementally
from keras.models import Sequential
from keras.layers import Dense, Flatten
from keras.optimizers import SGD
DFN = Sequential()
DFN.add(Dense(128, input_shape=(28*28,), activation='relu'))
DFN.add(Dense(128, activation='relu'))
DFN.add(Dense(128, activation='relu'))
DFN.add(Dense(10, activation='softmax'))
#optim = SGD(lr=0.01) - you can construct the optimizer explicitly to set its parameters
DFN.compile(loss='categorical_crossentropy',
optimizer='sgd', # or use the default parameters
metrics=['accuracy'])
DFN.fit(X_train, y_train, batch_size=32, epochs=2,
validation_split=0.2,
verbose=1)
print('\nAccuracy: %.2f' % DFN.evaluate(X_test, y_test, verbose=1)[1])
Explanation: 2. Building DFNs with Keras
Reshaping MNIST data
End of explanation
X_train = X_train.reshape(X_train.shape[0], 28, 28, 1)
X_test = X_test.reshape(X_test.shape[0], 28, 28, 1)
Explanation: 3. Building CNNs with Keras
Reshaping MNIST data
End of explanation
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import MaxPooling2D
from keras.layers.convolutional import Conv2D
CNN = Sequential()
CNN.add(Conv2D(32, (3, 3), padding='same', activation='relu',
input_shape=(28, 28, 1),))
CNN.add(MaxPooling2D(pool_size=(2, 2)))
CNN.add(Conv2D(32, (3, 3), padding='same', activation='relu'))
CNN.add(MaxPooling2D(pool_size=(2, 2)))
CNN.add(Dropout(0.25))
CNN.add(Flatten())
CNN.add(Dense(256, activation='relu'))
CNN.add(Dropout(0.5))
CNN.add(Dense(10, activation='softmax'))
CNN.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
CNN.fit(X_train, y_train, batch_size=32, epochs=2,
validation_split=0.2,
verbose=1)
print('\nAccuracy: %.2f' % CNN.evaluate(X_test, y_test, verbose=1)[1])
Explanation: Compiling and fitting the CNN
End of explanation
cnn_pred = CNN.predict(X_test, verbose=1)
dfn_pred = DFN.predict(X_test.reshape((X_test.shape[0], np.prod(X_test.shape[1:]))), verbose=1)
cnn_pred = np.array(list(map(np.argmax, cnn_pred)))
dfn_pred = np.array(list(map(np.argmax, dfn_pred)))
y_pred = np.array(list(map(np.argmax, y_test)))
util.plotconfusion(util.get_label_names(y_pred), util.get_label_names(dfn_pred))
util.plotconfusion(util.get_label_names(y_pred), util.get_label_names(cnn_pred))
Explanation: Comparing results:
End of explanation
cnn_missed = cnn_pred != y_pred
dfn_missed = dfn_pred != y_pred
cnn_and_dfn_missed = np.logical_and(dfn_missed, cnn_missed)
util.plot_missed_examples(X_test, y_pred, dfn_missed, dfn_pred)
util.plot_missed_examples(X_test, y_pred, cnn_missed, cnn_pred)
util.plot_missed_examples(X_test, y_pred, cnn_and_dfn_missed)
Explanation: Let's look at some misclassified examples:
End of explanation |
8,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression simulation
Step 1
Step1: Step 4/5/6 part A
Step2: Now graphing this data
Step3: Step 4/5/6 part b
Step4: Now graphing it
Step5: Step 7
Step6: Tune parameters for RF and KNN
Step7: Conclude that for KNN, optimal number of neighbors is around 29. For random forest, optimal depth 10, features 2.
Retry KNN and RF regressions with new params | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import urllib2
from __future__ import division
np.random.seed(1)
url = ('https://raw.githubusercontent.com/Upward-Spiral-Science'
'/data/master/syn-density/output.csv')
data = urllib2.urlopen(url)
csv = np.genfromtxt(data, delimiter=",")[1:] # don't want first row (labels)
# chopping data based on thresholds on x and y coordinates
x_bounds = (409, 3529)
y_bounds = (1564, 3124)
def check_in_bounds(row, x_bounds, y_bounds):
if row[0] < x_bounds[0] or row[0] > x_bounds[1]:
return False
if row[1] < y_bounds[0] or row[1] > y_bounds[1]:
return False
if row[3] == 0:
return False
return True
indices_in_bound, = np.where(np.apply_along_axis(check_in_bounds, 1, csv, x_bounds, y_bounds))
data_thresholded = csv[indices_in_bound]
n = data_thresholded.shape[0]
data = data_thresholded
data[:, -2] = data[:,-1]/data[:,-2]
data = data[:, :-1]
print data[:, -1]
# normalize density so they're not super tiny values
data[:, -1] -= np.average(data[:, -1])
data[:, -1] /= np.std(data[:, -1])
print data[:, -1]
Explanation: Regression simulation
Step 1: Assumptions
Assume that synaptic density (synapses/unmasked), Y, follows some joint distribution $F_{Y \mid X}$, where $X$ is the set of data points: vectors in $\mathbb{R}^3$ whose elements are the x, y, z coordinates given by the data.
Step 2: Define model
Let the true values of density correspond to the set Y, and let the joint distribution be parameterized by $\theta$. So for each $x_i \in X$ and $y_i \in Y$, $F(x_i;\theta)=y_i$.
We want to find parameters $\hat \theta$ such that we minimize a loss function $l(\hat y, y)$, where $\hat y = F(x;\hat \theta)$.
Step 3: Algorithms
Linear Regression
Support Vector Regression (SVR)
K-Nearest Neighbor Regression (KNN)
Random Forest Regression (RF)
Polynomial Regression
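Throughout, model quality is scored with 10-fold cross-validated $R^2$, $R^2 = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar y)^2}$, so 1 is a perfect fit and values near zero (or negative) indicate essentially no predictive power.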
Setup
End of explanation
mins = [np.min(csv[:,i]) for i in xrange(4)]
maxs = [np.max(csv[:,i]) for i in xrange(4)]
domains = zip(mins, maxs)
# sample sizes
S = np.logspace(2.0, 4.0, num=20, base=10.0, dtype='int')
null_X = np.array([[np.random.randint(*domains[i]) for i in xrange(3)]
for k in xrange(S[-1])])
null_Y = np.random.uniform(*domains[-1], size=S[-1])
print null_X.shape, null_Y.shape
# load our regressions
from sklearn.linear_model import LinearRegression
from sklearn.svm import LinearSVR
from sklearn.neighbors import KNeighborsRegressor as KNN
from sklearn.ensemble import RandomForestRegressor as RF
from sklearn.preprocessing import PolynomialFeatures as PF
from sklearn.pipeline import Pipeline
from sklearn import cross_validation
names = ['Linear Regression','SVR','KNN Regression','Random Forest Regression','Polynomial Regression']
regressions = [LinearRegression(),
LinearSVR(C=1.0),
KNN(n_neighbors=10, algorithm='auto'),
RF(max_depth=5, max_features=1),
Pipeline([('poly', PF(degree=2)),('linear', LinearRegression(fit_intercept=False))])]
r2 = np.zeros((len(S), len(regressions), 2), dtype=np.dtype('float64'))
#iterate over sample sizes and regression algos
for i, N in enumerate(S):
# Randomly sample from synthetic data with sample size N
a = np.random.permutation(np.arange(S[-1]))[:N]
X = null_X[a]
Y = null_Y[a]
Y = np.ravel(Y)
print "Sample size = ", N
for k, reg in enumerate(regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)
r2[i, k, :] = [scores.mean(), scores.std()]
print("R^2 of %s: %0.2f (+/- %0.2f)" % (names[k], scores.mean(), scores.std() * 2))
Explanation: Step 4/5/6 part A: Null distribution
No relationship, i.e. density is independent of position. Let's just let density be uniform across the entire 3D space defined by the dataset. So the target variable Y, i.e. unmasked, follows a uniform distribution.
End of explanation
plt.errorbar(S, r2[:,0,0], yerr = r2[:,0,1], hold=True, label=names[0])
plt.errorbar(S, r2[:,1,0], yerr = r2[:,1,1], color='green', hold=True, label=names[1])
plt.errorbar(S, r2[:,2,0], yerr = r2[:,2,1], color='red', hold=True, label=names[2])
plt.errorbar(S, r2[:,3,0], yerr = r2[:,3,1], color='black', hold=True, label=names[3])
plt.errorbar(S, r2[:,4,0], yerr = r2[:,4,1], color='brown', hold=True, label=names[4])
plt.xscale('log')
plt.axhline(1, color='red', linestyle='--')
plt.xlabel('Sample size')
plt.ylabel('R^2 Score')
plt.title('Regression results on simulated data under the null')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
Explanation: Now graphing this data:
End of explanation
# X under the alt same as under the null
alt_X = null_X
f_X = np.apply_along_axis(lambda r: reduce(lambda x,y:x+y, r)/3, 1, alt_X)
f_X -= np.average(f_X)
f_X /= np.std(f_X)
alt_Y = np.random.normal(0, .01, size=f_X.shape)+f_X
print alt_Y.shape
r2 = np.zeros((len(S), len(regressions), 2), dtype=np.dtype('float64'))
#iterate over sample sizes and regression algos
for i, N in enumerate(S):
# Randomly sample from synthetic data with sample size N
a = np.random.permutation(np.arange(S[-1]))[:N]
X = alt_X[a]
Y = alt_Y[a]
Y = np.ravel(Y)
print "Sample size = ", N
for k, reg in enumerate(regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)
r2[i, k, :] = [scores.mean(), scores.std()]
print("R^2 of %s: %0.2f (+/- %0.2f)" % (names[k], scores.mean(), scores.std() * 2))
Explanation: Step 4/5/6 part b: Alternate distribution
Here we want to assume a conditional dependence. Let's keep x, y, z uniformly distributed across the sample space, but let density, $y_i$, be the sum of a deterministic function, $f:\mathbb{R}^3 \rightarrow \mathbb{R}$, and some Gaussian noise $\epsilon$ with zero mean and a small standard deviation. Let $f(x,y,z)=x+y+z$, standardized by subtracting its mean and dividing by its standard deviation over the sample. Let the standard deviation of $\epsilon$ be 0.01, matching np.random.normal(0, .01) in the code.
End of explanation
plt.errorbar(S, r2[:,0,0], yerr = r2[:,0,1], hold=True, label=names[0])
plt.errorbar(S, r2[:,1,0], yerr = r2[:,1,1], color='green', hold=True, label=names[1])
plt.errorbar(S, r2[:,2,0], yerr = r2[:,2,1], color='red', hold=True, label=names[2])
plt.errorbar(S, r2[:,3,0], yerr = r2[:,3,1], color='black', hold=True, label=names[3])
plt.errorbar(S, r2[:,4,0], yerr = r2[:,4,1], color='brown', hold=True, label=names[4])
plt.xscale('log')
plt.axhline(1, color='red', linestyle='--')
plt.xlabel('Sample size')
plt.ylabel('R^2 Score')
plt.title('Regression results on simulated data under the alternate')
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
plt.show()
Explanation: Now graphing it:
End of explanation
X = data[:, (0, 1, 2)]
Y = data[:, -1]
for i, reg in enumerate(regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)
print("R^2 of %s: %0.2f (+/- %0.2f)" % (names[i], scores.mean(), scores.std() * 2))
Explanation: Step 7: Apply on actual data
End of explanation
n_neighbors = np.arange(1, 50)
r2 = []
for n in n_neighbors:
reg = KNN(n_neighbors=n, algorithm='auto')
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)
r2.append(np.array([scores.mean(), scores.std()]))
r2 = np.array(r2)
plt.errorbar(n_neighbors, r2[:,0], yerr = r2[:,1])
plt.title("Number of neighbors against R^2 for KNN Regression")
plt.xlabel("number of neighbors")
plt.ylabel("R^2")
plt.show()
print "mean r^2 maximized at: ", np.argmax(r2[:,0])+1
print "variance minimized at: ", np.argmin(r2[:,1])+1
depth = np.arange(1, 20)
r2 = []
for d in depth:
reg = RF(max_depth=d, max_features=1)
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)
r2.append(np.array([scores.mean(), scores.std()]))
r2 = np.array(r2)
plt.errorbar(depth, r2[:,0], yerr = r2[:,1])
plt.title("Max depth against R^2 for RandomForestRegression")
plt.xlabel("Max depth")
plt.ylabel("R^2")
plt.show()
print "mean r^2 maximized at: ", np.argmax(r2[:,0])+1
print "variance minimized at: ", np.argmin(r2[:,1])+1
features = np.arange(1, 4)
r2 = []
for f in features:
reg = RF(max_depth=10, max_features=f)
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)
r2.append(np.array([scores.mean(), scores.std()]))
print("R^2 of %s: %0.2f (+/- %0.2f)" % ('RF', scores.mean(), scores.std() * 2))
r2 = np.array(r2)
Explanation: Tune parameters for RF and KNN
End of explanation
# boost number of neighbors for KNN and max depth for random forest
regressions = [KNN(n_neighbors=29, algorithm='auto'),
RF(max_depth=10, max_features=1)]
names = ['KNN Regression', 'Random Forest Regression']
for i, reg in enumerate(regressions):
scores = cross_validation.cross_val_score(reg, X, Y, scoring='r2', cv=10)
print("R^2 of %s: %0.2f (+/- %0.2f)" % (names[i], scores.mean(), scores.std() * 2))
Explanation: Conclude that for KNN, optimal number of neighbors is around 29. For random forest, optimal depth 10, features 2.
Retry KNN and RF regressions with new params
End of explanation |
8,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial for using structure factor data as the structure factor used in the structural-color package
This tutorial describes how to add your own structure factor data to Monte Carlo calculations
Copyright 2016, Vinothan N. Manoharan, Victoria Hwang, Annie Stephenson
This file is part of the structural-color python package.
This package is free software
Step1: For the single scattering model
set parameters
Step2: Construct the structure factor data
Here, we use discrete points from the percus-yevick approximation for structure factor, as an example. In practice, you will most likely use actual structure factor data imported from your own file
Step3: plot the structure factor data and interpolated function
Step4: Calculate reflectance
Step5: plot
Step6: For the Monte Carlo model
set parameters
Step7: Construct the structure factor data
Here, we use discrete points from the percus-yevick approximation for structure factor, as an example. In practice, you will most likely use actual structure factor data imported from your own file
Step8: plot the structure factor data and interpolated function
Step9: Calculate reflectance
Step10: plot | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import structcol as sc
import structcol.refractive_index as ri
from structcol import montecarlo as mc
from structcol import detector as det
from structcol import model
from structcol import structure
%matplotlib inline
Explanation: Tutorial for using structure factor data as the structure factor used in the structural-color package
This tutorial describes how to add your own structure factor data to Monte Carlo calculations
Copyright 2016, Vinothan N. Manoharan, Victoria Hwang, Annie Stephenson
This file is part of the structural-color python package.
This package is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
This package is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with this package. If not, see http://www.gnu.org/licenses/.
End of explanation
wavelengths = sc.Quantity(np.arange(400, 800, 20), 'nm') # wavelengths
radius = sc.Quantity('0.5 um') # particle radius
volume_fraction = sc.Quantity(0.5, '') # volume fraction of particles
n_particle = ri.n('fused silica', wavelengths)
n_matrix = ri.n('vacuum', wavelengths) # called from the refractive_index module. n_matrix is the
n_medium = ri.n('vacuum', wavelengths) # space within sample. n_medium is outside the sample.
# n_particle and n_matrix can have complex indices if absorption is desired
thickness = sc.Quantity('50 um') # thickness of the sample film
Explanation: For the single scattering model
set parameters
End of explanation
qd_data = np.arange(0,75, 0.1)
s_data = structure.factor_py(qd_data, volume_fraction.magnitude)
Explanation: Construct the structure factor data
Here, we use discrete points from the Percus-Yevick approximation for the structure factor as an example. In practice, you will most likely use actual structure factor data imported from your own file.
End of explanation
qd = np.arange(0,70, 0.1)# works up to qd = 72
s = structure.factor_data(qd, s_data, qd_data)
plt.figure()
plt.plot(qd, s, label = 'interpolated')
plt.plot(qd_data, s_data,'.', label = 'data')
plt.legend()
plt.xlabel('qd')
plt.ylabel('structure factor')
Explanation: plot the structure factor data and interpolated function
End of explanation
reflectance=np.zeros(len(wavelengths))
for i in range(len(wavelengths)):
reflectance[i],_,_,_,_ = sc.model.reflection(n_particle[i], n_matrix[i], n_medium[i], wavelengths[i],
radius, volume_fraction,
thickness=thickness,
structure_type='data',
structure_s_data=s_data,
structure_qd_data=qd_data)
Explanation: Calculate reflectance
End of explanation
plt.figure()
plt.plot(wavelengths, reflectance)
plt.ylim([0,0.1])
plt.ylabel('Reflectance')
plt.xlabel('wavelength (nm)')
Explanation: plot
End of explanation
ntrajectories = 500 # number of trajectories
nevents = 500 # number of scattering events in each trajectory
wavelengths = sc.Quantity(np.arange(400, 800, 20), 'nm') # wavelengths
radius = sc.Quantity('0.5 um') # particle radius
volume_fraction = sc.Quantity(0.5, '') # volume fraction of particles
n_particle = ri.n('fused silica', wavelengths)
n_matrix = ri.n('vacuum', wavelengths) # called from the refractive_index module. n_matrix is the
n_medium = ri.n('vacuum', wavelengths) # space within sample. n_medium is outside the sample.
# n_particle and n_matrix can have complex indices if absorption is desired
boundary = 'film' # geometry of sample, can be 'film' or 'sphere', see below for tutorial
# on sphere case
thickness = sc.Quantity('50 um') # thickness of the sample film
Explanation: For the Monte Carlo model
set parameters
End of explanation
qd_data = np.arange(0,75, 0.1)
s_data = structure.factor_py(qd_data, volume_fraction.magnitude)
Explanation: Construct the structure factor data
Here, we use discrete points from the Percus-Yevick approximation for the structure factor as an example. In practice, you will most likely use actual structure factor data imported from your own file.
End of explanation
qd = np.arange(0,70, 0.1)# works up to qd = 72
s = structure.factor_data(qd, s_data, qd_data)
plt.figure()
plt.plot(qd, s, label = 'interpolated')
plt.plot(qd_data, s_data,'.', label = 'data')
plt.legend()
plt.xlabel('qd')
plt.ylabel('structure factor')
Explanation: plot the structure factor data and interpolated function
End of explanation
reflectance = np.zeros(wavelengths.size)
for i in range(wavelengths.size):
# calculate n_sample
n_sample = ri.n_eff(n_particle[i], n_matrix[i], volume_fraction)
# Calculate the phase function and scattering and absorption coefficients from the single scattering model
p, mu_scat, mu_abs = mc.calc_scat(radius, n_particle[i], n_sample, volume_fraction, wavelengths[i],
structure_type = 'data',
structure_s_data = s_data,
structure_qd_data = qd_data)
# Initialize the trajectories
r0, k0, W0 = mc.initialize(nevents, ntrajectories, n_medium[i], n_sample, boundary)
r0 = sc.Quantity(r0, 'um')
k0 = sc.Quantity(k0, '')
W0 = sc.Quantity(W0, '')
# Generate a matrix of all the randomly sampled angles first
sintheta, costheta, sinphi, cosphi, _, _ = mc.sample_angles(nevents, ntrajectories, p)
# Create step size distribution
step = mc.sample_step(nevents, ntrajectories, mu_scat)
# Create trajectories object
trajectories = mc.Trajectory(r0, k0, W0)
# Run photons
trajectories.absorb(mu_abs, step)
trajectories.scatter(sintheta, costheta, sinphi, cosphi)
trajectories.move(step)
reflectance[i], transmittance = det.calc_refl_trans(trajectories, thickness, n_medium[i], n_sample, boundary)
Explanation: Calculate reflectance
End of explanation
plt.figure()
plt.plot(wavelengths, reflectance)
plt.ylim([0,1])
plt.ylabel('Reflectance')
plt.xlabel('wavelength (nm)')
Explanation: plot
End of explanation |
8,869 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nearest Neighbors
When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically
* Decide on a notion of similarity
* Find the documents that are most similar
In the assignment you will
* Gain intuition for different notions of similarity and practice finding similar documents.
* Explore the tradeoffs with representing documents using raw word counts and TF-IDF
* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.
Note to Amazon EC2 users
Step1: Load Wikipedia dataset
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
Step2: Extract word count vectors
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki.
Step3: Find nearest neighbors
Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, we will again use a GraphLab Create implementation of nearest neighbor search.
Step4: Let's look at the top 10 nearest neighbors by performing the following query
Step6: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.
Walter Mondale and Don Bonker are Democrats who made their career in late 1970s.
Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
Andy Anstett is a former politician in Manitoba, Canada.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages
Step7: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data
Step8: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio.
Step9: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first.
Step10: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Hint
Step11: Checkpoint. Check your has_top_words function on two random articles
Step12: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Hint
Step13: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
Step14: Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
TF-IDF to the rescue
Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons.
To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama
Step15: Let's determine whether this list makes sense.
* With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.
* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vector for Obama and Schilirio's pages. Notice that TF-IDF representation assigns a weight to each word. This weight captures relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
Step16: Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
Step17: The first 10 words should say
Step18: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
Choosing metrics
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.
Quiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint
Step19: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability
Step20: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
Step21: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
Step22: Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhemingly short, most of them being shorter than 300 words. The bias towards short articles is not appropriate in this application as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many of the Wikipedia articles are 300 words or more, and both Obama and Biden are over 300 words long.
Note
Step23: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
Step24: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
Moral of the story
Step25: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
Step26: Now, compute the cosine distance between the Barack Obama article and this tweet
Step27: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors | Python Code:
import graphlab
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
Explanation: Nearest Neighbors
When exploring a large set of documents -- such as Wikipedia, news articles, StackOverflow, etc. -- it can be useful to get a list of related material. To find relevant documents you typically
* Decide on a notion of similarity
* Find the documents that are most similar
In the assignment you will
* Gain intuition for different notions of similarity and practice finding similar documents.
* Explore the tradeoffs with representing documents using raw word counts and TF-IDF
* Explore the behavior of different distance metrics by looking at the Wikipedia pages most similar to President Obama’s page.
Note to Amazon EC2 users: To conserve memory, make sure to stop all the other notebooks before running this notebook.
Import necessary packages
As usual we need to first import the Python packages that we will need.
End of explanation
wiki = graphlab.SFrame('people_wiki.gl')
wiki
Explanation: Load Wikipedia dataset
We will be using the same dataset of Wikipedia pages that we used in the Machine Learning Foundations course (Course 1). Each element of the dataset consists of a link to the wikipedia article, the name of the person, and the text of the article (in lowercase).
End of explanation
wiki['word_count'] = graphlab.text_analytics.count_words(wiki['text'])
wiki
Explanation: Extract word count vectors
As we have seen in Course 1, we can extract word count vectors using a GraphLab utility function. We add this as a column in wiki.
End of explanation
model = graphlab.nearest_neighbors.create(wiki, label='name', features=['word_count'],
method='brute_force', distance='euclidean')
Explanation: Find nearest neighbors
Let's start by finding the nearest neighbors of the Barack Obama page using the word count vectors to represent the articles and Euclidean distance to measure distance. For this, we will again use a GraphLab Create implementation of nearest neighbor search.
End of explanation
model.query(wiki[wiki['name']=='Barack Obama'], label='name', k=10)
Explanation: Let's look at the top 10 nearest neighbors by performing the following query:
End of explanation
def top_words(name):
"""Get a table of the most frequent words in the given person's wikipedia page."""
row = wiki[wiki['name'] == name]
word_count_table = row[['word_count']].stack('word_count', new_column_name=['word','count'])
return word_count_table.sort('count', ascending=False)
obama_words = top_words('Barack Obama')
obama_words
barrio_words = top_words('Francisco Barrio')
barrio_words
Explanation: All of the 10 people are politicians, but about half of them have rather tenuous connections with Obama, other than the fact that they are politicians.
Francisco Barrio is a Mexican politician, and a former governor of Chihuahua.
Walter Mondale and Don Bonker are Democrats who made their career in late 1970s.
Wynn Normington Hugh-Jones is a former British diplomat and Liberal Party official.
Andy Anstett is a former politician in Manitoba, Canada.
Nearest neighbors with raw word counts got some things right, showing all politicians in the query result, but missed finer and important details.
For instance, let's find out why Francisco Barrio was considered a close neighbor of Obama. To do this, let's look at the most frequently used words in each of Barack Obama and Francisco Barrio's pages:
End of explanation
combined_words = obama_words.join(barrio_words, on='word')
combined_words
Explanation: Let's extract the list of most frequent words that appear in both Obama's and Barrio's documents. We've so far sorted all words from Obama and Barrio's articles by their word frequencies. We will now use a dataframe operation known as join. The join operation is very useful when it comes to playing around with data: it lets you combine the content of two tables using a shared column (in this case, the word column). See the documentation for more details.
For instance, running
obama_words.join(barrio_words, on='word')
will extract the rows from both tables that correspond to the common words.
End of explanation
combined_words = combined_words.rename({'count':'Obama', 'count.1':'Barrio'})
combined_words
Explanation: Since both tables contained the column named count, SFrame automatically renamed one of them to prevent confusion. Let's rename the columns to tell which one is for which. By inspection, we see that the first column (count) is for Obama and the second (count.1) for Barrio.
End of explanation
combined_words.sort('Obama', ascending=False)
Explanation: Note. The join operation does not enforce any particular ordering on the shared column. So to obtain, say, the five common words that appear most often in Obama's article, sort the combined table by the Obama column. Don't forget ascending=False to display largest counts first.
End of explanation
common_words = ['the', 'in', 'and', 'of', 'to'] # YOUR CODE HERE
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = word_count_vector.keys() # YOUR CODE HERE
# return True if common_words is a subset of unique_words
# return False otherwise
return set(common_words).issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
# YOUR CODE HERE
ht = wiki[wiki['has_top_words'] == True]
len(ht)
Explanation: Quiz Question. Among the words that appear in both Barack Obama and Francisco Barrio, take the 5 that appear most frequently in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
Hint:
* Refer to the previous paragraph for finding the words that appear in both articles. Sort the common words by their frequencies in Obama's article and take the largest five.
* Each word count vector is a Python dictionary. For each word count vector in SFrame, you'd have to check if the set of the 5 common words is a subset of the keys of the word count vector. Complete the function has_top_words to accomplish the task.
- Convert the list of top 5 words into set using the syntax
set(common_words)
where common_words is a Python list. See this link if you're curious about Python sets.
- Extract the list of keys of the word count dictionary by calling the keys() method.
- Convert the list of keys into a set as well.
- Use issubset() method to check if all 5 words are among the keys.
* Now apply the has_top_words function on every row of the SFrame.
* Compute the sum of the result column to obtain the number of articles containing all the 5 top words.
End of explanation
print 'Output from your function:', has_top_words(wiki[32]['word_count'])
print 'Correct output: True'
print 'Also check the length of unique_words. It should be 167'
print 'Output from your function:', has_top_words(wiki[33]['word_count'])
print 'Correct output: False'
print 'Also check the length of unique_words. It should be 188'
Explanation: Checkpoint. Check your has_top_words function on two random articles:
End of explanation
from graphlab.toolkits.distances import euclidean as ec
bo = wiki[wiki['name']=='Barack Obama'][0]['word_count']
gwb = wiki[wiki['name']=='George W. Bush'][0]['word_count']
jb = wiki[wiki['name']=='Joe Biden'][0]['word_count']
print ec(bo, gwb)
print ec(bo, jb)
print ec(gwb, jb)
Explanation: Quiz Question. Measure the pairwise distance between the Wikipedia pages of Barack Obama, George W. Bush, and Joe Biden. Which of the three pairs has the smallest distance?
Hint: To compute the Euclidean distance between two dictionaries, use graphlab.toolkits.distances.euclidean. Refer to this link for usage.
End of explanation
gwb_words = top_words('George W. Bush')
new_combined_words = obama_words.join(gwb_words, on='word').rename({'count':'Obama', 'count.1':'Bush'}).sort('Obama', ascending=False)
print new_combined_words
Explanation: Quiz Question. Collect all words that appear both in Barack Obama and George W. Bush pages. Out of those words, find the 10 words that show up most often in Obama's page.
End of explanation
wiki['tf_idf'] = graphlab.text_analytics.tf_idf(wiki['word_count'])
model_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='euclidean')
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
Explanation: Note. Even though common words are swamping out important subtle differences, commonalities in rarer political words still matter on the margin. This is why politicians are being listed in the query result instead of musicians, for example. In the next subsection, we will introduce a different metric that will place greater emphasis on those rarer words.
TF-IDF to the rescue
Much of the perceived commonalities between Obama and Barrio were due to occurrences of extremely frequent words, such as "the", "and", and "his". So nearest neighbors is recommending plausible results sometimes for the wrong reasons.
To retrieve articles that are more relevant, we should focus more on rare words that don't happen in every article. TF-IDF (term frequency–inverse document frequency) is a feature representation that penalizes words that are too common. Let's use GraphLab Create's implementation of TF-IDF and repeat the search for the 10 nearest neighbors of Barack Obama:
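For reference, one common form of the weight is $\text{tf-idf}(w, d) = \text{tf}(w, d) \cdot \log\frac{N}{\text{df}(w)}$, where $\text{tf}(w,d)$ is the count of word $w$ in document $d$, $N$ is the number of documents, and $\text{df}(w)$ is the number of documents containing $w$; GraphLab Create's implementation may differ in its exact smoothing.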
End of explanation
def top_words_tf_idf(name):
row = wiki[wiki['name'] == name]
word_count_table = row[['tf_idf']].stack('tf_idf', new_column_name=['word','weight'])
return word_count_table.sort('weight', ascending=False)
obama_tf_idf = top_words_tf_idf('Barack Obama')
obama_tf_idf
schiliro_tf_idf = top_words_tf_idf('Phil Schiliro')
schiliro_tf_idf
Explanation: Let's determine whether this list makes sense.
* With a notable exception of Roland Grossenbacher, the other 8 are all American politicians who are contemporaries of Barack Obama.
* Phil Schiliro, Jesse Lee, Samantha Power, and Eric Stern worked for Obama.
Clearly, the results are more plausible with the use of TF-IDF. Let's take a look at the word vectors for Obama's and Schiliro's pages. Notice that the TF-IDF representation assigns a weight to each word. This weight captures the relative importance of that word in the document. Let us sort the words in Obama's article by their TF-IDF weights; we do the same for Schiliro's article as well.
End of explanation
obama_tf_idf.join(schiliro_tf_idf, on = 'word').rename({'weight':'Obama', 'weight.1':'Schiliro'}).sort('Obama', ascending=False)
Explanation: Using the join operation we learned earlier, try your hands at computing the common words shared by Obama's and Schiliro's articles. Sort the common words by their TF-IDF weights in Obama's document.
End of explanation
common_words = ['obama', 'law', 'democratic', 'senate', 'presidential'] # YOUR CODE HERE
def has_top_words(word_count_vector):
# extract the keys of word_count_vector and convert it to a set
unique_words = word_count_vector.keys() # YOUR CODE HERE
# return True if common_words is a subset of unique_words
# return False otherwise
return set(common_words).issubset(unique_words) # YOUR CODE HERE
wiki['has_top_words'] = wiki['word_count'].apply(has_top_words)
# use has_top_words column to answer the quiz question
# YOUR CODE HERE
len(wiki[wiki['has_top_words']==True])
Explanation: The first 10 words should say: Obama, law, democratic, Senate, presidential, president, policy, states, office, 2011.
Quiz Question. Among the words that appear in both Barack Obama and Phil Schiliro, take the 5 that have largest weights in Obama. How many of the articles in the Wikipedia dataset contain all of those 5 words?
End of explanation
bo = wiki[wiki['name']=='Barack Obama'][0]['tf_idf']
jb = wiki[wiki['name']=='Joe Biden'][0]['tf_idf']
ec(bo, jb)
Explanation: Notice the huge difference in this calculation using TF-IDF scores instead of raw word counts. We've eliminated noise arising from extremely common words.
Choosing metrics
You may wonder why Joe Biden, Obama's running mate in two presidential elections, is missing from the query results of model_tf_idf. Let's find out why. First, compute the distance between TF-IDF features of Obama and Biden.
Quiz Question. Compute the Euclidean distance between TF-IDF features of Obama and Biden. Hint: When using Boolean filter in SFrame/SArray, take the index 0 to access the first match.
End of explanation
model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=10)
Explanation: The distance is larger than the distances we found for the 10 nearest neighbors, which we repeat here for readability:
End of explanation
def compute_length(row):
return len(row['text'].split(' '))
wiki['length'] = wiki.apply(compute_length)
nearest_neighbors_euclidean = model_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_euclidean = nearest_neighbors_euclidean.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_euclidean.sort('rank')
Explanation: But one may wonder, is Biden's article that different from Obama's, more so than, say, Schiliro's? It turns out that, when we compute nearest neighbors using the Euclidean distances, we unwittingly favor short articles over long ones. Let us compute the length of each Wikipedia document, and examine the document lengths for the 100 nearest neighbors to Obama's page.
End of explanation
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([0, 1000, 0, 0.04])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size':16})
plt.tight_layout()
Explanation: To see how these document lengths compare to the lengths of other documents in the corpus, let's make a histogram of the document lengths of Obama's 100 nearest neighbors and compare to a histogram of document lengths for all documents.
End of explanation
model2_tf_idf = graphlab.nearest_neighbors.create(wiki, label='name', features=['tf_idf'],
method='brute_force', distance='cosine')
nearest_neighbors_cosine = model2_tf_idf.query(wiki[wiki['name'] == 'Barack Obama'], label='name', k=100)
nearest_neighbors_cosine = nearest_neighbors_cosine.join(wiki[['name', 'length']], on={'reference_label':'name'})
nearest_neighbors_cosine.sort('rank')
Explanation: Relative to the rest of Wikipedia, nearest neighbors of Obama are overwhelmingly short, most of them being shorter than 300 words. The bias towards short articles is not appropriate in this application, as there is really no reason to favor short articles over long articles (they are all Wikipedia articles, after all). Many of the Wikipedia articles are 300 words or more, and both Obama and Biden are over 300 words long.
Note: For the interest of computation time, the dataset given here contains excerpts of the articles rather than full text. For instance, the actual Wikipedia article about Obama is around 25000 words. Do not be surprised by the low numbers shown in the histogram.
Note: Both word-count features and TF-IDF are proportional to word frequencies. While TF-IDF penalizes very common words, longer articles tend to have longer TF-IDF vectors simply because they have more words in them.
To remove this bias, we turn to cosine distances:
$$
d(\mathbf{x},\mathbf{y}) = 1 - \frac{\mathbf{x}^T\mathbf{y}}{\|\mathbf{x}\| \|\mathbf{y}\|}
$$
Cosine distances let us compare word distributions of two articles of varying lengths.
Let us train a new nearest neighbor model, this time with cosine distances. We then repeat the search for Obama's 100 nearest neighbors.
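For dense vectors, the same quantity can be written as a short NumPy sketch (the GraphLab model below handles the sparse TF-IDF dictionaries for us):
import numpy as np
def cosine_distance(x, y):
    # 1 minus the cosine of the angle between x and y
    return 1.0 - np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))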
End of explanation
plt.figure(figsize=(10.5,4.5))
plt.hist(wiki['length'], 50, color='k', edgecolor='None', histtype='stepfilled', normed=True,
label='Entire Wikipedia', zorder=3, alpha=0.8)
plt.hist(nearest_neighbors_euclidean['length'], 50, color='r', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (Euclidean)', zorder=10, alpha=0.8)
plt.hist(nearest_neighbors_cosine['length'], 50, color='b', edgecolor='None', histtype='stepfilled', normed=True,
label='100 NNs of Obama (cosine)', zorder=11, alpha=0.8)
plt.axvline(x=wiki['length'][wiki['name'] == 'Barack Obama'][0], color='k', linestyle='--', linewidth=4,
label='Length of Barack Obama', zorder=2)
plt.axvline(x=wiki['length'][wiki['name'] == 'Joe Biden'][0], color='g', linestyle='--', linewidth=4,
label='Length of Joe Biden', zorder=1)
plt.axis([0, 1000, 0, 0.04])
plt.legend(loc='best', prop={'size':15})
plt.title('Distribution of document length')
plt.xlabel('# of words')
plt.ylabel('Percentage')
plt.rcParams.update({'font.size': 16})
plt.tight_layout()
Explanation: From a glance at the above table, things look better. For example, we now see Joe Biden as Barack Obama's nearest neighbor! We also see Hillary Clinton on the list. This list looks even more plausible as nearest neighbors of Barack Obama.
Let's make a plot to better visualize the effect of having used cosine distance in place of Euclidean on our TF-IDF vectors.
End of explanation
sf = graphlab.SFrame({'text': ['democratic governments control law in response to popular act']})
sf['word_count'] = graphlab.text_analytics.count_words(sf['text'])
encoder = graphlab.feature_engineering.TFIDF(features=['word_count'], output_column_prefix='tf_idf')
encoder.fit(wiki)
sf = encoder.transform(sf)
sf
Explanation: Indeed, the 100 nearest neighbors using cosine distance provide a sampling across the range of document lengths, rather than just short articles like Euclidean distance provided.
Moral of the story: In deciding the features and distance measures, check if they produce results that make sense for your particular application.
Problem with cosine distances: tweets vs. long articles
Happily ever after? Not so fast. Cosine distances ignore all document lengths, which may be great in certain situations but not in others. For instance, consider the following (admittedly contrived) example.
+--------------------------------------------------------+
| +--------+ |
| One that shall not be named | Follow | |
| @username +--------+ |
| |
| Democratic governments control law in response to |
| popular act. |
| |
| 8:05 AM - 16 May 2016 |
| |
| Reply Retweet (1,332) Like (300) |
| |
+--------------------------------------------------------+
How similar is this tweet to Barack Obama's Wikipedia article? Let's transform the tweet into TF-IDF features, using an encoder fit to the Wikipedia dataset. (That is, let's treat this tweet as an article in our Wikipedia dataset and see what happens.)
End of explanation
tweet_tf_idf = sf[0]['tf_idf.word_count']
tweet_tf_idf
obama = wiki[wiki['name'] == 'Barack Obama']
obama
Explanation: Let's look at the TF-IDF vectors for this tweet and for Barack Obama's Wikipedia entry, just to visually see their differences.
End of explanation
obama_tf_idf = obama[0]['tf_idf']
graphlab.toolkits.distances.cosine(obama_tf_idf, tweet_tf_idf)
Explanation: Now, compute the cosine distance between the Barack Obama article and this tweet:
End of explanation
model2_tf_idf.query(obama, label='name', k=10)
Explanation: Let's compare this distance to the distance between the Barack Obama article and all of its Wikipedia 10 nearest neighbors:
End of explanation |
8,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time Series Filters
Step1: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t=1}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
Step2: Baxter-King approximate band-pass filter
Step3: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
Step4: Christiano-Fitzgerald approximate band-pass filter | Python Code:
%matplotlib inline
from __future__ import print_function
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
index = pd.Index(sm.tsa.datetools.dates_from_range('1959Q1', '2009Q3'))
print(index)
dta.index = index
del dta['year']
del dta['quarter']
print(sm.datasets.macrodata.NOTE)
print(dta.head(10))
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
dta.realgdp.plot(ax=ax);
legend = ax.legend(loc = 'upper left');
legend.prop.set_size(20);
Explanation: Time Series Filters
End of explanation
gdp_cycle, gdp_trend = sm.tsa.filters.hpfilter(dta.realgdp)
gdp_decomp = dta[['realgdp']]
gdp_decomp["cycle"] = gdp_cycle
gdp_decomp["trend"] = gdp_trend
fig = plt.figure(figsize=(12,8))
ax = fig.add_subplot(111)
gdp_decomp[["realgdp", "trend"]]["2000-03-31":].plot(ax=ax, fontsize=16);
legend = ax.get_legend()
legend.prop.set_size(20);
Explanation: Hodrick-Prescott Filter
The Hodrick-Prescott filter separates a time-series $y_t$ into a trend $\tau_t$ and a cyclical component $\zeta_t$
$$y_t = \tau_t + \zeta_t$$
The components are determined by minimizing the following quadratic loss function
$$\min_{\{ \tau_{t}\} }\sum_{t}^{T}\zeta_{t}^{2}+\lambda\sum_{t=1}^{T}\left[\left(\tau_{t}-\tau_{t-1}\right)-\left(\tau_{t-1}-\tau_{t-2}\right)\right]^{2}$$
End of explanation
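As a hedged aside (not part of the original notebook): the strength of the smoothing is controlled by the lamb argument of hpfilter, and the call above relies on its default. The sketch below makes the conventional quarterly value explicit and shows how a larger value would be passed for a smoother trend.
# Sketch: lamb defaults to 1600, the conventional value for quarterly data;
# a larger value penalizes trend variation more heavily and yields a smoother trend.
gdp_cycle_default, gdp_trend_default = sm.tsa.filters.hpfilter(dta.realgdp, lamb=1600)
gdp_cycle_smooth, gdp_trend_smooth = sm.tsa.filters.hpfilter(dta.realgdp, lamb=129600)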
bk_cycles = sm.tsa.filters.bkfilter(dta[["infl","unemp"]])
Explanation: Baxter-King approximate band-pass filter: Inflation and Unemployment
Explore the hypothesis that inflation and unemployment are counter-cyclical.
The Baxter-King filter is intended to explictly deal with the periodicty of the business cycle. By applying their band-pass filter to a series, they produce a new series that does not contain fluctuations at higher or lower than those of the business cycle. Specifically, the BK filter takes the form of a symmetric moving average
$$y_{t}^{*}=\sum_{k=-K}^{k=K}a_ky_{t-k}$$
where $a_{-k}=a_k$ and $\sum_{k=-K}^{K}a_k=0$ to eliminate any trend in the series and render it stationary if the series is I(1) or I(2).
For completeness, the filter weights are determined as follows
$$a_{j} = B_{j}+\theta\text{ for }j=0,\pm1,\pm2,\dots,\pm K$$
$$B_{0} = \frac{\left(\omega_{2}-\omega_{1}\right)}{\pi}$$
$$B_{j} = \frac{1}{\pi j}\left(\sin\left(\omega_{2}j\right)-\sin\left(\omega_{1}j\right)\right)\text{ for }j=\pm1,\pm2,\dots,\pm K$$
where $\theta$ is a normalizing constant such that the weights sum to zero.
$$\theta=\frac{-\sum_{j=-K}^{K}B_{j}}{2K+1}$$
$$\omega_{1}=\frac{2\pi}{P_{H}}$$
$$\omega_{2}=\frac{2\pi}{P_{L}}$$
$P_L$ and $P_H$ are the periodicity of the low and high cut-off frequencies. Following Burns and Mitchell's work on US business cycles which suggests cycles last from 1.5 to 8 years, we use $P_L=6$ and $P_H=32$ by default.
End of explanation
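The call above uses the defaults; as a hedged sketch, the band edges and window length can also be passed explicitly (these are just the default values restated for illustration).
# Sketch: make the band-pass settings explicit.
# low/high are the periodicities P_L and P_H in quarters; K is the lead/lag window length.
bk_cycles_explicit = sm.tsa.filters.bkfilter(dta[["infl", "unemp"]], low=6, high=32, K=12)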
fig = plt.figure(figsize=(12,10))
ax = fig.add_subplot(111)
bk_cycles.plot(ax=ax, style=['r--', 'b-']);
Explanation: We lose K observations on both ends. It is suggested to use K=12 for quarterly data.
End of explanation
print(sm.tsa.stattools.adfuller(dta['unemp'])[:3])
print(sm.tsa.stattools.adfuller(dta['infl'])[:3])
cf_cycles, cf_trend = sm.tsa.filters.cffilter(dta[["infl","unemp"]])
print(cf_cycles.head(10))
fig = plt.figure(figsize=(14,10))
ax = fig.add_subplot(111)
cf_cycles.plot(ax=ax, style=['r--','b-']);
Explanation: Christiano-Fitzgerald approximate band-pass filter: Inflation and Unemployment
The Christiano-Fitzgerald filter is a generalization of BK and can thus also be seen as a weighted moving average. However, the CF filter is asymmetric about $t$ as well as using the entire series. The implementation of their filter involves the calculation of the weights in
$$y_{t}^{*}=B_{0}y_{t}+B_{1}y_{t+1}+\dots+B_{T-1-t}y_{T-1}+\tilde B_{T-t}y_{T}+B_{1}y_{t-1}+\dots+B_{t-2}y_{2}+\tilde B_{t-1}y_{1}$$
for $t=3,4,...,T-2$, where
$$B_{j} = \frac{\sin(jb)-\sin(ja)}{\pi j},j\geq1$$
$$B_{0} = \frac{b-a}{\pi},a=\frac{2\pi}{P_{U}},b=\frac{2\pi}{P_{L}}$$
$\tilde B_{T-t}$ and $\tilde B_{t-1}$ are linear functions of the $B_{j}$'s, and the values for $t=1,2,T-1,$ and $T$ are also calculated in much the same way. $P_{U}$ and $P_{L}$ are as described above with the same interpretation.
The CF filter is appropriate for series that may follow a random walk.
End of explanation |
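Analogously, the CF filter's band edges can be passed explicitly; the hedged sketch below simply restates the defaults for illustration.
# Sketch: cffilter with its default band edges made explicit.
cf_cycles_explicit, cf_trend_explicit = sm.tsa.filters.cffilter(
    dta[["infl", "unemp"]], low=6, high=32, drift=True)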
8,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Conway's Game of Life
Authors
Step1: Import necessary libraries
Step8: Conway Game of Life Grid Class
Step15: Conway Game of Life Cell Class
Step16: Test Text Grid
Step17: Test Animation Grid | Python Code:
from IPython.display import IFrame
IFrame('https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life',
width = 800, height = 500)
Explanation: Conway's Game of Life
Authors: Edwin Weill & Brad Green
Due Date: November 29th
This iPython notebook serves as the project code for the final project in MATH 8650 (Data Structures).
Project Description: Conway's game of life is a cellular automaton devised by John Horton Conway in 1970.
The "game" is structured so that the evolution is only based on the initial state, meaning no user input is needed. Initial configurations can be created randomly or by creating known patterns with particular properties.
A set of rules is derived that mimics growth of a colony of some biological organisms. In most cases, the "game" is played on a two-dimensional grid which contains "dead" and "living" cells. The following are a small subset of the rules that govern the evolution of the "game".
Reproduction: If a "dead" cell is surrounded by exactly 3 "living" cells, it become a "living" cell
Underpopulation: If a "living" cell is surrounded by fewer than two "living" cells, it dies.
Overpopulation: If a "living" cell is surrounded by more than three "living" cells, it dies.
Stasis: If a "living" cell is surrounded by two or three "living" cells, it survives.
Conway's Game of Life Wiki
End of explanation
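The classes below encapsulate these rules; as a minimal hedged sketch, the fate of a single cell in the next generation can be written directly from the four rules (a standalone helper for illustration, not part of the classes defined later).
def next_state(alive, live_neighbors):
    # Next-generation state of one cell under the standard B3/S23 rules.
    if not alive:
        return live_neighbors == 3   # reproduction
    if live_neighbors < 2:
        return False                 # underpopulation
    if live_neighbors > 3:
        return False                 # overpopulation
    return True                      # stasis: survives with 2 or 3 neighbors

# e.g. next_state(alive=False, live_neighbors=3) -> True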
import numpy as np
%pylab inline
from JSAnimation.IPython_display import display_animation, anim_to_html
from matplotlib import animation
from random import randint
from copy import deepcopy
Explanation: Import necessary libraries
End of explanation
class ConwayGOLGrid():
Represents a grid in the Conway's Game of Life problem where
each of the cells contained in the grid may be either alive or
dead for any given state.
def __init__(self, width=100, height=100, startCells=[],
optimized=True, variant="B3/S23"):
Initializes a Grid as a 2D list and comprised of Cells.
Parameters
----------
width, height: size of the board
startCells: list of cells to start as alive.
If startCells is empty, cells will spawn as alive
at a rate of 30%.
startCells should be a list of coordinates, (x,y)
optimized: determines whether or not to use data structures
to improve overall run-time.
variant: defines variant of life played. Options as follows:
B3/S23: default (Born with 3, Survives with 2 or 3)
B6/S16
B1/S12
B36/S23: Highlife
B2/S3: Seeds
B2/S
self.width, self.height = width, height
self.__optimized = optimized
self.cells = []
self.__living = set()
if variant == "B3/S23":
self.__born = [3]
self.__survives = [2, 3]
elif variant == "B6/S16":
self.__born = [6]
self.__survives = [1, 6]
elif variant == "B1/S12":
self.__born = [1]
self.__survives = [1,2]
elif variant == "B36/S23":
self.__born = [3, 6]
self.__survives = [2, 3]
elif variant == "B2/S3":
self.__born = [2]
self.__survives = [3]
elif variant == "B2/S":
self.__born = [2]
self.__survives = []
else:
print variant, " is not a valid variant. Using B3/S23."
self.__born = [3]
self.__survives = [2,3]
for x in range(self.width):
# Create a new list for 2D structure
self.cells.append([])
for y in range(self.height):
# If no startCells provided, randomly init as alive
if len(startCells) == 0 and randint(0,100) < 30:
self.cells[x].append(ConwayGOLCell(x, y, True))
self.__living.add((x,y))
else:
self.cells[x].append(ConwayGOLCell(x,y))
# Give life to all cells in the startCells list
for cell in startCells:
self.cells[cell[0]][cell[1]].spawn()
self.__living.add((cell))
def update(self):
Updates the current state of the game using the standard
Game of Life rules.
Parameters
----------
None
Returns
-------
True if there are remaining alive cells.
False otherwise.
alive = False
if not self.__optimized:
# Deep copy the list to make sure the entire board updates correctly
tempGrid = deepcopy(self.cells)
# For every cell, check the neighbors.
for x in range(self.width):
for y in range(self.height):
neighbors = self.cells[x][y].num_neighbors(self)
# Living cells stay alive with _survives # of neighbors, else die
if self.cells[x][y].is_alive():
if not (neighbors in self.__survives):
tempGrid[x][y].die()
else:
alive = True
# Non living cells come alive with 3 neighbors
else:
if neighbors in self.__born:
tempGrid[x][y].spawn()
alive = True
# Deep copy the tempGrid to prevent losing the reference when function is over
self.cells = deepcopy(tempGrid)
else:
count = [[0 for y in range(self.height)] for x in range(self.width)]
to_check = set()
# For each cell that is alive...
for cell in self.__living:
x, y = cell
to_check.add(cell)
# Grab all of its neighbors
for neighbor in self.cells[x][y].neighbors:
n_x, n_y = neighbor
# If the neighbors are valid
if ( n_x >= 0 and n_y >= 0 and
n_x < self.width and n_y < self.height):
# Then increment their count and add them to a set
count[n_x][n_y] += 1
to_check.add(neighbor)
# Then start over living.
self.__living = set()
# Above, we add 1 to the count each time a cell is touched by an alive cell.
# So we know count contains the number of alive neighbors any given cell has.
# We use this to quickly check the rules of life and add cells to living array as needed.
for cell in to_check:
x, y = cell
if self.cells[x][y].is_alive():
if not count[x][y] in self.__survives:
self.cells[x][y].die()
else:
self.__living.add(cell)
alive = True
else:
if count[x][y] in self.__born:
self.cells[x][y].spawn()
self.__living.add(cell)
alive = True
return alive
def print_text_grid(self):
Prints the current state of the board using text.
Parameters
----------
None
Returns
-------
None
for y in range(self.height):
for x in range(self.width):
if self.cells[x][y].is_alive():
print "X" ,
else:
print "." ,
print "\n"
print "\n\n"
def conway_step_test(self, X):
Game of life step using generator expressions
nbrs_count = sum(np.roll(np.roll(X, i, 0), j, 1)
for i in (-1, 0, 1) for j in (-1, 0, 1)
if (i != 0 or j != 0))
return (nbrs_count == 3) | (X & (nbrs_count == 2))
def conway_animate(self, dpi=10, frames=10,
interval=300, mode='loop'):
Animate Conway's Game of Life
Parameters
----------
dpi: (int) number of dots/inch in animation (size of board)
frames: (int) number of frames for animation
interval: (float) time between frames (ms)
mode: (string) animation mode (options: 'loop','once','reflect')
# Replace this block with the conversion of our cell data
np.random.seed(0)
X_old = np.zeros((30, 40), dtype=bool)
r = np.random.random((10, 20))
X_old[10:20, 10:30] = (r > 0.75)
# Replace X_old with new transformed data
print X_old
X = np.asarray(X_old)
X = X.astype(bool)
fig = plt.figure(figsize=(X.shape[1] * 1. / dpi, X.shape[0] * 1. / dpi),
dpi=dpi)
ax = fig.add_axes([0,0,1,1], xticks=[], yticks=[], frameon=False)
#im = ax.imshow(X)
im = ax.imshow(X, cmap=plt.cm.binary, interpolation='nearest')
im.set_clim(-0.05, 1)
def animate(i):
im.set_data(animate.X)
# Replace with self.update()
animate.X = self.conway_step_test(animate.X)
return (im,)
animate.X = X
anim = animation.FuncAnimation(fig, animate,
frames=frames, interval=interval)
return display_animation(anim, default_mode=mode)
Explanation: Conway Game of Life Grid Class
End of explanation
class ConwayGOLCell():
Represents a cell in the Conway's Game of Life problem where
a cell can either be alive or dead and the next state of the
cell is based on the states of the immediate (8) neighbors.
def __init__(self, x, y, alive=False):
Create information for the given cell including the x and
y coordinates of the cell, whether it is currently alive
or dead, it's neighbors, and its current color.
Parameters
----------
x, y: give the coordinate of the cell in grid
alive: gives current state of the cell
Returns
-------
None
self.x, self.y = x, y
self.alive = alive
self.neighbors = [(x-1,y-1), (x, y-1), (x+1, y-1),
(x-1,y ), (x+1, y ),
(x-1,y+1), (x, y+1), (x+1, y+1)]
self.color = (255,255,255)
def spawn(self):
Changes the state of a cell from dead to alive. Assumes
that the cell is dead to be changed to alive (no need to
modify if already alive).
Parameters
----------
None
Returns
-------
None
assert self.alive==False
self.alive = True
def die(self):
Changes the state of a cell from alive to dead. Assumes
that the cell is alive to be changed to dead (no need to
modify if already dead).
Parameters
----------
None
Returns
-------
None
assert self.alive==True
self.alive = False
def is_alive(self):
Returns status of a cell.
Parameters
----------
None
Returns
-------
True if cell is alive.
return self.alive
def num_neighbors(self, grid):
Returns the number of neighbors of a cell.
Parameters
----------
grid: the ConwayGOLGrid object containing all cells
Returns
-------
number of alive neighbors
num_neighbors = 0
for cell in self.neighbors:
x,y = cell
if ( x >= 0 and x < grid.width and
y >= 0 and y < grid.height and
grid.cells[x][y].is_alive()):
num_neighbors += 1
return num_neighbors
Explanation: Conway Game of Life Cell Class
End of explanation
test_game = ConwayGOLGrid(20,20, optimized=False, variant="B2/S")
test_game.print_text_grid()
count = 0
while count < 20 and test_game.update():
count += 1
test_game.print_text_grid()
'''
while test_game.update():
if count % 10 == 0:
print "Iteration ", count
test_game.print_grid()
if count > 100:
break
count += 1
'''
print "Finsihed after ", count, "iterations"
Explanation: Test Text Grid
End of explanation
test_game2 = ConwayGOLGrid(20, 20, optimized=True, variant="B2/S")
test_game2.conway_animate(dpi=5, frames=20, mode='loop')
Explanation: Test Animation Grid
End of explanation |
8,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright © 2019 The TensorFlow Authors.
Step1: TensorFlow Extended (TFX) Workshop
Run this notebook in Colab
Running a simple pipeline manually in a Colab Notebook
This notebook demonstrates how to use Jupyter/Colab notebooks for TFX iterative development. Here, we walk through the Chicago Taxi example in an interactive notebook.
Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful when doing development of your own pipelines as a lightweight development environment, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.
Orchestration
In a production deployment of TFX you will use an orchestrator such as Apache Airflow, Kubeflow, or Apache Beam. In an interactive notebook the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
Metadata
In a production deployment of TFX you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL, and stores the metadata payloads in a persistent store such as on your filesystem. In an interactive notebook, both properties and payloads are stored in the /tmp directory on the Jupyter notebook or Colab server.
Setup
First, install the necessary packages, download data, import modules and set up paths.
Install TFX and TensorFlow
Note
Because of some of the package updates, you must use the button at the bottom of this cell's output to restart the runtime. After restarting, rerun this cell.
Step2: Import packages
Import necessary packages, including standard TFX component classes.
Step3: Check the versions
Step4: Download example data
Download the sample dataset for use in our TFX pipeline.
The data comes from the Taxi Trips dataset released by the City of Chicago. You will develop a binary classification model to predict whether or not customers will tip their taxi drivers more or less than 20%.
The columns in the dataset are
Step5: Take a quick look at the CSV file.
Step6: Create the InteractiveContext
An interactive context is used to provide global context when running a TFX pipeline in a notebook without using a runner or orchestrator such as Apache Airflow or Kubeflow. This style of development is only useful when developing the code for a pipeline, and cannot currently be used to deploy a working pipeline to production.
Step7: Run TFX Components Interactively
In the cells that follow you will construct TFX components and run each one interactively within the InteractiveContext to obtain ExecutionResult objects. This mirrors the process of an orchestrator running components in a TFX DAG based on when the dependencies for each component are met.
The ExampleGen Component
In any ML development process the first step when starting code development is to ingest the training and test datasets. The ExampleGen component brings data into the TFX pipeline.
Create an ExampleGen component and run it.
Step8: ExampleGen's outputs include 2 artifacts
Step9: Take a peek at the output training examples to see what they look like.
Get the URI of the output artifact representing the training examples, which is a directory
Get the list of files in this directory (all compressed TFRecord files), and create a TFRecordDataset to read these files
Iterate over the first 3 records and decode them using a TFExampleDecoder to check the results
Step10: The StatisticsGen Component
The StatisticsGen component computes descriptive statistics for your dataset. The statistics that it generates can be visualized for review, and are used for example validation and to infer a schema.
Create a StatisticsGen component and run it.
Step11: Again, let's take a peek at the output training artifact. Note that this time it is a TFRecord file containing a single record with a serialized DatasetFeatureStatisticsList protobuf
Step12: The statistics can be visualized using the tfdv.visualize_statistics() function
Step13: The SchemaGen Component
The SchemaGen component generates a schema for your data based on the statistics from StatisticsGen. It tries to infer the data types of each of your features, and the ranges of legal values for categorical features.
Create a SchemaGen component and run it.
Step14: The generated artifact is just a schema.pbtxt containing a text representation of a schema_pb2.Schema protobuf
Step15: It can be visualized using tfdv.display_schema()
Step16: The ExampleValidator Component
The ExampleValidator performs anomaly detection, based on the statistics from StatisticsGen and the schema from SchemaGen. It looks for problems such as missing values, values of the wrong type, or categorical values outside of the domain of acceptable values.
Create an ExampleValidator component and run it.
Step17: The output artifact of ExampleValidator is an anomalies.pbtxt file describing an anomalies_pb2.Anomalies protobuf
Step18: This can be visualized using the tfdv.display_anomalies() function. Did it find any anomalies?
Step19: The Transform Component
The Transform component performs data transformations and feature engineering. The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
Define some constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook
Step23: Now define a module containing the preprocessing_fn() function that will be passed to the Transform component
Step24: Create and run the Transform component, referring to the files that were created above.
Step25: The Transform component has 2 types of outputs
Step26: Take a peek at the transform_graph artifact. It points to a directory containing 3 subdirectories.
Step27: The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data. The transformed_metadata subdirectory contains the schema of the preprocessed data.
Take a look at some of the transformed examples and check that they are indeed processed as intended.
Step34: The Trainer Component
The Trainer component trains models using TensorFlow.
Create a Python module containing a trainer_fn function, which must return an estimator. If you prefer creating a Keras model, you can do so and then convert it to an estimator using keras.model_to_estimator().
Step35: Create and run the Trainer component.
Step36: Take a peek at the trained model which was exported from Trainer.
Step37: Analyze Training with TensorBoard
Use TensorBoard to analyze the model training that was done in Trainer, and see how well our model trained.
Step38: The Evaluator Component
The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on which slices are defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
Create and run an Evaluator component.
Step39: Use the Evaluator results to generate model performance data which can be visualized. First create evaluation input data.
Step41: Run the analysis of a particular slice of data.
Step42: Print the slicing metrics.
Step43: Examine the output data.
Step44: The ModelValidator Component
The ModelValidator component performs validation of your candidate model compared to the previously deployed model (if any) using criteria that you define, or to a baseline value. If the new model scores better than the previous model it will be "blessed" by ModelValidator, approving it for deployment.
Step45: Examine the output of ModelValidator.
Step46: The Pusher Component
The Pusher component checks whether a model has been "blessed", and if so, deploys it to production by pushing the model to a well known file destination.
Step47: Create and run a Pusher component.
Step48: Examine the output of Pusher.
Step49: TensorFlow Serving
Now that we have a trained model that has been blessed by ModelValidator, and pushed to our deployment target by Pusher, we can load it into TensorFlow Serving and start serving inference requests.
Examine your saved model
We'll use the command line utility saved_model_cli to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
Step50: That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't this would be important information. It doesn't tell us everything, but it's a great start.
Add TensorFlow Serving distribution URI as a package source
Step51: Install TensorFlow Serving
This is all you need - one command line! Please note that running TensorFlow Serving in a Docker Container is also a great option, with a lot of advantages.
Step52: Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads we can start making inference requests using REST. There are some important parameters
Step55: Prepare data for inference requests
Our example data is stored in a CSV file on disk.
We first have to read the file and decode the examples from CSV, and then encode these as Example protos to feed to Tensorflow Serving.
A few notes
Step57: Perform Inference on example data
Prepare the example data using the utility defined above and batch all requests together to send a single REST API call to Tensorflow Serving. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright © 2019 The TensorFlow Authors.
End of explanation
!pip install -q -U \
tensorflow==2.0.0 \
tfx==0.15.0 \
pyarrow==0.14.1
!pip install -U grpcio==1.24.3
Explanation: TensorFlow Extended (TFX) Workshop
Run this notebook in Colab
Running a simple pipeline manually in a Colab Notebook
This notebook demonstrates how to use Jupyter/Colab notebooks for TFX iterative development. Here, we walk through the Chicago Taxi example in an interactive notebook.
Working in an interactive notebook is a useful way to become familiar with the structure of a TFX pipeline. It's also useful when doing development of your own pipelines as a lightweight development environment, but you should be aware that there are differences in the way interactive notebooks are orchestrated, and how they access metadata artifacts.
Orchestration
In a production deployment of TFX you will use an orchestrator such as Apache Airflow, Kubeflow, or Apache Beam. In an interactive notebook the notebook itself is the orchestrator, running each TFX component as you execute the notebook cells.
Metadata
In a production deployment of TFX you will access metadata through the ML Metadata (MLMD) API. MLMD stores metadata properties in a database such as MySQL, and stores the metadata payloads in a persistent store such as on your filesystem. In an interactive notebook, both properties and payloads are stored in the /tmp directory on the Jupyter notebook or Colab server.
Setup
First, install the necessary packages, download data, import modules and set up paths.
Install TFX and TensorFlow
Note
Because of some of the package updates, you must use the button at the bottom of this cell's output to restart the runtime. After restarting, rerun this cell.
End of explanation
import os
import pprint
import tempfile
import urllib
import tensorflow as tf
pp = pprint.PrettyPrinter()
import tfx
from tfx.components.evaluator.component import Evaluator
from tfx.components.example_gen.csv_example_gen.component import CsvExampleGen
from tfx.components.example_validator.component import ExampleValidator
from tfx.components.model_validator.component import ModelValidator
from tfx.components.pusher.component import Pusher
from tfx.components.schema_gen.component import SchemaGen
from tfx.components.statistics_gen.component import StatisticsGen
from tfx.components.trainer.component import Trainer
from tfx.components.transform.component import Transform
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.proto import evaluator_pb2
from tfx.proto import pusher_pb2
from tfx.proto import trainer_pb2
from tfx.utils.dsl_utils import external_input
from tensorflow.core.example import example_pb2
from tensorflow_metadata.proto.v0 import anomalies_pb2
from tensorflow_metadata.proto.v0 import schema_pb2
from tensorflow_metadata.proto.v0 import statistics_pb2
import tensorflow_data_validation as tfdv
import tensorflow_transform as tft
import tensorflow_model_analysis as tfma
Explanation: Import packages
Import necessary packages, including standard TFX component classes.
End of explanation
print('TensorFlow version: {}'.format(tf.__version__))
print('TFX version: {}'.format(tfx.__version__))
print('TFT version: {}'.format(tft.__version__))
print('TFDV version: {}'.format(tfdv.__version__))
print('TFMA version: {}'.format(tfma.VERSION_STRING))
Explanation: Check the versions
End of explanation
# Download the example data.
_data_root = tempfile.mkdtemp(prefix='tfx-data')
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/data/simple/data.csv'
_data_filepath = os.path.join(_data_root, "data.csv")
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
Explanation: Download example data
Download the sample dataset for use in our TFX pipeline.
The data comes from the Taxi Trips dataset released by the City of Chicago. You will develop a binary classification model to predict whether or not customers will tip their taxi drivers more or less than 20%.
The columns in the dataset are:
<table>
<tr><td>pickup_community_area</td><td>fare</td><td>trip_start_month</td></tr>
<tr><td>trip_start_hour</td><td>trip_start_day</td><td>trip_start_timestamp</td></tr>
<tr><td>pickup_latitude</td><td>pickup_longitude</td><td>dropoff_latitude</td></tr>
<tr><td>dropoff_longitude</td><td>trip_miles</td><td>pickup_census_tract</td></tr>
<tr><td>dropoff_census_tract</td><td>payment_type</td><td>company</td></tr>
<tr><td>trip_seconds</td><td>dropoff_community_area</td><td>tips</td></tr>
</table>
End of explanation
!head {_data_filepath}
Explanation: Take a quick look at the CSV file.
End of explanation
# Here, we create an InteractiveContext using default parameters. This will
# use a temporary directory with an ephemeral ML Metadata database instance.
# To use your own pipeline root or database, the optional properties
# `pipeline_root` and `metadata_connection_config` may be passed to
# InteractiveContext.
context = InteractiveContext()
Explanation: Create the InteractiveContext
An interactive context is used to provide global context when running a TFX pipeline in a notebook without using a runner or orchestrator such as Apache Airflow or Kubeflow. This style of development is only useful when developing the code for a pipeline, and cannot currently be used to deploy a working pipeline to production.
End of explanation
# Use the packaged CSV input data.
input_data = external_input(_data_root)
# Brings data into the pipeline or otherwise joins/converts training data.
example_gen = CsvExampleGen(input=input_data)
context.run(example_gen)
Explanation: Run TFX Components Interactively
In the cells that follow you will construct TFX components and run each one interactively within the InteractiveContext to obtain ExecutionResult objects. This mirrors the process of an orchestrator running components in a TFX DAG based on when the dependencies for each component are met.
The ExampleGen Component
In any ML development process, the first step is to ingest the training and test datasets. The ExampleGen component brings data into the TFX pipeline.
Create an ExampleGen component and run it.
End of explanation
for artifact in example_gen.outputs['examples'].get():
print(artifact.split, artifact.uri)
Explanation: ExampleGen's outputs include 2 artifacts: the training examples and the eval examples (by default, split 2/3 training, 1/3 eval):
End of explanation
train_uri = example_gen.outputs['examples'].get()[0].uri
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
decoder = tfdv.TFExampleDecoder()
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = decoder.decode(serialized_example)
pp.pprint(example)
Explanation: Take a peek at the output training examples to see what they look like.
Get the URI of the output artifact representing the training examples, which is a directory
Get the list of files in this directory (all compressed TFRecord files), and create a TFRecordDataset to read these files
Iterate over the first 3 records and decode them using a TFExampleDecoder to check the results:
End of explanation
# Computes statistics over data for visualization and example validation.
statistics_gen = StatisticsGen(
input_data=example_gen.outputs['examples'])
context.run(statistics_gen)
Explanation: The StatisticsGen Component
The StatisticsGen component computes descriptive statistics for your dataset. The statistics that it generates can be visualized for review, and are used for example validation and to infer a schema.
Create a StatisticsGen component and run it.
End of explanation
train_uri = statistics_gen.outputs['statistics'].get()[0].uri
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
dataset = tf.data.TFRecordDataset(tfrecord_filenames)
for tfrecord in dataset.take(1):
serialized_example = tfrecord.numpy()
stats = statistics_pb2.DatasetFeatureStatisticsList()
stats.ParseFromString(serialized_example)
Explanation: Again, let's take a peek at the output training artifact. Note that this time it is a TFRecord file containing a single record with a serialized DatasetFeatureStatisticsList protobuf:
End of explanation
tfdv.visualize_statistics(stats)
Explanation: The statistics can be visualized using the tfdv.visualize_statistics() function:
End of explanation
# Generates schema based on statistics files.
infer_schema = SchemaGen(statistics=statistics_gen.outputs['statistics'])
context.run(infer_schema)
Explanation: The SchemaGen Component
The SchemaGen component generates a schema for your data based on the statistics from StatisticsGen. It tries to infer the data types of each of your features, and the ranges of legal values for categorical features.
Create a SchemaGen component and run it.
End of explanation
train_uri = infer_schema.outputs['schema'].get()[0].uri
schema_filename = os.path.join(train_uri, "schema.pbtxt")
schema = tfx.utils.io_utils.parse_pbtxt_file(file_name=schema_filename,
message=schema_pb2.Schema())
Explanation: The generated artifact is just a schema.pbtxt containing a text representation of a schema_pb2.Schema protobuf:
End of explanation
tfdv.display_schema(schema)
Explanation: It can be visualized using tfdv.display_schema():
End of explanation
# Performs anomaly detection based on statistics and data schema.
validate_stats = ExampleValidator(
statistics=statistics_gen.outputs['statistics'],
schema=infer_schema.outputs['schema'])
context.run(validate_stats)
Explanation: The ExampleValidator Component
The ExampleValidator performs anomaly detection, based on the statistics from StatisticsGen and the schema from SchemaGen. It looks for problems such as missing values, values of the wrong type, or categorical values outside of the domain of acceptable values.
Create an ExampleValidator component and run it.
End of explanation
train_uri = validate_stats.outputs['anomalies'].get()[0].uri
anomalies_filename = os.path.join(train_uri, "anomalies.pbtxt")
anomalies = tfx.utils.io_utils.parse_pbtxt_file(
file_name=anomalies_filename,
message=anomalies_pb2.Anomalies())
Explanation: The output artifact of ExampleValidator is an anomalies.pbtxt file describing an anomalies_pb2.Anomalies protobuf:
End of explanation
tfdv.display_anomalies(anomalies)
Explanation: This can be visualized using the tfdv.display_anomalies() function. Did it find any anomalies?
End of explanation
_constants_module_file = 'chicago_taxi_constants.py'
%%writefile {_constants_module_file}
# Categorical features are assumed to each have a maximum value in the dataset.
MAX_CATEGORICAL_FEATURE_VALUES = [24, 31, 12]
CATEGORICAL_FEATURE_KEYS = [
'trip_start_hour', 'trip_start_day', 'trip_start_month',
'pickup_census_tract', 'dropoff_census_tract', 'pickup_community_area',
'dropoff_community_area'
]
DENSE_FLOAT_FEATURE_KEYS = ['trip_miles', 'fare', 'trip_seconds']
# Number of buckets used by tf.transform for encoding each feature.
FEATURE_BUCKET_COUNT = 10
BUCKET_FEATURE_KEYS = [
'pickup_latitude', 'pickup_longitude', 'dropoff_latitude',
'dropoff_longitude'
]
# Number of vocabulary terms used for encoding VOCAB_FEATURES by tf.transform
VOCAB_SIZE = 1000
# Count of out-of-vocab buckets in which unrecognized VOCAB_FEATURES are hashed.
OOV_SIZE = 10
VOCAB_FEATURE_KEYS = [
'payment_type',
'company',
]
# Keys
LABEL_KEY = 'tips'
FARE_KEY = 'fare'
def transformed_name(key):
return key + '_xf'
Explanation: The Transform Component
The Transform component performs data transformations and feature engineering. The results include an input TensorFlow graph which is used during both training and serving to preprocess the data before training or inference. This graph becomes part of the SavedModel that is the result of model training. Since the same input graph is used for both training and serving, the preprocessing will always be the same, and only needs to be written once.
The Transform component requires more code than many other components because of the arbitrary complexity of the feature engineering that you may need for the data and/or model that you're working with. It requires code files to be available which define the processing needed.
Define some constants and functions for both the Transform component and the Trainer component. Define them in a Python module, in this case saved to disk using the %%writefile magic command since you are working in a notebook:
End of explanation
_transform_module_file = 'chicago_taxi_transform.py'
%%writefile {_transform_module_file}
import tensorflow_transform as tft
import tensorflow as tf
from tensorflow_transform.tf_metadata import schema_utils
from chicago_taxi_constants import *
def _transformed_names(keys):
return [transformed_name(key) for key in keys]
# Tf.Transform considers these features as "raw"
def _get_raw_feature_spec(schema):
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _gzip_reader_fn(filenames):
Small utility returning a record reader that can read gzip'ed files.
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _fill_in_missing(x):
Replace missing values in a SparseTensor.
Fills in missing values of `x` with '' or 0, and converts to a dense tensor.
Args:
x: A `SparseTensor` of rank 2. Its dense shape should have size at most 1
in the second dimension.
Returns:
A rank 1 tensor where missing values of `x` have been filled in.
default_value = '' if x.dtype == tf.string else 0
return tf.squeeze(
tf.sparse.to_dense(
tf.SparseTensor(x.indices, x.values, [x.dense_shape[0], 1]),
default_value),
axis=1)
def preprocessing_fn(inputs):
tf.transform's callback function for preprocessing inputs.
Args:
inputs: map from feature keys to raw not-yet-transformed features.
Returns:
Map from string feature key to transformed feature operations.
outputs = {}
for key in DENSE_FLOAT_FEATURE_KEYS:
# Preserve this feature as a dense float, setting nan's to the mean.
outputs[transformed_name(key)] = tft.scale_to_z_score(
_fill_in_missing(inputs[key]))
for key in VOCAB_FEATURE_KEYS:
# Build a vocabulary for this feature.
outputs[transformed_name(key)] = tft.compute_and_apply_vocabulary(
_fill_in_missing(inputs[key]),
top_k=VOCAB_SIZE,
num_oov_buckets=OOV_SIZE)
for key in BUCKET_FEATURE_KEYS:
outputs[transformed_name(key)] = tft.bucketize(
_fill_in_missing(inputs[key]), FEATURE_BUCKET_COUNT,
always_return_num_quantiles=False)
for key in CATEGORICAL_FEATURE_KEYS:
outputs[transformed_name(key)] = _fill_in_missing(inputs[key])
# Was this passenger a big tipper?
taxi_fare = _fill_in_missing(inputs[FARE_KEY])
tips = _fill_in_missing(inputs[LABEL_KEY])
outputs[transformed_name(LABEL_KEY)] = tf.where(
tf.math.is_nan(taxi_fare),
tf.cast(tf.zeros_like(taxi_fare), tf.int64),
# Test if the tip was > 20% of the fare.
tf.cast(
tf.greater(tips, tf.multiply(taxi_fare, tf.constant(0.2))), tf.int64))
return outputs
Explanation: Now define a module containing the preprocessing_fn() function that will be passed to the Transform component:
End of explanation
# Performs transformations and feature engineering in training and serving.
transform = Transform(
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
module_file=_transform_module_file)
context.run(transform)
Explanation: Create and run the Transform component, referring to the files that were created above.
End of explanation
transform.outputs
Explanation: The Transform component has 2 types of outputs:
* transform_graph is the graph that can perform the preprocessing operations (this graph will be included in the serving and evaluation models).
* transformed_examples represents the preprocessed training and evaluation data.
End of explanation
train_uri = transform.outputs['transform_graph'].get()[0].uri
os.listdir(train_uri)
Explanation: Take a peek at the transform_graph artifact. It points to a directory containing 3 subdirectories.
End of explanation
train_uri = transform.outputs['transformed_examples'].get()[1].uri
tfrecord_filenames = [os.path.join(train_uri, name)
for name in os.listdir(train_uri)]
dataset = tf.data.TFRecordDataset(tfrecord_filenames, compression_type="GZIP")
decoder = tfdv.TFExampleDecoder()
for tfrecord in dataset.take(3):
serialized_example = tfrecord.numpy()
example = decoder.decode(serialized_example)
pp.pprint(example)
Explanation: The transform_fn subdirectory contains the actual preprocessing graph. The metadata subdirectory contains the schema of the original data. The transformed_metadata subdirectory contains the schema of the preprocessed data.
Take a look at some of the transformed examples and check that they are indeed processed as intended.
End of explanation
# Setup paths.
_trainer_module_file = 'chicago_taxi_trainer.py'
%%writefile {_trainer_module_file}
import tensorflow as tf
import tensorflow_model_analysis as tfma
import tensorflow_transform as tft
from tensorflow_transform.tf_metadata import schema_utils
from chicago_taxi_constants import *
def transformed_names(keys):
return [transformed_name(key) for key in keys]
# Tf.Transform considers these features as "raw"
def _get_raw_feature_spec(schema):
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _gzip_reader_fn(filenames):
Small utility returning a record reader that can read gzip'ed files.
return tf.data.TFRecordDataset(
filenames,
compression_type='GZIP')
def _build_estimator(config, hidden_units=None, warm_start_from=None):
Build an estimator for predicting taxi tips
Args:
config: tf.estimator.RunConfig defining the runtime environment for the
estimator (including model_dir).
hidden_units: [int], the layer sizes of the DNN (input layer first)
warm_start_from: Optional directory to warm start from.
Returns:
The estimator that will be used for training and eval.
real_valued_columns = [
tf.feature_column.numeric_column(key, shape=())
for key in transformed_names(DENSE_FLOAT_FEATURE_KEYS)
]
categorical_columns = [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=VOCAB_SIZE + OOV_SIZE, default_value=0)
for key in transformed_names(VOCAB_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key, num_buckets=FEATURE_BUCKET_COUNT, default_value=0)
for key in transformed_names(BUCKET_FEATURE_KEYS)
]
categorical_columns += [
tf.feature_column.categorical_column_with_identity(
key,
num_buckets=num_buckets,
default_value=0) for key, num_buckets in zip(
transformed_names(CATEGORICAL_FEATURE_KEYS),
MAX_CATEGORICAL_FEATURE_VALUES)
]
return tf.estimator.DNNLinearCombinedRegressor(
config=config,
linear_feature_columns=categorical_columns,
dnn_feature_columns=real_valued_columns,
dnn_hidden_units=hidden_units or [100, 70, 50, 25],
warm_start_from=warm_start_from)
def _example_serving_receiver_fn(tf_transform_graph, schema):
Build the serving in inputs.
Args:
tf_transform_graph: A TFTransformOutput.
schema: the schema of the input data.
Returns:
Tensorflow graph which parses examples, applying tf-transform to them.
raw_feature_spec = _get_raw_feature_spec(schema)
raw_feature_spec.pop(LABEL_KEY)
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
transformed_features = tf_transform_graph.transform_raw_features(
serving_input_receiver.features)
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
def _eval_input_receiver_fn(tf_transform_graph, schema):
Build everything needed for the tf-model-analysis to run the model.
Args:
tf_transform_graph: A TFTransformOutput.
schema: the schema of the input data.
Returns:
EvalInputReceiver function, which contains:
- Tensorflow graph which parses raw untransformed features, applies the
tf-transform preprocessing operators.
- Set of raw, untransformed features.
- Label against which predictions will be compared.
# Notice that the inputs are raw features, not transformed features here.
raw_feature_spec = _get_raw_feature_spec(schema)
serialized_tf_example = tf.compat.v1.placeholder(
dtype=tf.string, shape=[None], name='input_example_tensor')
# Add a parse_example operator to the tensorflow graph, which will parse
# raw, untransformed, tf examples.
features = tf.io.parse_example(serialized_tf_example, raw_feature_spec)
# Now that we have our raw examples, process them through the tf-transform
# function computed during the preprocessing step.
transformed_features = tf_transform_graph.transform_raw_features(
features)
# The key name MUST be 'examples'.
receiver_tensors = {'examples': serialized_tf_example}
# NOTE: Model is driven by transformed features (since training works on the
# materialized output of TFT, but slicing will happen on raw features.
features.update(transformed_features)
return tfma.export.EvalInputReceiver(
features=features,
receiver_tensors=receiver_tensors,
labels=transformed_features[transformed_name(LABEL_KEY)])
def _input_fn(filenames, tf_transform_graph, batch_size=200):
Generates features and labels for training or evaluation.
Args:
filenames: [str] list of CSV files to read data from.
tf_transform_graph: A TFTransformOutput.
batch_size: int First dimension size of the Tensors returned by input_fn
Returns:
A (features, indices) tuple where features is a dictionary of
Tensors, and indices is a single Tensor of label indices.
transformed_feature_spec = (
tf_transform_graph.transformed_feature_spec().copy())
dataset = tf.data.experimental.make_batched_features_dataset(
filenames, batch_size, transformed_feature_spec, reader=_gzip_reader_fn)
transformed_features = dataset.make_one_shot_iterator().get_next()
# We pop the label because we do not want to use it as a feature while we're
# training.
return transformed_features, transformed_features.pop(
transformed_name(LABEL_KEY))
# TFX will call this function
def trainer_fn(hparams, schema):
Build the estimator using the high level API.
Args:
hparams: Holds hyperparameters used to train the model as name/value pairs.
schema: Holds the schema of the training examples.
Returns:
A dict of the following:
- estimator: The estimator that will be used for training and eval.
- train_spec: Spec for training.
- eval_spec: Spec for eval.
- eval_input_receiver_fn: Input function for eval.
# Number of nodes in the first layer of the DNN
first_dnn_layer_size = 100
num_dnn_layers = 4
dnn_decay_factor = 0.7
train_batch_size = 40
eval_batch_size = 40
tf_transform_graph = tft.TFTransformOutput(hparams.transform_output)
train_input_fn = lambda: _input_fn(
hparams.train_files,
tf_transform_graph,
batch_size=train_batch_size)
eval_input_fn = lambda: _input_fn(
hparams.eval_files,
tf_transform_graph,
batch_size=eval_batch_size)
train_spec = tf.estimator.TrainSpec(
train_input_fn,
max_steps=hparams.train_steps)
serving_receiver_fn = lambda: _example_serving_receiver_fn(
tf_transform_graph, schema)
exporter = tf.estimator.FinalExporter('chicago-taxi', serving_receiver_fn)
eval_spec = tf.estimator.EvalSpec(
eval_input_fn,
steps=hparams.eval_steps,
exporters=[exporter],
name='chicago-taxi-eval')
run_config = tf.estimator.RunConfig(
save_checkpoints_steps=999, keep_checkpoint_max=1)
run_config = run_config.replace(model_dir=hparams.serving_model_dir)
estimator = _build_estimator(
# Construct layers sizes with exponetial decay
hidden_units=[
max(2, int(first_dnn_layer_size * dnn_decay_factor**i))
for i in range(num_dnn_layers)
],
config=run_config,
warm_start_from=hparams.warm_start_from)
# Create an input receiver for TFMA processing
receiver_fn = lambda: _eval_input_receiver_fn(
tf_transform_graph, schema)
return {
'estimator': estimator,
'train_spec': train_spec,
'eval_spec': eval_spec,
'eval_input_receiver_fn': receiver_fn
}
Explanation: The Trainer Component
The Trainer component trains models using TensorFlow.
Create a Python module containing a trainer_fn function, which must return an estimator. If you prefer creating a Keras model, you can do so and then convert it to an estimator using keras.model_to_estimator().
End of explanation
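As a hedged sketch of the Keras route mentioned above (the input shape, layer sizes, and names here are illustrative assumptions, not the taxi model): a compiled Keras model can be wrapped with tf.keras.estimator.model_to_estimator and returned from trainer_fn in place of the canned estimator.
import tensorflow as tf

def _build_keras_estimator(config):
    # Sketch only: a tiny Keras model converted to an estimator.
    inputs = tf.keras.Input(shape=(3,), name='dense_input')
    hidden = tf.keras.layers.Dense(16, activation='relu')(inputs)
    outputs = tf.keras.layers.Dense(1, activation='sigmoid')(hidden)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return tf.keras.estimator.model_to_estimator(keras_model=model, config=config)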
# Uses user-provided Python function that implements a model using TensorFlow.
trainer = Trainer(
module_file=_trainer_module_file,
transformed_examples=transform.outputs['transformed_examples'],
schema=infer_schema.outputs['schema'],
transform_graph=transform.outputs['transform_graph'],
train_args=trainer_pb2.TrainArgs(num_steps=10000),
eval_args=trainer_pb2.EvalArgs(num_steps=5000))
context.run(trainer)
Explanation: Create and run the Trainer component.
End of explanation
train_uri = trainer.outputs['model'].get()[0].uri
serving_model_path = os.path.join(train_uri, 'serving_model_dir', 'export', 'chicago-taxi')
latest_serving_model_path = os.path.join(serving_model_path, max(os.listdir(serving_model_path)))
exported_model = tf.saved_model.load(latest_serving_model_path)
exported_model.graph.get_operations()[:10] + ["..."]
Explanation: Take a peek at the trained model which was exported from Trainer.
End of explanation
%load_ext tensorboard
%tensorboard --bind_all --logdir {os.path.join(train_uri, 'serving_model_dir')}
Explanation: Analyze Training with TensorBoard
Use TensorBoard to analyze the model training that was done in Trainer, and see how well our model trained.
End of explanation
# Uses TFMA to compute a evaluation statistics over features of a model.
model_analyzer = Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
feature_slicing_spec=evaluator_pb2.FeatureSlicingSpec(specs=[
evaluator_pb2.SingleSlicingSpec(
column_for_slicing=['weekday'])
]))
context.run(model_analyzer)
model_analyzer.outputs
Explanation: The Evaluator Component
The Evaluator component analyzes model performance using the TensorFlow Model Analysis library. It runs inference requests on particular subsets of the test dataset, based on which slices are defined by the developer. Knowing which slices should be analyzed requires domain knowledge of what is important in this particular use case or domain.
Create and run an Evaluator component.
End of explanation
import csv
BASE_DIR = tempfile.mkdtemp()
reader = csv.DictReader(open(_data_filepath))
examples = []
for line in reader:
example = tf.train.Example()
for feature in schema.feature:
key = feature.name
if len(line[key]) > 0:
if feature.type == schema_pb2.FLOAT:
example.features.feature[key].float_list.value[:] = [float(line[key])]
elif feature.type == schema_pb2.INT:
example.features.feature[key].int64_list.value[:] = [int(line[key])]
elif feature.type == schema_pb2.BYTES:
example.features.feature[key].bytes_list.value[:] = [line[key].encode('utf8')]
else:
if feature.type == schema_pb2.FLOAT:
example.features.feature[key].float_list.value[:] = []
elif feature.type == schema_pb2.INT:
example.features.feature[key].int64_list.value[:] = []
elif feature.type == schema_pb2.BYTES:
example.features.feature[key].bytes_list.value[:] = []
examples.append(example)
TFRecord_file = os.path.join(BASE_DIR, 'train_data.rio')
with tf.io.TFRecordWriter(TFRecord_file) as writer:
for example in examples:
writer.write(example.SerializeToString())
writer.flush()
writer.close()
!ls {TFRecord_file}
Explanation: Use the Evaluator results to generate model performance data which can be visualized. First create evaluation input data.
End of explanation
def run_and_render(eval_model=None, slice_list=None, slice_idx=0):
Runs the model analysis and renders the slicing metrics
Args:
eval_model: An instance of tf.saved_model saved with evaluation data
slice_list: A list of tfma.slicer.SingleSliceSpec giving the slices
slice_idx: An integer index into slice_list specifying the slice to use
Returns:
A SlicingMetricsViewer object if in Jupyter notebook; None if in Colab.
eval_result = tfma.run_model_analysis(eval_shared_model=eval_model,
data_location=TFRecord_file,
file_format='tfrecords',
slice_spec=slice_list,
output_path='sample_data',
extractors=None)
return tfma.view.render_slicing_metrics(eval_result, slicing_spec=slice_list[slice_idx] if slice_list else None)
# Load the TFMA results for the first training run
# This will take a minute
eval_model_base_dir_0 = os.path.join(train_uri, 'eval_model_dir')
eval_model_dir_0 = os.path.join(eval_model_base_dir_0,
max(os.listdir(eval_model_base_dir_0)))
eval_shared_model_0 = tfma.default_eval_shared_model(
eval_saved_model_path=eval_model_dir_0)
# Slice our data by the trip_start_hour feature
slices = [tfma.slicer.SingleSliceSpec(columns=['trip_start_hour'])]
run_and_render(eval_model=eval_shared_model_0, slice_list=slices, slice_idx=0)
Explanation: Run the analysis of a particular slice of data.
End of explanation
evaluation_uri = model_analyzer.outputs['output'].get()[0].uri
eval_result = tfma.load_eval_result(evaluation_uri)
print('{}\n\nslicing_metrics:\n'.format(eval_result))
for metric in eval_result.slicing_metrics:
pp.pprint(metric)
Explanation: Print the slicing metrics.
End of explanation
eval_path_uri = model_analyzer.outputs['output'].get()[0].uri
tfrecord_filenames = [os.path.join(eval_path_uri, name)
for name in os.listdir(eval_path_uri)]
pp.pprint(tfrecord_filenames)
dataset = tf.data.TFRecordDataset(tfrecord_filenames)
pp.pprint(dataset)
Explanation: Examine the output data.
End of explanation
# Performs quality validation of a candidate model (compared to a baseline).
model_validator = ModelValidator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'])
context.run(model_validator)
Explanation: The ModelValidator Component
The ModelValidator component performs validation of your candidate model compared to the previously deployed model (if any) using criteria that you define, or to a baseline value. If the new model scores better than the previous model it will be "blessed" by ModelValidator, approving it for deployment.
End of explanation
model_validator.outputs
blessing_uri = model_validator.outputs['blessing'].get()[0].uri
!ls -l {blessing_uri}
Explanation: Examine the output of ModelValidator.
End of explanation
# Setup serving path
_serving_model_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/chicago_taxi_simple')
Explanation: The Pusher Component
The Pusher component checks whether a model has been "blessed", and if so, deploys it to production by pushing the model to a well known file destination.
End of explanation
# Checks whether the model passed the validation steps and pushes the model
# to a file destination if check passed.
pusher = Pusher(
model=trainer.outputs['model'],
model_blessing=model_validator.outputs['blessing'],
push_destination=pusher_pb2.PushDestination(
filesystem=pusher_pb2.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
context.run(pusher)
Explanation: Create and run a Pusher component.
End of explanation
pusher.outputs
push_uri = pusher.outputs['pushed_model'].get()[0].uri
latest_version = max(os.listdir(push_uri))
latest_version_path = os.path.join(push_uri, latest_version)
model = tf.saved_model.load(latest_version_path)
for item in model.signatures.items():
pp.pprint(item)
Explanation: Examine the output of Pusher.
End of explanation
latest_pushed_model = os.path.join(_serving_model_dir, max(os.listdir(_serving_model_dir)))
!saved_model_cli show --dir {latest_pushed_model} --all
Explanation: TensorFlow Serving
Now that we have a trained model that has been blessed by ModelValidator, and pushed to our deployment target by Pusher, we can load it into TensorFlow Serving and start serving inference requests.
Examine your saved model
We'll use the command line utility saved_model_cli to look at the MetaGraphDefs (the models) and SignatureDefs (the methods you can call) in our SavedModel. See this discussion of the SavedModel CLI in the TensorFlow Guide.
End of explanation
# This is the same as you would do from your command line, but without the [arch=amd64], and no sudo
# You would instead do:
# echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list && \
# curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list && \
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
!apt update
Explanation: That tells us a lot about our model! In this case we just trained our model, so we already know the inputs and outputs, but if we didn't this would be important information. It doesn't tell us everything, but it's a great start.
Add TensorFlow Serving distribution URI as a package source:
We're preparing to install TensorFlow Serving using Aptitude since this Colab runs in a Debian environment. We'll add the tensorflow-model-server package to the list of packages that Aptitude knows about. Note that we're running as root.
Note: This example is running TensorFlow Serving natively, but you can also run it in a Docker container, which is one of the easiest ways to get started using TensorFlow Serving.
End of explanation
!apt-get install tensorflow-model-server
Explanation: Install TensorFlow Serving
This is all you need - one command line! Please note that running TensorFlow Serving in a Docker Container is also a great option, with a lot of advantages.
End of explanation
os.environ["MODEL_DIR"] = os.path.split(latest_pushed_model)[0]
%%bash --bg
nohup tensorflow_model_server \
--rest_api_port=8501 \
--model_name=chicago_taxi_simple \
--model_base_path="${MODEL_DIR}" >server.log 2>&1
!tail server.log
Explanation: Start running TensorFlow Serving
This is where we start running TensorFlow Serving and load our model. After it loads we can start making inference requests using REST. There are some important parameters:
rest_api_port: The port that you'll use for REST requests.
model_name: You'll use this in the URL of REST requests. It can be anything.
model_base_path: This is the path to the directory where you've saved your model. Note that this base_path should not include the model version directory, which is why we split it off below.
End of explanation
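Before sending inference requests, it can help to confirm that the server actually loaded the model. A minimal sketch (assuming the same port and model name used when starting the server above) queries the TensorFlow Serving model-status REST endpoint:
import requests

# Query the model status endpoint; the model should report state "AVAILABLE".
status = requests.get('http://localhost:8501/v1/models/chicago_taxi_simple')
print(status.json())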
import base64
import json
import requests
from tensorflow_transform.tf_metadata import dataset_metadata
from tensorflow_transform.tf_metadata import dataset_schema
from tensorflow_transform.tf_metadata import schema_utils
from tensorflow_transform import coders as tft_coders
from chicago_taxi_constants import *
def _get_raw_feature_spec(schema):
return schema_utils.schema_as_feature_spec(schema).feature_spec
def _make_proto_coder(schema):
raw_feature_spec = _get_raw_feature_spec(schema)
raw_schema = dataset_schema.from_feature_spec(raw_feature_spec)
return tft_coders.ExampleProtoCoder(raw_schema)
def _make_csv_coder(schema, column_names):
"""Return a coder for tf.transform to read csv files."""
raw_feature_spec = _get_raw_feature_spec(schema)
parsing_schema = dataset_schema.from_feature_spec(raw_feature_spec)
return tft_coders.CsvCoder(column_names, parsing_schema)
def make_serialized_examples(examples_csv_file, num_examples, schema):
"""Parses examples from the CSV file and returns serialized proto examples."""
filtered_features = [
feature for feature in schema.feature if feature.name != LABEL_KEY
]
del schema.feature[:]
schema.feature.extend(filtered_features)
columns = tfx.utils.io_utils.load_csv_column_names(examples_csv_file)
csv_coder = _make_csv_coder(schema, columns)
proto_coder = _make_proto_coder(schema)
input_file = open(examples_csv_file, 'r')
input_file.readline() # skip header line
serialized_examples = []
for _ in range(num_examples):
one_line = input_file.readline()
if not one_line:
print('End of example file reached')
break
one_example = csv_coder.decode(one_line)
serialized_example = proto_coder.encode(one_example)
serialized_examples.append(serialized_example)
return serialized_examples
Explanation: Prepare data for inference requests
Our example data is stored in a CSV file on disk.
We first have to read the file and decode the examples from CSV, and then encode these as Example protos to feed to Tensorflow Serving.
A few notes:
The regress and classify APIs are higher-level and thus encouraged to be used over predict - here we use the predict API to showcase the more involved route.
While the regress and classify APIs expect and can parse tf.Example, the predict API expects arbitrary TensorProto. This means we will have to construct the tf.Example proto using coders from Tensorflow Transform.
The REST API surface accepts JSON, which uses UTF-8 encoding. Thus to access the model via REST, we will encode our serialized tf.Example using Base64.
This is quite complicated and in general, if using the predict API, you should strongly consider using the gRPC API surface.
End of explanation
def do_inference(server_addr, model_name, serialized_examples):
"""Sends requests to the model and prints the results.

Args:
  server_addr: network address of model server in "host:port" format
  model_name: name of the model as understood by the model server
  serialized_examples: serialized examples of data to do inference on
"""
parsed_server_addr = server_addr.split(':')
host=parsed_server_addr[0]
port=parsed_server_addr[1]
json_examples = []
for serialized_example in serialized_examples:
# The encoding follows the guidelines in:
# https://www.tensorflow.org/tfx/serving/api_rest
example_bytes = base64.b64encode(serialized_example).decode('utf-8')
predict_request = '{ "b64": "%s" }' % example_bytes
json_examples.append(predict_request)
json_request = '{ "instances": [' + ','.join(map(str, json_examples)) + ']}'
server_url = 'http://' + host + ':' + port + '/v1/models/' + model_name + ':predict'
response = requests.post(
server_url, data=json_request, timeout=5.0)
response.raise_for_status()
prediction = response.json()
print(json.dumps(prediction, indent=4))
serialized_examples = make_serialized_examples(
examples_csv_file=_data_filepath,
num_examples=3,
schema=schema)
do_inference(server_addr='127.0.0.1:8501',
model_name='chicago_taxi_simple',
serialized_examples=serialized_examples)
Explanation: Perform Inference on example data
Prepare the example data using the utility defined above and batch all requests together to send a single REST API call to Tensorflow Serving.
End of explanation |
8,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Source
Step1: Load Data
Step2: Convert Spark Dataframe to Pandas Dataframe
Step3: Vectorize the features
Step4: Fit Linear Regression Model
Step5: View model summary
Step6: Predict
Step7: Evaluate
Step8: Build a pipeline
Step9: Save the pipeline to disk to persist the model
Step10: Load the persisted model from the disk
Step11: Tune the model | Python Code:
!ls -ltr /data
spark
Explanation: Data Source: https://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant
Features consist of hourly average ambient variables
Temperature (T) in the range 1.81°C and 37.11°C,
Ambient Pressure (AP) in the range 992.89-1033.30 millibar,
Relative Humidity (RH) in the range 25.56% to 100.16%
Exhaust Vacuum (V) in the range 25.36-81.56 cm Hg
Net hourly electrical energy output (EP) 420.26-495.76 MW
The averages are taken from various sensors located around the plant that record the ambient variables every second. The variables are given without normalization.
Dataset Information:
The dataset contains 9568 data points collected from a Combined Cycle Power Plant over 6 years (2006-2011), when the power plant was set to work with full load. Features consist of hourly average ambient variables Temperature (T), Ambient Pressure (AP), Relative Humidity (RH) and Exhaust Vacuum (V) to predict the net hourly electrical energy output (EP) of the plant.
A combined cycle power plant (CCPP) is composed of gas turbines (GT), steam turbines (ST) and heat recovery steam generators. In a CCPP, the electricity is generated by gas and steam turbines, which are combined in one cycle, and is transferred from one turbine to another. While the Vacuum is collected from and has an effect on the Steam Turbine, the other three ambient variables affect the GT performance.
End of explanation
df = spark.read.format("csv").option("header","true")\
.option("inferSchema","true").load("/data/Combined_Cycle_Power_Plant.csv")
df.show()
df.cache()
Explanation: Load Data
End of explanation
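Before converting to Pandas or modeling, it can be useful to confirm the inferred column types and basic statistics. A small optional sketch:
# Optional inspection: check the inferred schema and summary statistics.
df.printSchema()
df.describe().show()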
df.limit(10).toPandas().head()
Explanation: Convert Spark Dataframe to Pandas Dataframe
End of explanation
from pyspark.ml.feature import *
vectorizer = VectorAssembler()
vectorizer.setInputCols(["AT", "V", "AP", "RH"])
vectorizer.setOutputCol("features")
df_vect = vectorizer.transform(df)
df_vect.show(10, False)
print(vectorizer.explainParams())
Explanation: Vectorize the features
End of explanation
from pyspark.ml.regression import LinearRegression
lr = LinearRegression()
print(lr.explainParams())
lr.setLabelCol("EP")
lr.setFeaturesCol("features")
model = lr.fit(df_vect)
type(model)
Explanation: Fit Linear Regression Model
End of explanation
print("R2:", model.summary.r2)
print("Intercept: ", model.intercept, "Coefficients", model.coefficients)
Explanation: View model summary
End of explanation
df_pred = model.transform(df_vect)
df_pred.show()
Explanation: Predict
End of explanation
from pyspark.ml.evaluation import RegressionEvaluator
evaluator = RegressionEvaluator()
print(evaluator.explainParams())
evaluator = RegressionEvaluator(labelCol = "EP",
predictionCol = "prediction",
metricName = "rmse")
evaluator.evaluate(df_pred)
Explanation: Evaluate
End of explanation
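The same evaluator class can report other metrics by changing metricName. A quick sketch computing R-squared on the training predictions:
# Sketch: evaluate R^2 with the same evaluator class, just a different metric.
evaluator_r2 = RegressionEvaluator(labelCol="EP", predictionCol="prediction",
                                   metricName="r2")
evaluator_r2.evaluate(df_pred)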
from pyspark.ml.pipeline import Pipeline, PipelineModel
pipeline = Pipeline()
print(pipeline.explainParams())
pipeline.setStages([vectorizer, lr])
pipelineModel = pipeline.fit(df)
pipeline.getStages()
lr_model = pipelineModel.stages[1]
lr_model.coefficients
pipelineModel.transform(df).show()
evaluator.evaluate(pipelineModel.transform(df))
Explanation: Build a pipeline
End of explanation
pipelineModel.save("/tmp/lr-pipeline")
!tree /tmp/lr-pipeline
Explanation: Save the pipeline to disk to persist the model
End of explanation
saved_model = PipelineModel.load("/tmp/lr-pipeline")
saved_model.stages[1].coefficients
saved_model.transform(df).show()
df_train, df_test = df.randomSplit(weights=[0.7, 0.3], seed = 200)
pipelineModel = pipeline.fit(df_train)
evaluator.evaluate(pipelineModel.transform(df_test))
Explanation: Load the persisted model from the disk
End of explanation
from pyspark.ml.tuning import ParamGridBuilder, TrainValidationSplit
paramGrid = ParamGridBuilder()\
.addGrid(lr.regParam, [0.1, 0.01]) \
.addGrid(lr.fitIntercept, [False, True])\
.addGrid(lr.elasticNetParam, [0.0, 0.5, 1.0])\
.build()
# In this case the estimator is simply the linear regression.
# A TrainValidationSplit requires an Estimator, a set of Estimator ParamMaps, and an Evaluator.
tvs = TrainValidationSplit(estimator=lr,
estimatorParamMaps=paramGrid,
evaluator=evaluator,
trainRatio=0.8)
tuned_model = tvs.fit(vectorizer.transform(df_train))
tuned_model.bestModel, tuned_model.validationMetrics
df_test_pred = tuned_model.transform(vectorizer.transform(df_test))
df_test_pred.show()
evaluator.evaluate(df_test_pred)
Explanation: Tune the model
End of explanation |
8,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Recommenders
Step2: Read the data
Step3: Build vocabularies to convert user ids and movie titles into integer indices for embedding layers
Step4: Define a model
We can define a TFRS model by inheriting from tfrs.Model and implementing the compute_loss method
Step5: Define the two models and the retrieval task.
Step6: Fit and evaluate it.
Create the model, train it, and generate predictions | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -q tensorflow-recommenders
!pip install -q --upgrade tensorflow-datasets
from typing import Dict, Text
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
import tensorflow_recommenders as tfrs
Explanation: TensorFlow Recommenders: Quickstart
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/recommenders/quickstart"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/recommenders/blob/main/docs/examples/quickstart.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/recommenders/blob/main/docs/examples/quickstart.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/recommenders/docs/examples/quickstart.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this tutorial, we build a simple matrix factorization model using the MovieLens 100K dataset with TFRS. We can use this model to recommend movies for a given user.
Import TFRS
First, install and import TFRS:
End of explanation
# Ratings data.
ratings = tfds.load('movielens/100k-ratings', split="train")
# Features of all the available movies.
movies = tfds.load('movielens/100k-movies', split="train")
# Select the basic features.
ratings = ratings.map(lambda x: {
"movie_title": x["movie_title"],
"user_id": x["user_id"]
})
movies = movies.map(lambda x: x["movie_title"])
Explanation: Read the data
End of explanation
user_ids_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
user_ids_vocabulary.adapt(ratings.map(lambda x: x["user_id"]))
movie_titles_vocabulary = tf.keras.layers.StringLookup(mask_token=None)
movie_titles_vocabulary.adapt(movies)
Explanation: Build vocabularies to convert user ids and movie titles into integer indices for embedding layers:
End of explanation
class MovieLensModel(tfrs.Model):
# We derive from a custom base class to help reduce boilerplate. Under the hood,
# these are still plain Keras Models.
def __init__(
self,
user_model: tf.keras.Model,
movie_model: tf.keras.Model,
task: tfrs.tasks.Retrieval):
super().__init__()
# Set up user and movie representations.
self.user_model = user_model
self.movie_model = movie_model
# Set up a retrieval task.
self.task = task
def compute_loss(self, features: Dict[Text, tf.Tensor], training=False) -> tf.Tensor:
# Define how the loss is computed.
user_embeddings = self.user_model(features["user_id"])
movie_embeddings = self.movie_model(features["movie_title"])
return self.task(user_embeddings, movie_embeddings)
Explanation: Define a model
We can define a TFRS model by inheriting from tfrs.Model and implementing the compute_loss method:
End of explanation
# Define user and movie models.
user_model = tf.keras.Sequential([
user_ids_vocabulary,
tf.keras.layers.Embedding(user_ids_vocabulary.vocab_size(), 64)
])
movie_model = tf.keras.Sequential([
movie_titles_vocabulary,
tf.keras.layers.Embedding(movie_titles_vocabulary.vocab_size(), 64)
])
# Define your objectives.
task = tfrs.tasks.Retrieval(metrics=tfrs.metrics.FactorizedTopK(
movies.batch(128).map(movie_model)
)
)
Explanation: Define the two models and the retrieval task.
End of explanation
# Create a retrieval model.
model = MovieLensModel(user_model, movie_model, task)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.5))
# Train for 3 epochs.
model.fit(ratings.batch(4096), epochs=3)
# Use brute-force search to set up retrieval using the trained representations.
index = tfrs.layers.factorized_top_k.BruteForce(model.user_model)
index.index_from_dataset(
movies.batch(100).map(lambda title: (title, model.movie_model(title))))
# Get some recommendations.
_, titles = index(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")
Explanation: Fit and evaluate it.
Create the model, train it, and generate predictions:
End of explanation |
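As a possible next step (a sketch, not part of the original quickstart), the trained retrieval index can be exported with tf.saved_model and reloaded for serving; the export path below is illustrative:
import os
import tempfile

# Export the BruteForce index, then load it back and query it.
export_path = os.path.join(tempfile.mkdtemp(), "retrieval_index")
tf.saved_model.save(index, export_path)
loaded = tf.saved_model.load(export_path)
_, titles = loaded(np.array(["42"]))
print(f"Top 3 recommendations for user 42: {titles[0, :3]}")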
8,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Control Structures
A control statement is a statement that determines the control flow of a set of instructions.
Sequence control is an implicit form of control in which instructions are executed in the order that they are written.
Selection control is provided by a control statement that selectively executes instructions.
Iterative control is provided by an iterative control statement that repeatedly executes instructions.
Boolean Expressions
Boolean is a specific data type consisting of True and False in Python.
A Boolean expression is an expression that evaluates to a Boolean value.
One way of producing Boolean values is by comparing values.
Relational expressions are a type of Boolean expression, since they evaluate to a Boolean result.
Step1: We know that we can compare numbers, but Python also lets us compare string values based on their character encoding.
Step2: Another way to get Boolean values is by checking whether a given value is a member of a collection
Step3: Boolean (logical) operators are denoted by and, or, and not in Python. They follow basic logic,
Step4: The Boolean operators give us more complex comparison statements, which eventually lead us to better control structures.
Step5: Selection Control
A selection control statement is a control statement providing selective execution of instructions.
An if statement is a selection control statement based on the value of a given Boolean expression.
Syntax
Step6: Apply it!
Step7: However there is a better way to do this using an additional keyword
Step8: Apply It!
Step9: As long as the condition of a while statement is true, the statements within the loop are (re)executed. | Python Code:
num = 10 # Assignment Operator
num == 12 # Comparison operator
Explanation: Control Structures
A control statement is a statement that determines the control flow of a set of instructions.
Sequence control is an implicit form of control in which instructions are executed in the order that they are written.
Selection control is provided by a control statement that selectively executes instructions.
Iterative control is provided by an iterative control statement that repeatedly executes instructions.
Boolean Expressions
Boolean is a specific data type consisting of True and False in Python.
A Boolean expression is an expression that evaluates to a Boolean value.
One way of producing Boolean values is by comparing values.
Relational expressions are a type of Boolean expression, since they evaluate to a Boolean result.
End of explanation
10 == 20
print(type('2'))
print('2' < '9')
if "Aliya" > "Alican":
print("Aliya is the best!")
else:
print("No, Aliya is not the best!")
'Hello' == "hello"
'Hello' > 'Zebra'
Explanation: We know that we can compare numbers, but Python also lets us compare string values based on their character encoding.
End of explanation
'Dr.' in 'Dr. Madison'
10 not in (10, 20, 30)
Explanation: Another way to get Boolean values is by checking whether a given value is a member of a collection:
End of explanation
p = False
r = True
p and r
p or r
not (r and (not p))
Explanation: Boolean (logical) operators are denoted by and, or, and not in Python. They follow the rules of basic logic:
End of explanation
num = 15
(1 <= num <= 10)
# The above is equal to
1 <= num and num <= 10
(10 < 0) and (10 > 2)
not(True) and False
not(True and False)
name = 'Ann'
name in ('Thomas', 'MaryAnn', 'Thomas')
type(('MarryAnn'))
Explanation: The Boolean operators give us more complex comparison statements, which eventually lead us to better control structures.
End of explanation
if 10 < 0:
print("Yes")
grade = 66
if grade >= 70:
print('Passing Grade')
else:
print('Failing Grade')
grade = 100
if grade == 100:
print('Perfect Score!')
Explanation: Selection Control
A selection control statement is a control statement providing selective execution of instructions.
An if statement is a selection control statement based on the value of a given Boolean expression.
Syntax:
if condition:
statements
else:
statements
You don't have to include else part.
End of explanation
credits = 45
if credits >= 90:
print('Senior')
else:
if credits >= 60:
print('Junior')
else:
if credits >= 30:
print('Sophomore')
else:
if credits >= 1:
print('Freshman')
else:
print('* No Earned Credits *')
Explanation: Apply it!
<p style=color:red>
Write a small program that converts Fahrenheit to Celsius or vice versa by getting input from the user (F/C)
</p>
Indentation is really important in Python since it does not use {} or ;
Multiway selection is possible by nested if else statements:
End of explanation
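One possible solution sketch for the temperature-conversion exercise above (variable names are illustrative):
# Ask which conversion to perform, then apply the matching formula.
choice = input('Convert to Fahrenheit or Celsius? (F/C): ')
value = float(input('Enter the temperature: '))

if choice == 'F':
    print('Result:', value * 9 / 5 + 32)
elif choice == 'C':
    print('Result:', (value - 32) * 5 / 9)
else:
    print('Please enter F or C')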
credits = 45
if credits >= 90:
print('Senior')
elif credits >= 60:
print('Junior')
elif credits >= 30:
print('Sophomore')
elif credits >= 1:
print('Freshman')
else:
print('* No Earned Credits *')
Explanation: However there is a better way to do this using an additional keyword: elif
End of explanation
# Initial variables
total = 0
i = 1
n = int(input('Enter value: '))
while i <= n:
total += i # total = total + i
i += 1
print(total)
Explanation: Apply It!
<p style=color:red>
Write a small program that prints the number of days in a given month of a given year. The output will look like this:
</p>
Test 1:
This program will determine the number of days in a given month
Enter the month (1-12): 14
*Invalid Value Entered -14*
Test 2:
This program will determine the number of days in a given month
Enter the month (1-12): 2
Please enter the year (e.g., 2010): 2000
There are 29 days in the month
<p style=color:red>
Use if and elif statements
</p>
Hint1:
<p style=color:white>
The days of the month are fixed regardless of the year, except February. <br>
Check for 2.
</p>
Hint2:
<p style=color:white>
If the year is divisible by 4 but also divisible by 100, then it is not a leap year, unless it is also divisible by 400, in which case it is.
</p>
Hint3:
<p style=color:white>
(year % 4 == 0) and (not (year % 100 == 0) or (year % 400 == 0))
</p>
Iterative Control
An iterative control statement is a control statement providing the repeated execution of a set of instructions.
Because of the repeated execution, iterative control structures are commonly referred to as “loops” and that's how I am going to name them :)
A while statement is an iterative control statement that repeatedly executes a set of statements based on a provided Boolean expression (condition).
Syntax:
while condition:
statement
End of explanation
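One possible solution sketch for the Apply It! exercise above, using if/elif and the leap-year test from Hint 3:
# Determine the number of days in a given month; February also needs the year.
month = int(input('Enter the month (1-12): '))

if month < 1 or month > 12:
    print('*Invalid Value Entered -', month, '*')
elif month in (1, 3, 5, 7, 8, 10, 12):
    print('There are 31 days in the month')
elif month in (4, 6, 9, 11):
    print('There are 30 days in the month')
else:  # month == 2
    year = int(input('Please enter the year (e.g., 2010): '))
    if (year % 4 == 0) and (not (year % 100 == 0) or (year % 400 == 0)):
        print('There are 29 days in the month')
    else:
        print('There are 28 days in the month')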
import time
n = 10
tot = 0
i = 1
while i <= n:
tot = tot + i
i = i + 1
print(tot)
time.sleep(2)
n = 100
tot = 0
while True:
tot = tot + n
n = n - 1
if n == 0:
break
print(tot)
Explanation: As long as the condition of a while statement is true, the statements within the loop are (re)executed.
End of explanation |
8,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forecasting Growth
By default, Prophet uses a linear model for its forecast. When forecasting growth, there is usually some maximum achievable point
Step1: We must specify the carrying capacity in a column cap. Here we will assume a particular value, but this would usually be set using data or expertise about the market size.
Step2: The important things to note are that cap must be specified for every row in the dataframe, and that it does not have to be constant. If the market size is growing, then cap can be an increasing sequence.
We then fit the model as before, except pass in an additional argument to specify logistic growth
Step3: We make a dataframe for future predictions as before, except we must also specify the capacity in the future. Here we keep capacity constant at the same value as in the history, and forecast 5 years into the future
Step4: The logistic function has an implicit minimum of 0, and will saturate at 0 the same way that it saturates at the capacity. It is possible to also specify a different saturating minimum.
Saturating Minimum
The logistic growth model can also handle a saturating minimum, which is specified with a column floor in the same way as the cap column specifies the maximum | Python Code:
%%R
df <- read.csv('../examples/example_wp_log_R.csv')
df = pd.read_csv('../examples/example_wp_log_R.csv')
Explanation: Forecasting Growth
By default, Prophet uses a linear model for its forecast. When forecasting growth, there is usually some maximum achievable point: total market size, total population size, etc. This is called the carrying capacity, and the forecast should saturate at this point.
Prophet allows you to make forecasts using a logistic growth trend model, with a specified carrying capacity. We illustrate this with the log number of page visits to the R (programming language) page on Wikipedia:
End of explanation
%%R
df$cap <- 8.5
df['cap'] = 8.5
Explanation: We must specify the carrying capacity in a column cap. Here we will assume a particular value, but this would usually be set using data or expertise about the market size.
End of explanation
%%R
m <- prophet(df, growth = 'logistic')
m = Prophet(growth='logistic')
m.fit(df)
Explanation: The important things to note are that cap must be specified for every row in the dataframe, and that it does not have to be constant. If the market size is growing, then cap can be an increasing sequence.
We then fit the model as before, except pass in an additional argument to specify logistic growth:
End of explanation
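As a sketch of the non-constant case (values and names are illustrative, and this is kept separate so it does not change the example's data):
import numpy as np

# Illustrative only: a capacity that grows slowly over the history.
df_growing = df.copy()
df_growing['cap'] = 8.5 + 0.0005 * np.arange(len(df_growing))
m_growing = Prophet(growth='logistic').fit(df_growing)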
%%R -w 10 -h 6 -u in
future <- make_future_dataframe(m, periods = 1826)
future$cap <- 8.5
fcst <- predict(m, future)
plot(m, fcst)
future = m.make_future_dataframe(periods=1826)
future['cap'] = 8.5
fcst = m.predict(future)
fig = m.plot(fcst)
Explanation: We make a dataframe for future predictions as before, except we must also specify the capacity in the future. Here we keep capacity constant at the same value as in the history, and forecast 5 years into the future:
End of explanation
%%R -w 10 -h 6 -u in
df$y <- 10 - df$y
df$cap <- 6
df$floor <- 1.5
future$cap <- 6
future$floor <- 1.5
m <- prophet(df, growth = 'logistic')
fcst <- predict(m, future)
plot(m, fcst)
df['y'] = 10 - df['y']
df['cap'] = 6
df['floor'] = 1.5
future['cap'] = 6
future['floor'] = 1.5
m = Prophet(growth='logistic')
m.fit(df)
fcst = m.predict(future)
fig = m.plot(fcst)
Explanation: The logistic function has an implicit minimum of 0, and will saturate at 0 the same way that it saturates at the capacity. It is possible to also specify a different saturating minimum.
Saturating Minimum
The logistic growth model can also handle a saturating minimum, which is specified with a column floor in the same way as the cap column specifies the maximum:
End of explanation |
8,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Class 6
Step1: Creating a new DataFrame
Now we'll use our cross-country income per capita data to create a new DataFrame containing growth data.
Step2: Let $y_t$ denotes income per capita for some country in some year $t$ and let $g$ denotes the average annual growth in income per capita between years 0 and $T$. $g$ is defined by
Step3: Exporting a DataFrame to csv
Use the DataFrame method to_csv(). | Python Code:
# Use the requests module to download cross country GDP per capita
url = ''
filename=''
r = requests.get(url,verify=True)
with open(filename,'wb') as newFile:
newFile.write(r.content)
# Import the cross-country GDP data into a DataFrame called incomeDf with index_col=0
# Print the first five rows of incomeDf
# Print the columns of incomeDf
# Print the number of countries represented in incomeDf
# Print the index of incomeDf
# Print the number of years of data in incomeDf
# Print the first five rows of the 'United States - USA' column of incomeDf
# Print the last five rows of the 'United States - USA' column of incomeDf
# Create a plot of income per capita from 1960 to 2011 for the US
# Create a plot of income per capita from 1960 to 2011 for another country in the dataset
# Create a new variable called income60 equal to the 1960 row from incomeDf
# Print the index of income60
# Print the average world income per capita in 1960
# Print the standard deviation in world income per capita in 1960
# Print the names of the five countries with the highest five incomes per capita in 1960
# Print the names of the five countries with the lowest five incomes per capita in 1960
# Create a new variable called income11 equal to the 2011 row from incomeDf
# Print the average world income per capita in 2011
# Print the standard deviation in world income per capita in 2011
# Print the names of the five countries with the highest five incomes per capita in 2011
# Print the names of the five countries with the lowest five incomes per capita in 2011
Explanation: Class 6: More Pandas
Objectives:
Analyze some cross-country GDP per capita data
Create a new DataFrame
Export a DataFrame to a csv file
Exercise: Cross-country income per capita statistics
Download a file called corssCountryIncomePerCapita.csv by visiting http://www.briancjenkins.com/data/international/ and following the link for: "GDP per capita (constant US 2005 PPP $, levels)"
End of explanation
# Create a DataFrame called growthDf with columns 'income 1960' and 'income 2011' equal to income per capita
# in 1960 and 2011 and an index equal to the index of income60
# Create a new column equal to the difference between 'income 2011' and 'income 1960' for each country
Explanation: Creating a new DataFrame
Now we'll use our cross-country income per capita data to create a new DataFrame containing growth data.
End of explanation
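A possible construction sketch for the DataFrame described in the comments above (assuming income60 and income11 were created in the earlier exercise):
import pandas as pd

# Build growthDf from the 1960 and 2011 cross sections.
growthDf = pd.DataFrame({'income 1960': income60, 'income 2011': income11},
                        index=income60.index)
growthDf['difference'] = growthDf['income 2011'] - growthDf['income 1960']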
# Create a new column equal to the average annual growth rate for each country between 1960 and 2011
# Print the first five rows of growthDf
# Print the names of the five countries with the highest average annual growth rates
# Print the names of the five countries with the lowest average annual growth rates
# Print the average annual growth rate of income per capita from 1960 to 2011
# Print the standard deviation of the annual growth rate of income per capita from 1960 to 2011
# Construct a scatter plot:
# Use the plt.scatter function
# income per capita in 1960 on the horizontal axis and average annual growth rate on the vertical axis
# Set the opacity of the points to something like 0.25 - 0.35
# Label the plot clearly with axis labels and a title
Explanation: Let $y_t$ denote income per capita for some country in some year $t$ and let $g$ denote the average annual growth in income per capita between years 0 and $T$. $g$ is defined by:
\begin{align}
y_T & = (1+g)^T y_0
\end{align}
which implies:
\begin{align}
g & = \left(\frac{y_T}{y_0}\right)^{1/T} - 1
\end{align}
Note that since our data are from 1960 to 2011, $T = 51$, which is also equal to len(incomeDf.index)-1.
End of explanation
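A direct translation of the formula above into code might look like the sketch below; the column names follow the exercise instructions:
# Average annual growth rate: g = (y_T / y_0)**(1/T) - 1, with T = 51 here.
T = len(incomeDf.index) - 1
growthDf['growth rate'] = (growthDf['income 2011'] / growthDf['income 1960'])**(1 / T) - 1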
# Export the growthDf DataFrame to a csv file called 'growth_data.csv'
Explanation: Exporting a DataFrame to csv
Use the DataFrame method to_csv().
End of explanation |
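A minimal call for this step might look like:
# Write the growth DataFrame to disk.
growthDf.to_csv('growth_data.csv')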
8,878 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual VS auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
Step5: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0
Step6: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape
Step7: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument which is a list
of the number factor levels for each factor.
Step8: Finally we will pick the interaction effect by passing 'A
Step9: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions
Step10: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal).
Step11: Visualize the clusters
Step12: Finally, let's investigate interaction effect by reconstructing the time
courses | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
# Eric Larson <[email protected]>
# Denis Engemannn <[email protected]>
#
# License: BSD-3-Clause
import os.path as op
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
import mne
from mne.stats import (spatio_temporal_cluster_test, f_threshold_mway_rm,
f_mway_rm, summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: Repeated measures ANOVA on source data with spatio-temporal clustering
This example illustrates how to make use of the clustering functions
for arbitrary, self-defined contrasts beyond standard t-tests. In this
case we will test if the differences in evoked responses between
stimulation modality (visual VS auditory) depend on the stimulus
location (left vs right) for a group of subjects (simulated here
using one subject's data). For this purpose we will compute an
interaction effect using a repeated measures ANOVA. The multiple
comparisons problem is addressed with a cluster-level permutation test
across space and time.
End of explanation
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path / 'subjects'
src_fname = subjects_dir / 'fsaverage' / 'bem' / 'fsaverage-ico-5-src.fif'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
# we'll load all four conditions that make up the 'two ways' of our ANOVA
event_id = dict(l_aud=1, r_aud=2, l_vis=3, r_vis=4)
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
epochs.equalize_event_counts(event_id)
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE, sLORETA, or eLORETA)
inverse_operator = read_inverse_operator(fname_inv)
# we'll only use one hemisphere to speed up this example
# instead of a second vertex array we'll pass an empty array
sample_vertices = [inverse_operator['src'][0]['vertno'], np.array([], int)]
# Let's average and compute inverse, then resample to speed things up
conditions = []
for cond in ['l_aud', 'r_aud', 'l_vis', 'r_vis']: # order is important
evoked = epochs[cond].average()
evoked.resample(30).crop(0., None)
condition = apply_inverse(evoked, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition.crop(0, None)
conditions.append(condition)
tmin = conditions[0].tmin
tstep = conditions[0].tstep * 1000 # convert to milliseconds
Explanation: Transform to source space
End of explanation
n_vertices_sample, n_times = conditions[0].lh_data.shape
n_subjects = 6
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 4) * 10
for ii, condition in enumerate(conditions):
X[:, :, :, ii] += condition.lh_data[:, :, np.newaxis]
Explanation: Transform to common cortical space
Normally you would read in estimates across several subjects and morph them
to the same cortical space (e.g. fsaverage). For example purposes, we will
simulate this by just having each "subject" have the same response (just
noisy in source space) here.
We'll only consider the left hemisphere in this tutorial.
End of explanation
# Read the source space we are morphing to (just left hemisphere)
src = mne.read_source_spaces(src_fname)
fsave_vertices = [src[0]['vertno'], []]
morph_mat = mne.compute_source_morph(
src=inverse_operator['src'], subject_to='fsaverage',
spacing=fsave_vertices, subjects_dir=subjects_dir, smooth=20).morph_mat
morph_mat = morph_mat[:, :n_vertices_sample] # just left hemi from src
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 4)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 4)
Explanation: It's a good idea to spatially smooth the data, and for visualization
purposes, let's morph these to fsaverage, which is a grade 5 ICO source space
with vertices 0:10242 for each hemisphere. Usually you'd have to morph
each subject's data separately, but here since all estimates are on
'sample' we can use one morph matrix for all the heavy lifting.
End of explanation
X = np.transpose(X, [2, 1, 0, 3])  # now: subjects x time x space x conditions
X = [np.squeeze(x) for x in np.split(X, 4, axis=-1)]
Explanation: Now we need to prepare the group matrix for the ANOVA statistic. To make the
clustering function work correctly with the ANOVA function X needs to be a
list of multi-dimensional arrays (one per condition) of shape: samples
(subjects) × time × space.
First we permute dimensions, then split the array into a list of conditions
and discard the empty dimension resulting from the split using numpy squeeze.
End of explanation
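A quick sanity check (sketch): each entry of X should now have shape subjects x times x vertices.
# Verify the per-condition shapes before handing X to the clustering function.
print([x.shape for x in X])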
factor_levels = [2, 2]
Explanation: Prepare function for arbitrary contrast
As our ANOVA function is a multi-purpose tool we need to apply a few
modifications to integrate it with the clustering function. This
includes reshaping data, setting default arguments and processing
the return values. For this reason we'll write a tiny dummy function.
We will tell the ANOVA how to interpret the data matrix in terms of
factors. This is done via the factor levels argument, which is a list
of the number of factor levels for each factor.
End of explanation
effects = 'A:B'
# Tell the ANOVA not to compute p-values which we don't need for clustering
return_pvals = False
# a few more convenient bindings
n_times = X[0].shape[1]
n_conditions = 4
Explanation: Finally we will pick the interaction effect by passing 'A:B'.
(this notation is borrowed from the R formula language).
As an aside, note that in this particular example, we cannot use the A*B
notation which return both the main and the interaction effect. The reason
is that the clustering function expects stat_fun to return a 1-D array.
To get clusters for both, you must create a loop.
End of explanation
def stat_fun(*args):
# get f-values only.
return f_mway_rm(np.swapaxes(args, 1, 0), factor_levels=factor_levels,
effects=effects, return_pvals=return_pvals)[0]
Explanation: A stat_fun must deal with a variable number of input arguments.
Inside the clustering function each condition will be passed as flattened
array, necessitated by the clustering procedure. The ANOVA however expects an
input array of dimensions: subjects × conditions × observations (optional).
The following function catches the list input and swaps the first and the
second dimension, and finally calls ANOVA.
Note: For further details on this ANOVA function, consider the corresponding time-frequency tutorial (tut-timefreq-twoway-anova).
End of explanation
# as we only have one hemisphere we need only need half the adjacency
print('Computing adjacency.')
adjacency = mne.spatial_src_adjacency(src[:1])
# Now let's actually do the clustering. Please relax, on a small
# notebook and one single thread only this will take a couple of minutes ...
pthresh = 0.005
f_thresh = f_threshold_mway_rm(n_subjects, factor_levels, effects, pthresh)
# To speed things up a bit we will ...
n_permutations = 50 # ... run way fewer permutations (reduces sensitivity)
print('Clustering.')
F_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_test(X, adjacency=adjacency, n_jobs=None,
threshold=f_thresh, stat_fun=stat_fun,
n_permutations=n_permutations,
buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute clustering statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal).
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# The brighter the color, the stronger the interaction between
# stimulus modality and stimulus location
brain = stc_all_cluster_vis.plot(subjects_dir=subjects_dir, views='lat',
time_label='temporal extent (ms)',
clim=dict(kind='value', lims=[0, 1, 40]))
brain.save_image('cluster-lh.png')
brain.show_view('medial')
Explanation: Visualize the clusters
End of explanation
inds_t, inds_v = [(clusters[cluster_ind]) for ii, cluster_ind in
enumerate(good_cluster_inds)][0] # first cluster
times = np.arange(X[0].shape[1]) * tstep * 1e3
plt.figure()
colors = ['y', 'b', 'g', 'purple']
event_ids = ['l_aud', 'r_aud', 'l_vis', 'r_vis']
for ii, (condition, color, eve_id) in enumerate(zip(X, colors, event_ids)):
# extract time course at cluster vertices
condition = condition[:, :, inds_v]
# normally we would normalize values across subjects but
# here we use data from the same subject so we're good to just
# create average time series across subjects and vertices.
mean_tc = condition.mean(axis=2).mean(axis=0)
std_tc = condition.std(axis=2).std(axis=0)
plt.plot(times, mean_tc.T, color=color, label=eve_id)
plt.fill_between(times, mean_tc + std_tc, mean_tc - std_tc, color='gray',
alpha=0.5, label='')
ymin, ymax = mean_tc.min() - 5, mean_tc.max() + 5
plt.xlabel('Time (ms)')
plt.ylabel('Activation (F-values)')
plt.xlim(times[[0, -1]])
plt.ylim(ymin, ymax)
plt.fill_betweenx((ymin, ymax), times[inds_t[0]],
times[inds_t[-1]], color='orange', alpha=0.3)
plt.legend()
plt.title('Interaction between stimulus-modality and location.')
plt.show()
Explanation: Finally, let's investigate interaction effect by reconstructing the time
courses:
End of explanation |
8,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Chapter 3 - Sampling the Imaginary
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Introduction
We are interested in a blood test that correctly detects vampirisim 95% of time
Step3: This result shows that there is only 8.7% chance that suspect is vampire even if the test is positive, because of the low incidence rate (prior probability).
3.1 Sampling from a grid-approximate posterior
Code 3.2
Let's compute the posterior for the globe tossing model, the probability of p conditional on the data
Step4: We now wish to draw 10000 samples from the posterior
Step5: Now let's display the resulting samples
Step6: As the plot shows, there are many samples in the 0.6 region and very few below 0.25.
Here's the density estimate of these samples, which is very similar to the ideal posterior you computed via grid approximation in Chapter 2, section 2.4.3.
Code 3.5
Step7: 3.2 Sampling to summarize
Once we have the posterior distribution the next step is to summarize it. Some of the common tasks are
Step8: This means that about 17% of posterior probability is below 0.5.
Now let's find the frequency of parameter values below 0.5
Step9: This is nearly the same answer as that of grid approximation, in Code 3.6,
Using the same approach, how much posterior probability lies between 0.5 and 0.75?
Code 3.8
Step10: 3.2.2 Intervals of defined mass
Scientific journals will commonly report on an "interval of defined mass" also known as a Confidence Interval. The text will use the term Compatiblity Interval instead, since the interval indicates a range of parameter values compatible with the model and data.
If we want to know what interval of parameter values contains 80% of the probability mass, we can simply examine the samples from the posterior, with an interval starting at p=0 until we have reached the 80th percentile.
Code 3.9
Step11: Similarly, the middle 80% interval lies between the 10th percentile and the 90th percentile
Step12: The text refers to intervals like this, which assign equal probability mass to each tail, as Percentile Intervals. They can be useful to characterize a distribution if it is fairly symmetrical.
By contrast, consider a highly skewed distribution with a maximum value at p=1, like the posterior for observing three waters in three tosses with a uniform prior. We can compute this posterior using grid approximation
Step13: Let's compute a 50% percentile compatibility interval that provides the central 50% probability by assigning 25% of the probability below the interval, and 25% above it
Step14: Given the assymetric shape of the distribution, this interval is misleading because it fails to contain the most probable parameter values at p=1.
The Highest Posterior Density Interval (HPDI) is the narrowest interval containing the specified probability mass, which always contains the most probable parameter value.
We can use arviz to compute it
Step15: 3.2.3 Point Estimates
How can we use the posterior distribution to create a point estimate to summarize the distribution for getting three waters from three tosses? Let's compare three different types of point estimates.
Code 3.14
The maximum a posteriori (MAP) estimate is the parameter value with the high posterior probability.
Step16: With our samples from the posterior, we can approximate the MAP
Step17: We can also compute the posterior mean and median.
Code 3.16
Step18: We can use a loss function to compute the cost of using any specific point estimate. Suppose our loss function is proportional to the difference between our decision (e.g., our point estimate) and the true value of the parameter.
If we chose p=0.5 as our decision for the parameter value, we can use the posterior distribution to compute the expected loss, by computing the weighted average loss
Step19: We can repeat this loss computation for every possible decision
Step20: Code 3.19
Now, we can find the parameter value that minimizes the loss.
Step21: It turns out that the value that minimizes our loss will be the same as that for the posterior median, which splits the posterior density so that half the mass is above it, with half below it.
3.3 Sampling to simulate prediction
We can use our Bayesian models to produce simulated observations, since all Bayesian models are generative.
3.3.1 Dummy data
We will call such simulated data dummy data, to indicate that it is a stand-in for actual data.
Recall from the globe tossing model that the probability of observing W counts of water in N tosses with proportion of water p is given by the binomial likelihood
Step22: This means that there’s a 9% chance of observing w = 0, a 42% chance of w = 1, and a 49% chance of w = 2.
We can simulate a dummy observation of W from our model by sampling from the binomial distribution
Step23: The result is the number of water observations in 2 tosses of the globe.
A set of 10 simulations can be made by
Step24: When we generate 100,000 dummy observations, note that each value of w appears in proportion to its likelihood
Code 3.23
Step25: Only two tosses of the globe isn’t much of a sample, though. So now let’s simulate the same sample size as before, 9 tosses.
Code 3.24
Step26: 3.3.2. Model Checking
We’d like to propagate the parameter uncertainty—carry it forward—as we evaluate the implied predictions. All that is required is averaging over the posterior density for p, while computing the predictions. For each possible value of the parameter p, there is an implied distribution of outcomes. So if you were to compute the sampling distribution of outcomes at each value of p, then you could average all of these prediction distributions together, using the posterior probabilities of each value of p, to get a POSTERIOR PREDICTIVE DISTRIBUTION.
To simulate predicted observations for nine globe tosses, for a single value of p=0.6, we can use Binomial to generate random binomial samples
Step27: All you need to propagate parameter uncertainty into these predictions is replace the value 0.6 with samples from the posterior.... Since the sampled values appear in proportion to their posterior probabilities, the resulting simulated observations are averaged over the posterior.
Code 3.26 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
#@title Install { display-mode: "form" }
TF_Installation = 'System' #@param ['TF Nightly', 'TF Stable', 'System']
if TF_Installation == 'TF Nightly':
!pip install -q --upgrade tf-nightly
print('Installation of `tf-nightly` complete.')
elif TF_Installation == 'TF Stable':
!pip install -q --upgrade tensorflow
print('Installation of `tensorflow` complete.')
elif TF_Installation == 'System':
pass
else:
raise ValueError('Selection Error: Please select a valid '
'installation option.')
#@title Install { display-mode: "form" }
TFP_Installation = "System" #@param ["Nightly", "Stable", "System"]
if TFP_Installation == "Nightly":
!pip install -q tfp-nightly
print("Installation of `tfp-nightly` complete.")
elif TFP_Installation == "Stable":
!pip install -q --upgrade tensorflow-probability
print("Installation of `tensorflow-probability` complete.")
elif TFP_Installation == "System":
pass
else:
raise ValueError("Selection Error: Please select a valid "
"installation option.")
#@title Install { display-mode: "form" }
# Install packages that are not installed in colab
try:
import google.colab
IN_COLAB = True
except:
IN_COLAB = False
if IN_COLAB:
import sys
if sys.executable: # make sure pip is available
print("Installing arviz ...")
!{sys.executable} -m pip install -q arviz
# Core
import numpy as np
import arviz as az
import pandas as pd
import tensorflow as tf
import tensorflow_probability as tfp
import scipy.stats as stats
# visualization
import matplotlib.pyplot as plt
# aliases
tfd = tfp.distributions
az.style.use('seaborn-colorblind')
Explanation: Chapter 3 - Sampling the Imaginary
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/statistical_rethinking/notebooks/03_sampling_the_imaginary"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/statistical_rethinking/notebooks/03_sampling_the_imaginary.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/statistical_rethinking/notebooks/03_sampling_the_imaginary.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/examples/statistical_rethinking/notebooks/03_sampling_the_imaginary.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Imports and utility functions
End of explanation
Pr_Positive_Vampire = 0.95
Pr_Positive_Mortal = 0.01
Pr_Vampire = 0.001
tmp = Pr_Positive_Vampire * Pr_Vampire
Pr_Positive = tmp + Pr_Positive_Mortal * (1 - Pr_Vampire)
Pr_Vampire_Positive = tmp / Pr_Positive
Pr_Vampire_Positive
Explanation: Introduction
We are interested in a blood test that correctly detects vampirisim 95% of time:
$$\operatorname{Pr}(\text{Positive}|\text{Vampire}) = 0.95$$
The test has a false positive rate of:
$$\operatorname{Pr}(\text{Positive}|\text{Mortal}) = 0.01$$
We also know that vampires are rare--about 0.1% of population:
$$\operatorname{Pr}(\text{Vampire}) = 0.001$$
To compute $\operatorname{Pr}(\text{Vampire}|\text{Positive})$ we will apply Bayes' rule:
$$
\operatorname{Pr}(\text{Vampire}|\text{Positive}) = \frac{\operatorname{Pr}(\text{Positive}|\text{Vampire}) \operatorname{Pr}(\text{Vampire})}{\operatorname{Pr}(\text{Positive})}
$$
Code 3.1
End of explanation
p_grid = tf.linspace(start=0., stop=1., num=1000)
prob_p = tf.ones(1000)
prob_data = tfd.Binomial(total_count=9, probs=p_grid).prob(6)
joint_prob = prob_data * prob_p
posterior = joint_prob / tf.reduce_sum(joint_prob)
Explanation: This result shows that there is only 8.7% chance that suspect is vampire even if the test is positive, because of the low incidence rate (prior probability).
3.1 Sampling from a grid-approximate posterior
Code 3.2
Let's compute the posterior for the globe tossing model, the probability of p conditional on the data:
End of explanation
samples = tfd.Categorical(probs=posterior).sample(10_000)
Explanation: We now wish to draw 10,000 samples from the posterior:
Code 3.3
End of explanation
sample_rows = p_grid.numpy()[samples]
_, ax = plt.subplots()
ax.scatter(range(len(sample_rows)), sample_rows, alpha=0.2)
ax.set(ylabel='proportion water (p)',
xlabel='sample number',
title='Figure 3.1 (left panel)',
ylim=(0, 1));
Explanation: Now let's display the resulting samples:
Code 3.4
End of explanation
ax, = az.plot_density(sample_rows, credible_interval=1.)
ax.set(title="Figure 3.1 (right panel)",
xlabel="proportion water (p)",
ylabel="Density",
xlim=(0, 1));
Explanation: As the plot shows, there are many samples in the 0.6 region and very few below 0.25.
Here's the density estimate of these samples, which is very similar to the ideal posterior you computed via grid approximation in Chapter 2, section 2.4.3.
Code 3.5
End of explanation
tf.reduce_sum(posterior[p_grid < 0.5])
Explanation: 3.2 Sampling to summarize
Once we have the posterior distribution the next step is to summarize it. Some of the common tasks are:
How much posterior probability lies below some parameter value?
How much posterior probability lies between two parameter values?
Which parameter value marks the lower 5% of the posterior probability?
Which range of parameter values contains 90% of the posterior probability?
Which parameter value has highest posterior probability?
These simple questions can be usefully divided into questions about (1) intervals of defined boundaries, (2) questions about intervals of defined probability mass, and (3) questions about point estimates. We’ll see how to approach these questions using samples from the posterior.
3.2.1 Intervals of defined boundaries
What is the posterior probability that proportion of water is less than 0.5? We can simply sum all of the probabilities where the parameter value is less than 0.5.
Let's add up the posterior probability where p < 0.5:
Code 3.6
End of explanation
tf.where(sample_rows < 0.5).shape[0] / 10_000
Explanation: This means that about 17% of posterior probability is below 0.5.
Now let's find the frequency of parameter values below 0.5:
Code 3.7
End of explanation
condition = (sample_rows > 0.5) & (sample_rows < 0.75)
tf.reduce_sum(tf.cast(condition, float))/ 10_000
Explanation: This is nearly the same answer as the grid-approximation result in Code 3.6.
Using the same approach, how much posterior probability lies between 0.5 and 0.75?
Code 3.8
End of explanation
tfp.stats.percentile(sample_rows, q=80.)
Explanation: 3.2.2 Intervals of defined mass
Scientific journals will commonly report on an "interval of defined mass" also known as a Confidence Interval. The text will use the term Compatiblity Interval instead, since the interval indicates a range of parameter values compatible with the model and data.
If we want to know what interval of parameter values contains 80% of the probability mass, we can simply examine the samples from the posterior, with an interval starting at p=0 until we have reached the 80th percentile.
Code 3.9
End of explanation
tfp.stats.percentile(sample_rows, q=[10.,90.])
Explanation: Similarly, the middle 80% interval lies between the 10th percentile and the 90th percentile:
Code 3.10
End of explanation
p_grid = tf.linspace(start=0., stop=1., num=1000)
prior = tf.ones(1000)
likelihood = tfd.Binomial(total_count=3, probs=p_grid).prob(3)
joint_prob = likelihood * prior
posterior = joint_prob / tf.reduce_sum(joint_prob)
samples = tfd.Categorical(probs=posterior).sample(10_000)
sample_rows = p_grid.numpy()[samples]
Explanation: The text refers to intervals like this, which assign equal probability mass to each tail, as Percentile Intervals. They can be useful to characterize a distribution if it is fairly symmetrical.
By contrast, consider a highly skewed distribution with a maximum value at p=1, like the posterior for observing three waters in three tosses with a uniform prior. We can compute this posterior using grid approximation:
Code 3.11
End of explanation
tfp.stats.percentile(sample_rows, q=[25.,75.])
Explanation: Let's compute a 50% percentile compatibility interval that provides the central 50% probability by assigning 25% of the probability below the interval, and 25% above it:
Code 3.12
End of explanation
az.hpd(sample_rows, credible_interval=0.5)
Explanation: Given the asymmetric shape of the distribution, this interval is misleading because it fails to contain the most probable parameter values at p=1.
The Highest Posterior Density Interval (HPDI) is the narrowest interval containing the specified probability mass, which always contains the most probable parameter value.
We can use arviz to compute it:
Code 3.13
End of explanation
p_grid[posterior == max(posterior)]
Explanation: 3.2.3 Point Estimates
How can we use the posterior distribution to create a point estimate to summarize the distribution for getting three waters from three tosses? Let's compare three different types of point estimates.
Code 3.14
The maximum a posteriori (MAP) estimate is the parameter value with the high posterior probability.
End of explanation
tf.reduce_max(sample_rows)
Explanation: With our samples from the posterior, we can approximate the MAP:
Code 3.15
End of explanation
median = tfp.stats.percentile(sample_rows, q=50.)
mean = tf.math.reduce_mean(sample_rows)
print(f"mean={mean:.6f}, median={median:.6f}")
Explanation: We can also compute the posterior mean and median.
Code 3.16
End of explanation
tf.reduce_sum(posterior * abs(0.5 - p_grid))
Explanation: We can use a loss function to compute the cost of using any specific point estimate. Suppose our loss function is proportional to the difference between our decision (e.g., our point estimate) and the true value of the parameter.
If we chose p=0.5 as our decision for the parameter value, we can use the posterior distribution to compute the expected loss, by computing the weighted average loss: We weight each loss by its corresponding posterior probability:
Code 3.17
End of explanation
loss = tf.map_fn(lambda d: tf.reduce_sum(posterior * np.abs(d - p_grid)), p_grid)
Explanation: We can repeat this loss computation for every possible decision:
Code 3.18
End of explanation
p_grid[tf.math.argmin(loss)]
Explanation: Code 3.19
Now, we can find the parameter value that minimizes the loss.
End of explanation
tfd.Binomial(total_count=2, probs=0.7).prob(np.arange(3))
Explanation: It turns out that the value that minimizes our loss will be the same as that for the posterior median, which splits the posterior density so that half the mass is above it, with half below it.
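A quick sanity check of this claim, reusing p_grid, loss and sample_rows from the cells above, is to print the loss-minimizing value next to the sampled median:
print(p_grid[tf.math.argmin(loss)], tfp.stats.percentile(sample_rows, q=50.))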
3.3 Sampling to simulate prediction
We can use our Bayesian models to produce simulated observations, since all Bayesian models are generative.
3.3.1 Dummy data
We will call such simulated data dummy data, to indicate that it is a stand-in for actual data.
Recall from the globe tossing model that the probability of observing W counts of water in N tosses with proportion of water p is given by the binomial likelihood:
$$
\operatorname{Pr}(W|N,p) = \frac{N!}{W!(N-W)!}p^W(1-p)^{N-W}
$$
For two tosses of the globe and proportion of water at 0.7 we can compute the probability of observing 0, 1, and 2 counts of water:
Code 3.20
End of explanation
tfd.Binomial(total_count=2, probs=0.7).sample()
Explanation: This means that there’s a 9% chance of observing w = 0, a 42% chance of w = 1, and a 49% chance of w = 2.
We can simulate a dummy observation of W from our model by sampling from the binomial distribution:
Code 3.21
End of explanation
tfd.Binomial(total_count=2, probs=0.7).sample(10)
Explanation: The result is the number of water observations in 2 tosses of the globe.
A set of 10 simulations can be made by:
Code 3.22
End of explanation
dummy_w = tfd.Binomial(total_count=2, probs=0.7).sample(100_000)
tf.unique_with_counts(tf.sort(dummy_w)).count / int(1e5)
Explanation: When we generate 100,000 dummy observations, note that each value of w appears in proportion to its likelihood
Code 3.23
End of explanation
dummy_w = tfd.Binomial(total_count=9, probs=0.7).sample((100000,))
_, ax = plt.subplots()
ax.hist(dummy_w.numpy(), bins=np.arange(11), rwidth=0.2)
ax.set(xlabel="dummy water count",
ylabel="Frequency");
Explanation: Only two tosses of the globe isn’t much of a sample, though. So now let’s simulate the same sample size as before, 9 tosses.
Code 3.24
End of explanation
w = tfd.Binomial(total_count=9, probs=0.6).sample(int(1e4))
Explanation: 3.3.2. Model Checking
We’d like to propagate the parameter uncertainty—carry it forward—as we evaluate the implied predictions. All that is required is averaging over the posterior density for p, while computing the predictions. For each possible value of the parameter p, there is an implied distribution of outcomes. So if you were to compute the sampling distribution of outcomes at each value of p, then you could average all of these prediction distributions together, using the posterior probabilities of each value of p, to get a POSTERIOR PREDICTIVE DISTRIBUTION.
To simulate predicted observations for nine globe tosses, for a single value of p=0.6, we can use Binomial to generate random binomial samples:
Code 3.25
End of explanation
w = tfd.Binomial(total_count=9, probs=sample_rows).sample()
Explanation: All you need to propagate parameter uncertainty into these predictions is replace the value 0.6 with samples from the posterior.... Since the sampled values appear in proportion to their posterior probabilities, the resulting simulated observations are averaged over the posterior.
Code 3.26
End of explanation |
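As a small follow-up sketch (assuming w still holds the samples from the cell above), the posterior predictive counts can be histogrammed just like the dummy data earlier:
_, ax = plt.subplots()
ax.hist(w.numpy(), bins=np.arange(11), rwidth=0.2)
ax.set(xlabel="posterior predictive water count", ylabel="Frequency");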
8,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lecture 9 - Frequency-Domain Interpolation
Exercise review
isccsym
The solution is not trivial. We also need to check that the function works for complex-valued input images.
We will redo this exercise, providing a set of test images so that everyone can verify whether their implementation works. The goal is a very efficient implementation using slicing.
filtroidealdemo
convteo
So whenever a filter is implemented via periodic convolution, we can implement it using the DFT, and vice versa: whenever we have a filter in the frequency domain, we can implement it via periodic convolution.
When is convolution the better choice
Step1: Exercises for the next class
isccsym
isccsym using slicing and the test set in the pickle file ccsym.pkl | Python Code:
# import cv2
Explanation: Lecture 9 - Frequency-Domain Interpolation
Exercise review
isccsym
The solution is not trivial. We also need to check that the function works for complex-valued input images.
We will redo this exercise, providing a set of test images so that everyone can verify whether their implementation works. The goal is a very efficient implementation using slicing.
filtroidealdemo
convteo
So whenever a filter is implemented via periodic convolution, we can implement it using the DFT, and vice versa: whenever we have a filter in the frequency domain, we can implement it via periodic convolution.
When is convolution the better choice:
for masks with 4 to 10 elements, it is faster to run the filter as a direct convolution
When is the DFT the better choice:
it is faster for spatial masks with more than 20 elements.
it is useful for designing several filters: ideal, Butterworth, and tuned (band-pass) filters
it is very useful for understanding what a convolution filter is actually doing
Scaling (expansion) property
Review of the derivation from the previous class
Scaling property
Frequency-domain interpolation
Magnify
Resizing an image with interpolation is a computationally expensive operation that is hard to implement efficiently.
Some libraries that implement this kind of resizing:
scipy.misc.imresize (uses PIL)
scipy.ndimage.zoom
skimage.transform.resize
opencv, cv2.resize
End of explanation
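The following NumPy sketch (not part of the original lecture material) illustrates the convolution theorem referred to above: filtering through the DFT reproduces the periodic convolution result exactly.
import numpy as np

def circular_conv2d(f, h):
    # direct periodic (circular) 2-D convolution; O(N^4), fine for a tiny example
    M, N = f.shape
    g = np.zeros_like(f)
    for m in range(M):
        for n in range(N):
            for i in range(M):
                for j in range(N):
                    g[m, n] += f[i, j] * h[(m - i) % M, (n - j) % N]
    return g

f = np.random.rand(8, 8)                # small test "image"
h = np.zeros((8, 8))
h[:3, :3] = 1.0 / 9.0                   # 3x3 mean filter, zero-padded to the image size

g_direct = circular_conv2d(f, h)
g_dft = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(h)))
print(np.allclose(g_direct, g_dft))     # True: both paths give the same result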
import pickle
try:
with open('/home/lotufo/ccsym.pkl','rb') as fhandle:
flist = pickle.load(fhandle)
except:
print('arquivo não encontrado')
print(len(flist[0]),len(flist[1]))
cclist_ok = flist[0]
cclist_false = flist[1]
for cok in cclist_ok:
print(type(cok))
Explanation: Exercises for the next class
isccsym
isccsym using slicing and the test set in the pickle file ccsym.pkl
End of explanation |
8,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Network Science
Analyzing the reply network of the Tianya forum
Wang Chengjun
[email protected]
Computational Communication Network http
Step1: Extract @
Step2: @贾也2012-10-297 | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
dtt = []
with open('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_network.txt', 'r') as f:
for line in f:
pnum, link, time, author_id, author, content = line.replace('\n', '').split('\t')
dtt.append([pnum, link, time, author_id, author, content])
len(dtt)
import pandas as pd
dt = pd.DataFrame(dtt)
dt=dt.rename(columns = {0:'page_num', 1:'link', 2:'time', 3:'author',4:'author_name', 5:'reply'})
dt[:5]
# extract date from datetime
date = map(lambda x: x[:10], dt.time)
dt['date'] = pd.to_datetime(date)
dt[:5]
import pandas as pd
df = pd.read_csv('/Users/chengjun/github/cjc2016/data/tianya_bbs_threads_list.txt', sep = "\t", header=None)
df=df.rename(columns = {0:'title', 1:'link', 2:'author',3:'author_page', 4:'click', 5:'reply', 6:'time'})
df[:2]
from collections import defaultdict
link_user_dict = defaultdict(list)
for i in range(len(dt)):
link_user_dict[dt.link[i]].append(dt.author[i])
df['user'] = [len(link_user_dict[l]) for l in df.link]
df[:2]
import statsmodels.api as sm
import numpy as np
x = np.log(df.user+1)
y = np.log(df.reply+1)
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant,beta = res.params
r2 = res.rsquared
fig = plt.figure(figsize=(8, 4),facecolor='white')
plt.plot(df.user, df.reply, 'rs', label= 'Data')
plt.plot(np.exp(x), np.exp(constant + x*beta),"-", label = 'Fit')
plt.yscale('log');plt.xscale('log')
plt.xlabel(r'$Users$', fontsize = 20)
plt.ylabel(r'$Replies$', fontsize = 20)
plt.text(max(df.user)/300,max(df.reply)/20,
r'$\beta$ = ' + str(round(beta,2)) +'\n' + r'$R^2$ = ' + str(round(r2, 2)))
plt.legend(loc=2,fontsize=10, numpoints=1)
plt.axis('tight')
plt.show()
x = np.log(df.user+1)
y = np.log(df.click+1)
xx = sm.add_constant(x, prepend=True)
res = sm.OLS(y,xx).fit()
constant,beta = res.params
r2 = res.rsquared
fig = plt.figure(figsize=(8, 4),facecolor='white')
plt.plot(df.user, df.click, 'rs', label= 'Data')
plt.plot(np.exp(x), np.exp(constant + x*beta),"-", label = 'Fit')
plt.yscale('log');plt.xscale('log')
plt.xlabel(r'$Users$', fontsize = 20)
plt.ylabel(r'$Replies$', fontsize = 20)
plt.text(max(df.user)/300,max(df.click)/20,
r'$\beta$ = ' + str(round(beta,2)) +'\n' + r'$R^2$ = ' + str(round(r2, 2)))
plt.legend(loc=2,fontsize=10, numpoints=1)
plt.axis('tight')
plt.show()
# convert str to datetime format
dt.time = pd.to_datetime(dt.time)
dt['month'] = dt.time.dt.month
dt['year'] = dt.time.dt.year
dt['day'] = dt.time.dt.day
type(dt.time[0])
d = dt.year.value_counts()
dd = pd.DataFrame(d)
dd = dd.sort_index(axis=0, ascending=True)
ds = dd.cumsum()
def getDate(dat):
dat_date_str = map(lambda x: str(x) +'-01-01', dat.index)
dat_date = pd.to_datetime(dat_date_str)
return dat_date
ds.date = getDate(ds)
dd.date = getDate(dd)
fig = plt.figure(figsize=(12,5))
plt.plot(ds.date, ds.year, 'g-s', label = '$Cumulative\: Number\:of\: Threads$')
plt.plot(dd.date, dd.year, 'r-o', label = '$Yearly\:Number\:of\:Threads$')
#plt.yscale('log')
plt.legend(loc=2,numpoints=1,fontsize=13)
plt.show()
Explanation: Introduction to Network Science
Analyzing the reply network of the Tianya forum
Wang Chengjun
[email protected]
Computational Communication Network http://computational-communication.com
End of explanation
dt.reply[:55]
Explanation: Extract @
End of explanation
import re
tweet = u"//@lilei: dd //@Bob: cc//@Girl: dd//@魏武: \
利益所致 自然念念不忘//@诺什: 吸引优质 客户,摆脱屌丝男!!!//@MarkGreene: 转发微博"
RTpattern = r'''//?@(\w+)'''
for word in re.findall(RTpattern, tweet, re.UNICODE):
print word
RTpattern = r'''@(\w+)\s'''
tweet = u"@lilei: dd @Bob: cc @Girl: dd @魏武: \
利益所致 自然念念不忘 //@诺什: 吸引优质 客户,摆脱屌丝男!!!"
for word in re.findall(RTpattern, tweet, re.UNICODE):
print word # dt.reply[11].decode('utf8'), re.UNICODE)
if re.findall(RTpattern, dt.reply[0].decode('utf8'), re.UNICODE):
print True
else:
print False
for k, tweet in enumerate(dt.reply[:100]):
tweet = tweet.decode('utf8')
RTpattern = r'''@(\w+)\s'''
for person in re.findall(RTpattern, tweet, re.UNICODE):
print k,'\t',dt.author_name[k],'\t', person,'\t\t', tweet[:30]
print dt.reply[80]
link_author_dict = {}
for i in range(len(df)):
link_author_dict[df.link[i]] =df.author[i]
graph = []
for k, tweet in enumerate(dt.reply):
tweet = tweet.decode('utf8')
url = dt.link[k]
RTpattern = r'''@(\w+)\s'''
persons = re.findall(RTpattern, tweet, re.UNICODE)
if persons:
for person in persons:
graph.append([dt.author_name[k].decode('utf8'), person])
else:
graph.append( [dt.author_name[k].decode('utf8'), link_author_dict[url].decode('utf8')] )
len(graph)
for x, y in graph[:3]:
print x, y
import networkx as nx
G = nx.DiGraph()
for x,y in graph:
if x != y:
G.add_edge(x,y)
nx.info(G)
GU=G.to_undirected(reciprocal=True)
graphs = list(nx.connected_component_subgraphs(GU))
import numpy as np
size = []
for i in graphs:
size.append(len(i.nodes()))
len(size), np.max(size)
gs = []
for i in graphs:
if len(i.nodes()) >5:
gs.append(i)
len(gs)
for g in gs:
print len(g.nodes())
g_max = gs[0]
len(g_max.nodes())
pos = nx.spring_layout(g_max)
# define a node layout (spring layout here; other layout algorithms give noticeably different pictures)
nx.draw(g_max,pos,with_labels=False,node_size = 30)
# draw the graph: with_labels controls whether node labels are shown, node_size sets the node size
plt.show() # display the figure
with open('/Users/chengjun/github/cjc2016/data/tianya_network_120.csv', 'a') as f:
for x, y in g_max.edges():
f.write(x.encode('utf8') + ',' + y.encode('utf8') + '\n')
Explanation: @贾也2012-10-297:59:00 导语:人人宁波,面朝大海,春暖花开 ........
@兰质薰心2012-10-2908:55:52 楼主好文! 相信政府一定有能力解决好这些...
回复第20楼,@rual_f “我相信官场中,许多官员应该葆有社会正能量” 通篇好文,顶...
End of explanation |
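A small follow-up sketch (assuming g_max from the code above) that summarizes the degree distribution of the largest connected component:
degrees = [g_max.degree(n) for n in g_max.nodes()]
print np.mean(degrees), np.max(degrees)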
8,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro
Data comes in many different forms
Step1: With the model loaded, you can process text like this
Step2: There's a lot you can do with the doc object you just created.
Tokenizing
This returns a document object that contains tokens. A token is a unit of text in the document, such as individual words and punctuation. SpaCy splits contractions like "don't" into two tokens, "do" and "n't". You can see the tokens by iterating through the document.
Step3: Iterating through a document gives you token objects. Each of these tokens comes with additional information. In most cases, the important ones are token.lemma_ and token.is_stop.
Text preprocessing
There are a few types of preprocessing to improve how we model with words. The first is "lemmatizing."
The "lemma" of a word is its base form. For example, "walk" is the lemma of the word "walking". So, when you lemmatize the word walking, you would convert it to walk.
It's also common to remove stopwords. Stopwords are words that occur frequently in the language and don't contain much information. English stopwords include "the", "is", "and", "but", "not".
With a spaCy token, token.lemma_ returns the lemma, while token.is_stop returns a boolean True if the token is a stopword (and False otherwise).
Step4: Why are lemmas and identifying stopwords important? Language data has a lot of noise mixed in with informative content. In the sentence above, the important words are tea, healthy and calming. Removing stop words might help the predictive model focus on relevant words. Lemmatizing similarly helps by combining multiple forms of the same word into one base form ("calming", "calms", "calmed" would all change to "calm").
However, lemmatizing and dropping stopwords might result in your models performing worse. So you should treat this preprocessing as part of your hyperparameter optimization process.
Pattern Matching
Another common NLP task is matching tokens or phrases within chunks of text or whole documents. You can do pattern matching with regular expressions, but spaCy's matching capabilities tend to be easier to use.
To match individual tokens, you create a Matcher. When you want to match a list of terms, it's easier and more efficient to use PhraseMatcher. For example, if you want to find where different smartphone models show up in some text, you can create patterns for the model names of interest. First you create the PhraseMatcher itself.
Step5: The matcher is created using the vocabulary of your model. Here we're using the small English model you loaded earlier. Setting attr='LOWER' will match the phrases on lowercased text. This provides case insensitive matching.
Next you create a list of terms to match in the text. The phrase matcher needs the patterns as document objects. The easiest way to get these is with a list comprehension using the nlp model.
Step6: Then you create a document from the text to search and use the phrase matcher to find where the terms occur in the text.
Step7: The matches here are a tuple of the match id and the positions of the start and end of the phrase. | Python Code:
import spacy
nlp = spacy.load('en_core_web_sm')
Explanation: Intro
Data comes in many different forms: time stamps, sensor readings, images, categorical labels, and so much more. But text is still some of the most valuable data out there for those who know how to use it.
In this course about Natural Language Processing (NLP), you will use the leading NLP library (spaCy) to take on some of the most important tasks in working with text.
By the end, you will be able to use spaCy for:
Basic text processing and pattern matching
Building machine learning models with text
Representing text with word embeddings that numerically capture the meaning of words and documents
To get the most out of this course, you'll need some experience with machine learning. If you don't have experience with scikit-learn, check out Intro to Machine Learning and Intermediate Machine Learning to learn the fundamentals.
NLP with spaCy
spaCy is the leading library for NLP, and it has quickly become one of the most popular Python frameworks. Most people find it intuitive, and it has excellent documentation.
spaCy relies on models that are language-specific and come in different sizes. You can load a spaCy model with spacy.load.
For example, here's how you would load the English language model.
End of explanation
doc = nlp("Tea is healthy and calming, don't you think?")
Explanation: With the model loaded, you can process text like this:
End of explanation
for token in doc:
print(token)
Explanation: There's a lot you can do with the doc object you just created.
Tokenizing
This returns a document object that contains tokens. A token is a unit of text in the document, such as individual words and punctuation. SpaCy splits contractions like "don't" into two tokens, "do" and "n't". You can see the tokens by iterating through the document.
End of explanation
print(f"Token \t\tLemma \t\tStopword".format('Token', 'Lemma', 'Stopword'))
print("-"*40)
for token in doc:
print(f"{str(token)}\t\t{token.lemma_}\t\t{token.is_stop}")
Explanation: Iterating through a document gives you token objects. Each of these tokens comes with additional information. In most cases, the important ones are token.lemma_ and token.is_stop.
Text preprocessing
There are a few types of preprocessing to improve how we model with words. The first is "lemmatizing."
The "lemma" of a word is its base form. For example, "walk" is the lemma of the word "walking". So, when you lemmatize the word walking, you would convert it to walk.
It's also common to remove stopwords. Stopwords are words that occur frequently in the language and don't contain much information. English stopwords include "the", "is", "and", "but", "not".
With a spaCy token, token.lemma_ returns the lemma, while token.is_stop returns a boolean True if the token is a stopword (and False otherwise).
End of explanation
from spacy.matcher import PhraseMatcher
matcher = PhraseMatcher(nlp.vocab, attr='LOWER')
Explanation: Why are lemmas and identifying stopwords important? Language data has a lot of noise mixed in with informative content. In the sentence above, the important words are tea, healthy and calming. Removing stop words might help the predictive model focus on relevant words. Lemmatizing similarly helps by combining multiple forms of the same word into one base form ("calming", "calms", "calmed" would all change to "calm").
However, lemmatizing and dropping stopwords might result in your models performing worse. So you should treat this preprocessing as part of your hyperparameter optimization process.
Pattern Matching
Another common NLP task is matching tokens or phrases within chunks of text or whole documents. You can do pattern matching with regular expressions, but spaCy's matching capabilities tend to be easier to use.
To match individual tokens, you create a Matcher. When you want to match a list of terms, it's easier and more efficient to use PhraseMatcher. For example, if you want to find where different smartphone models show up in some text, you can create patterns for the model names of interest. First you create the PhraseMatcher itself.
End of explanation
terms = ['Galaxy Note', 'iPhone 11', 'iPhone XS', 'Google Pixel']
patterns = [nlp(text) for text in terms]
matcher.add("TerminologyList", patterns)
Explanation: The matcher is created using the vocabulary of your model. Here we're using the small English model you loaded earlier. Setting attr='LOWER' will match the phrases on lowercased text. This provides case insensitive matching.
Next you create a list of terms to match in the text. The phrase matcher needs the patterns as document objects. The easiest way to get these is with a list comprehension using the nlp model.
End of explanation
# Borrowed from https://daringfireball.net/linked/2019/09/21/patel-11-pro
text_doc = nlp("Glowing review overall, and some really interesting side-by-side "
"photography tests pitting the iPhone 11 Pro against the "
"Galaxy Note 10 Plus and last year’s iPhone XS and Google Pixel 3.")
matches = matcher(text_doc)
print(matches)
Explanation: Then you create a document from the text to search and use the phrase matcher to find where the terms occur in the text.
End of explanation
match_id, start, end = matches[0]
print(nlp.vocab.strings[match_id], text_doc[start:end])
Explanation: The matches here are a tuple of the match id and the positions of the start and end of the phrase.
End of explanation |
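A small extension of the example above: iterating over every match instead of only the first one.
for match_id, start, end in matches:
    print(nlp.vocab.strings[match_id], text_doc[start:end])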
8,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1>Natural Language Preprocessing</h1>
<br>
<em><b>Gregory Antell & Emily Halket</b></em>
<br>
<em><b>December, 2016</b></em>
This notebook provides a brief overview of common steps taken natural language preprocessing. The goal is to get you started thinking about how to process your data, not to provide a formal pipeline. (add another few background sentences here)
<p>Preprocessing follows a general series of steps, each requiring decisions that can substantially impact the final output if not considered carefully. For this tutorial, we will be emphasizing how different sources of text require different approaches for preprocessing and modeling. As you approach your own data, think about the implications of each decision on the outcome of your analysis.</p>
<h2>Requirements</h2>
<p>This tutorial requires several commonly used Python packages for data analysis and Natural Language Processing (NLP)
Step1: <h2>Data</h2>
<p>Here we will be exploring two different data sets
Step2: Let's take a peek at the raw text of this article to see what we are dealing with!
Right off the bat you can see that we have a mixture of uppercase and lowercase words, punctuation, and some character encoding.
Step3: <h2>Preprocessing Text</h2>
<p> After looking at our raw text, we know that there are a number of textual attributes that we will need to address before we can ultimately represent our text as quantified features. Using some built in string functions, we can address the character encoding and mixed capitalization.
Step4: <h3>1. Tokenization</h3>
<p>In order to process text, it must be deconstructed into its constituent elements through a process termed <b><em>tokenization</em></b>. Often, the <b><em>tokens</em></b> yielded from this process are individual words in a document. Tokens represent the linguistic units of a document.</p>
<p>A simplistic way to tokenize text relies on white space, such as in <code>nltk.tokenize.WhitespaceTokenizer</code>. Relying on white space, however, does not take <b>punctuation</b> into account, and depending on this some tokens will include punctuation and will require further preprocessing (e.g. 'account,'). Depending on your data, the punctuation may provide meaningful information, so you will want to think about whether it should be preserved or if it can be removed. Tokenization is particularly challenging in the biomedical field, where many phrases contain substantial punctuation (parentheses, hyphens, etc.) and negation detection is critical.</p>
<p>NLTK contains many built-in modules for tokenization, such as <code>nltk.tokenize.WhitespaceTokenizer</code> and <code>nltk.tokenize.RegexpTokenizer</code>.
<p>See also
Step5: Example
Step6: <h3>2. Stop Words</h3>
<p>Depending on the application, many words provide little value when building an NLP model. Accordingly, these are termed <b><em>stop words</em></b>. Examples of stop words include pronouns, articles, prepositions and conjunctions, but there are many other words, or non meaningful tokens, that you may wish to remove. For instance, there may be artifacts from the web scraping process that you need to remove. </p>
<p>Stop words can be determined and handled in many different ways, including
Step7: Let's remove the stop words and compare to our original list of tokens from our regular expression tokenizer.
Step8: You can see that by removing stop words, we now have less than half the number of tokens as our original list. Taking a peek at the cleaned tokens, we can see that a lot of the information that makes the sentence read like something a human would expect has been lost but the key nouns, verbs, adjectives, and adverbs remain.
Step9: You may notice from looking at this sample, however, that a potentially meaningful word has been removed
Step10: While <b><em>stemming</em></b> is a heuristic process that selectively removes the end of words, <b><em>lemmatization</em></b> is a more sophisticated process that takes into account variables such as part-of-speech, meaning, and context within a document or neighboring sentences.</p>
Step11: <p>In this example, lemmatization retains a bit more information than stemming. Within stemming, the Lancaster method is more aggressive than Porter and Snowball. Remember that this step allows us to reduce words to a common base form so that we can reduce our feature space and perform counting of occurrences. It will depend on your data and your application as to how much information you need to retain. </p>
<p>See also
Step12: Let's take a look at a sample of our stemmed tokens
Step13: In contrast, here are the same tokens in their lemmatized form
Step14: <h3>4. Vectorization </h3>
<p> Often in natural language processing we want to represent our text as a quantitative set of features for subsequent analysis. One way to generate features from text is to count the occurrences words. This apporoach is often referred to as a bag of words approach.</p>
<p>In the example of our article, we could represent the article as a vector of counts for each token. If we did the same for all of the other articles, we would have a set of vectors with each vector representing an article. If we had only one article, then we could have split the article into sentences and then represented each sentence as a vector. </p>
<p>If we apply a count vectorizer to our article, we will have a vector with the length of the number of unique tokens. </p>
Example | Python Code:
# import requirements
import pandas as pd
import nltk
import gensim
import spacy
Explanation: <h1>Natural Language Preprocessing</h1>
<br>
<em><b>Gregory Antell & Emily Halket</b></em>
<br>
<em><b>December, 2016</b></em>
This notebook provides a brief overview of common steps taken in natural language preprocessing. The goal is to get you started thinking about how to process your data, not to provide a formal pipeline.
<p>Preprocessing follows a general series of steps, each requiring decisions that can substantially impact the final output if not considered carefully. For this tutorial, we will be emphasizing how different sources of text require different approaches for preprocessing and modeling. As you approach your own data, think about the implications of each decision on the outcome of your analysis.</p>
<h2>Requirements</h2>
<p>This tutorial requires several commonly used Python packages for data analysis and Natural Language Processing (NLP):</p>
<ul>
<li><b>Pandas: </b>for data structures and analysis in Python
<li><b>NLTK: </b>Natural Language Toolkit
<li><b>gensim: </b>for topic modelling
</ul>
End of explanation
# read a subset of the data from the csv file into a pandas dataframe
df = pd.read_csv('1_100.csv')
# for now, choosing one article to illustrate preprocessing
article = df['full_text'][939]
Explanation: <h2>Data</h2>
<p>Here we will be exploring two different data sets:</p>
<ol>
<li>New York Times op-eds
<li>Stack Overflow questions and comments
</ol>
<p>While the New York Times data set consists of traditional English prose and substantially longer articles, the Stack Overflow data set is vastly different: it mixes shorter, more informal prose with code snippets and technical markup.</p>
<p>In this repository, there is a subset of 100 op-ed articles from the New York Times. We will read these articles into a data frame. We will start off by looking at one article to illustrate the steps of preprocessing, and then we will compare both data sets to illustrate how the process is informed by the nature of the data. </p>
End of explanation
article[:500]
Explanation: Let's take a peek at the raw text of this article to see what we are dealing with!
Right off the bat you can see that we have a mixture of uppercase and lowercase words, punctuation, and some character encoding.
End of explanation
article[:500].decode('utf-8').lower()
Explanation: <h2>Preprocessing Text</h2>
<p> After looking at our raw text, we know that there are a number of textual attributes that we will need to address before we can ultimately represent our text as quantified features. Using some built in string functions, we can address the character encoding and mixed capitalization.
End of explanation
from nltk.tokenize import WhitespaceTokenizer
ws_tokenizer = WhitespaceTokenizer()
# tokenize example document
nyt_ws_tokens = ws_tokenizer.tokenize(article.decode('utf-8').lower())
print nyt_ws_tokens[:75]
Explanation: <h3>1. Tokenization</h3>
<p>In order to process text, it must be deconstructed into its constituent elements through a process termed <b><em>tokenization</em></b>. Often, the <b><em>tokens</em></b> yielded from this process are individual words in a document. Tokens represent the linguistic units of a document.</p>
<p>A simplistic way to tokenize text relies on white space, such as in <code>nltk.tokenize.WhitespaceTokenizer</code>. Relying on white space, however, does not take <b>punctuation</b> into account, and depending on this some tokens will include punctuation and will require further preprocessing (e.g. 'account,'). Depending on your data, the punctuation may provide meaningful information, so you will want to think about whether it should be preserved or if it can be removed. Tokenization is particularly challenging in the biomedical field, where many phrases contain substantial punctuation (parentheses, hyphens, etc.) and negation detection is critical.</p>
<p>NLTK contains many built-in modules for tokenization, such as <code>nltk.tokenize.WhitespaceTokenizer</code> and <code>nltk.tokenize.RegexpTokenizer</code>.
<p>See also:
<br>
<a href=https://www.ibm.com/developerworks/community/blogs/nlp/entry/tokenization?lang=en>The Art of Tokenization</a></p>
<a href=https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4231086/>Negation's Not Solved: Generalizability Versus Optimizability in Clinical Natural Language Processing</a></p>
Example: Whitespace Tokenization
Here we apply the Whitespace Tokenizer on the sample article. Notice that we are again decoding characters (such as quotation marks) and using all lowercase characters. Because we used white space as the marker between tokens, we still have punctuation (e.g. 'life.' and '\u201cif')
End of explanation
from nltk.tokenize import RegexpTokenizer
re_tokenizer = RegexpTokenizer(r'\w+')
nyt_re_tokens = re_tokenizer.tokenize(article.decode('utf-8').lower())
print nyt_re_tokens[:100]
Explanation: Example: Regular Expression Tokenization
By applying the regular expression tokenizer we can return a list of word tokens without punctuation.
End of explanation
from nltk.corpus import stopwords
# print the first 5 standard English stop words
stop_list = [w for w in stopwords.words('english')]
print stop_list[:5]
# print the type of the elements in the stop words list
print type(stop_list[0])
Explanation: <h3>2. Stop Words</h3>
<p>Depending on the application, many words provide little value when building an NLP model. Accordingly, these are termed <b><em>stop words</em></b>. Examples of stop words include pronouns, articles, prepositions and conjunctions, but there are many other words, or non meaningful tokens, that you may wish to remove. For instance, there may be artifacts from the web scraping process that you need to remove. </p>
<p>Stop words can be determined and handled in many different ways, including:
<ul>
<li>Using a list of words determined <em>a priori</em>, either a standard list from the NLTK package or one modified from such a list based on domain knowledge of a particular subject
<br><br>
<li>Sorting the terms by <b><em>collection frequency</em></b> (the total number of times each term appears in the document collection), and then taking the most frequent terms as a stop list based on semantic content.
<br><br>
<li>Using no defined stop list at all, and dealing with text data in a purely statistical manner. In general, search engines do not use stop lists.
</ul>
As you work with your text, you may decide to iterate on this process. See also: <a href=http://nlp.stanford.edu/IR-book/html/htmledition/dropping-common-terms-stop-words-1.html>Stop Words</a>
#### Example: Stopword Corpus
For this example, we will use the english stopword corpus from NLTK.
End of explanation
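One possible extension of the first strategy listed above (a sketch that is not part of the original tutorial; the extra words are purely hypothetical examples): augment the standard NLTK list with domain-specific terms.
custom_stop_words = set(stopwords.words('english'))
custom_stop_words.update(['nytimes', 'mr', 'ms'])  # hypothetical domain-specific additions
print len(custom_stop_words)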
cleaned_tokens = []
stop_words = set(stopwords.words('english'))
for token in nyt_re_tokens:
if token not in stop_words:
cleaned_tokens.append(token)
print 'Number of tokens before removing stop words: %d' % len(nyt_re_tokens)
print 'Number of tokens after removing stop words: %d' % len(cleaned_tokens)
Explanation: Let's remove the stop words and compare to our original list of tokens from our regular expression tokenizer.
End of explanation
print cleaned_tokens[:50]
Explanation: You can see that by removing stop words, we now have less than half the number of tokens as our original list. Taking a peek at the cleaned tokens, we can see that a lot of the information that makes the sentence read like something a human would expect has been lost but the key nouns, verbs, adjectives, and adverbs remain.
End of explanation
from nltk.stem.porter import PorterStemmer
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.lancaster import LancasterStemmer
porter = PorterStemmer()
snowball = SnowballStemmer('english')
lancaster = LancasterStemmer()
print 'Porter Stem of "explanation": %s' % porter.stem('explanation')
print 'Porter2 (Snowball) Stem of "explanation": %s' %snowball.stem('explanation')
print 'Lancaster Stem of "explanation": %s' %lancaster.stem('explanation')
Explanation: You may notice from looking at this sample, however, that a potentially meaningful word has been removed: 'not'. This stopword corpus includes the words 'no', 'nor', and 'not', and so by removing these words we have removed negation.
<h3>3. Stemming and Lemmatization</h3>
<p>The overarching goal of stemming and lemmatization is to reduce differential forms of a word to a common base form. This step will allow you to count occurrences of words in the vectorization step. In deciding how to reduce the differential forms of words, you will want to consider how much information you will need to retain for your application. For instance, in many cases markers of tense and plurality are not informative, and so removing these markers will allow you to reduce the number of features.</p>
<p> <b>Stemming</b> is the process of representing the word as its root word while removing inflection. For example, the stem of the word 'explained' is 'explain'. By passing this word through the stemmer you would remove the tense inflection. There are multiple approaches to stemming: Porter stemming, Porter2 (snowball) stemming, and Lancaster stemming. You can read more in depth about these approaches.</p>
End of explanation
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
print lemmatizer.lemmatize('explanation')
Explanation: While <b><em>stemming</em></b> is a heuristic process that selectively removes the end of words, <b><em>lemmatization</em></b> is a more sophisticated process that takes into account variables such as part-of-speech, meaning, and context within a document or neighboring sentences.</p>
End of explanation
stemmed_tokens = []
lemmatized_tokens = []
for token in cleaned_tokens:
    stemmed_tokens.append(snowball.stem(token))  # 'stemmer' was never defined; using the Snowball stemmer created above
    lemmatized_tokens.append(lemmatizer.lemmatize(token))
Explanation: <p>In this example, lemmatization retains a bit more information than stemming. Within stemming, the Lancaster method is more aggressive than Porter and Snowball. Remember that this step allows us to reduce words to a common base form so that we can reduce our feature space and perform counting of occurrences. It will depend on your data and your application as to how much information you need to retain. </p>
<p>See also: <a href=http://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html>Stemming and lemmatization</a></p>
Example: Stemming and Lemmatization
To illustrate the difference between stemming and lemmatization, we will apply both methods to our articles.
End of explanation
print stemmed_tokens[:50]
Explanation: Let's take a look at a sample of our stemmed tokens
End of explanation
print lemmatized_tokens[:50]
Explanation: In contrast, here are the same tokens in their lemmatized form
End of explanation
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
stemmed_article = ' '.join(wd for wd in stemmed_tokens)
article_vect = vectorizer.fit_transform([stemmed_article])
Explanation: <h3>4. Vectorization </h3>
<p> Often in natural language processing we want to represent our text as a quantitative set of features for subsequent analysis. One way to generate features from text is to count the occurrences of words. This approach is often referred to as a bag of words approach.</p>
<p>In the example of our article, we could represent the article as a vector of counts for each token. If we did the same for all of the other articles, we would have a set of vectors with each vector representing an article. If we had only one article, then we could have split the article into sentences and then represented each sentence as a vector. </p>
<p>If we apply a count vectorizer to our article, we will have a vector with the length of the number of unique tokens. </p>
Example: Count Vectorization of Article
For this example we will use the stemmed tokens from our article. We will need to join the tokens together to represent one article.
Check out the documentation for CountVectorizer in scikit-learn. You will see that there are a number of parameters that you can specify - including the maximum number of features. Depending on your data, you may choose to restrict the number of features by removing words that appear with least frequency.
End of explanation |
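As a quick sanity check (a sketch assuming vectorizer and article_vect from the cell above), we can look at the most frequent stemmed tokens in the article:
feature_names = vectorizer.get_feature_names()
counts = article_vect.toarray().flatten()
for idx in counts.argsort()[::-1][:10]:
    print feature_names[idx], counts[idx]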
8,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Models Exercise 1
Imports
Step1: Fitting a quadratic curve
For this problem we are going to work with the following model
Step2: First, generate a dataset using this model using these parameters and the following characteristics
Step3: Now fit the model to the dataset to recover estimates for the model's parameters | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Fitting Models Exercise 1
Imports
End of explanation
a_true = 0.5
b_true = 2.0
c_true = -4.0
Explanation: Fitting a quadratic curve
For this problem we are going to work with the following model:
$$ y_{model}(x) = a x^2 + b x + c $$
The true values of the model parameters are as follows:
End of explanation
N=30
SD=2.0
x = np.linspace(-5,5,N)
y =a_true*x**2 + b_true*x + c_true +np.random.normal(0,SD,N)
plt.scatter(x,y)
assert True # leave this cell for grading the raw data generation and plot
Explanation: First, generate a dataset using this model using these parameters and the following characteristics:
For your $x$ data use 30 uniformly spaced points between $[-5,5]$.
Add a noise term to the $y$ value at each point that is drawn from a normal distribution with zero mean and standard deviation 2.0. Make sure you add a different random number to each point (see the size argument of np.random.normal).
After you generate the data, make a plot of the raw data (use points).
End of explanation
def ymodel(x,a,b,c):
return a*x**2 + b*x + c
theta_best, theta_cov = opt.curve_fit(ymodel, x, y, sigma=SD)
print('a = {0:.3f} +/- {1:.3f}'.format(theta_best[0], np.sqrt(theta_cov[0,0])))
print('b = {0:.3f} +/- {1:.3f}'.format(theta_best[1], np.sqrt(theta_cov[1,1])))
print('c = {0:.3f} +/- {1:.3f}'.format(theta_best[2], np.sqrt(theta_cov[2,2])))
x=np.linspace(-5,5,30)
yfit = theta_best[0]*x**2 + theta_best[1]*x + theta_best[2]
plt.figure(figsize=(10,6,))
plt.plot(x, yfit)
plt.errorbar(x, y, 2.0,fmt='.k', ecolor='lightgray')
plt.xlabel('x')
plt.ylabel('y')
plt.box(False)
plt.ylim(-10,25)
plt.title('Best fit')
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
Explanation: Now fit the model to the dataset to recover estimates for the model's parameters:
Print out the estimates and uncertainties of each parameter.
Plot the raw data and best fit of the model.
End of explanation |
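As a final sanity check (a small sketch reusing the variables defined above), we can compare the recovered parameters with the true values used to generate the data:
for name, true, est in zip(['a', 'b', 'c'], [a_true, b_true, c_true], theta_best):
    print('{0}: true = {1:.2f}, estimate = {2:.3f}'.format(name, true, est))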
8,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mineral NER using Data Programming
Project
Step1: Labelling functions
Step2: Distant supervision
Get list of known minerals for distant supervision
Step3: Fitting the generative models
Step4: Label development set for evaluation
Step5: Part 5 | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from snorkel import SnorkelSession
import os
import numpy as np
import re
import codecs
os.environ['SNORKELDB'] = 'sqlite:///snorkel-mte.db'
# Open Session
session = SnorkelSession()
# Read input
base_dir = '/Users/thammegr/work/mte/data/newcorpus/MTE-corpus-open/'
def scan_docs(dir):
txt_filter = lambda _: re.match("^[0-9]{4}\.txt$", _)
for root, dirs, files in os.walk(dir):
for f in filter(txt_filter, files):
txt_path = os.path.join(root, f)
ann_path = txt_path.replace('.txt', '.ann')
parts = ann_path.split(os.path.sep)
parts[-2] += "-reviewed-target" # directory name
new_ann_path = os.path.sep.join(parts)
if os.path.exists(new_ann_path):
ann_path = new_ann_path
yield (txt_path, ann_path)
corpus_file = "mte-corpus.list"
with open(corpus_file, 'w') as f:
count = 0
for rec in scan_docs(base_dir):
f.write(",".join(rec))
f.write("\n")
count += 1
print("Wrote %d records to %s" %(count, corpus_file))
# sample 100 docs to setup whole pipeline first
!head -30 mte-corpus.list > mte-corpus-head.list
corpus_file = "mte-corpus-head.list"
!wc -l *.list
from snorkel.parser import CSVPathsPreprocessor
doc_preprocessor = CSVPathsPreprocessor(path=corpus_file, column=0, delim=',')
#doc_preprocessor = CSVPathsPreprocessor("paths-sample.list")
# Corpus parser to get features
from snorkel.parser import CorpusParser
corpus_parser = CorpusParser()
%time corpus_parser.apply(doc_preprocessor)
from snorkel.models import Document, Sentence
print "Documents:", session.query(Document).count()
print "Sentences:", session.query(Sentence).count()
# Schema for Minerals
from snorkel.models import candidate_subclass
Mineral = candidate_subclass('Mineral', ['name'])
from snorkel.candidates import Ngrams, CandidateExtractor
from snorkel.matchers import RegexMatchEach
mineral_matcher = RegexMatchEach(attrib='pos_tags', rgx="NN.*")
ngrams = Ngrams(n_max=3)
cand_extractor = CandidateExtractor(Mineral,
[ngrams], [mineral_matcher],
symmetric_relations=False)
# Counts the number of noun sequences (contiguous NN* runs) in a sentence => could be used for filtering
def number_of_nouns(sentence):
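    # active_sequence toggles at the boundaries of NN* runs, so each contiguous noun run is counted once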
active_sequence = False
count = 0
last_tag = ''
for tag in sentence.pos_tags:
if tag.startswith('NN') and not active_sequence:
active_sequence = True
count += 1
elif not tag.startswith('NN') and active_sequence:
active_sequence = False
return count
from snorkel.models import Document
# load, filter and split the sentences
docs = session.query(Document).order_by(Document.name).all()
ld = len(docs)
train_sents = set()
dev_sents = set()
test_sents = set()
splits = (0.9, 0.95)
for i,doc in enumerate(docs):
for s in doc.sentences:
if number_of_nouns(s) > 0:
if i < splits[0] * ld:
train_sents.add(s)
elif i < splits[1] * ld:
dev_sents.add(s)
else:
test_sents.add(s)
s1 = session.query(Sentence).all()[26]
s1.pos_tags
cand_extractor.apply(train_sents, split=0)
train_cands = session.query(Mineral).filter(Mineral.split == 0).all()
print "Number of candidates:", len(train_cands)
# inspect the candidates using this widget
from snorkel.viewer import SentenceNgramViewer
sv = SentenceNgramViewer(train_cands[:300], session)
sv
# Develop and Tests
## Develop and Test
for i, sents in enumerate([dev_sents, test_sents]):
cand_extractor.apply(sents, split=i+1)
print "Number of candidates:", session.query(Mineral).filter(Mineral.split == i+1).count()
Explanation: Mineral NER using Data Programming
Project: Mars Target Encyclopedia
This notebook does not explain much; the explanations can be found in the original notebook(s): https://github.com/HazyResearch/snorkel/tree/master/tutorials/intro
Setup:
Follow instructions in https://github.com/HazyResearch/snorkel
Start jupyter notebook server using ./run.sh as described in snorkel README
copy this notebook to a place accessible from the jupyter server started in previous step. Perhaps symlink your directory
End of explanation
# Distant supervision
minerals_file = "/Users/thammegr/work/mte/git/ref/minerals.txt"
non_minerals_file = "/Users/thammegr/work/mte/git/ref/non-minerals.txt"
def load_set(path, lower=True):
with codecs.open(path, 'r', 'utf-8') as f:
lines = f.readlines()
lines = map(lambda x: x.strip(), lines)
lines = filter(lambda x: x and not x.startswith('#'), lines)
if lower:
lines = map(lambda x: x.lower(), lines)
return set(lines)
mte_minerals = load_set(minerals_file)
non_minerals = load_set(non_minerals_file)
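# Snorkel labeling-function convention: return 1 for a positive label, -1 for a negative label, and 0 to abstain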
def lf_dict_mte_minerals(c):
return 1 if c.name.get_span().lower() in mte_minerals else 0
def lf_dict_nonminerals(c):
return -1 if c.name.get_span().lower() in non_minerals else 0
# rule based
def lf_rule_ite_minerals(c):
return 1 if c.name.get_span().lower().endswith('ite') else 0
# rule based 2
ends_ite = re.compile("^[a-z]*[aeiou][a-z]*ite$")
def lf_rule_ite2_minerals(c):
# has one vowel before ite
return 1 if ends_ite.match(c.name.get_span().lower()) is not None else 0
Explanation: Labelling functions
End of explanation
import requests
from lxml import etree
# lxml supports XPath 1.0 which doesnt have regex match function, so extending it
ns = etree.FunctionNamespace(None)
def matches(dummy, val, patrn):
if not val:
return False
return re.match(patrn, str(val[0])) is not None
ns['matches'] = matches
all_minerals_page = "https://en.wikipedia.org/wiki/List_of_minerals"
tree = etree.HTML(requests.get(all_minerals_page).text)
minerals = tree.xpath('//h2[matches(span/@id, "^[A-Z]$")]/following-sibling::*//li/a/@title')
minerals = set(map(lambda x: x.lower().strip(), minerals)) # remove duplicates
print("Found %d minerals in %s" %(len(minerals), all_minerals_page))
minerals_kb = "wikipedia-minerals.list"
with codecs.open(minerals_kb, 'w', 'utf-8') as out:
out.write(u"\n".join(minerals))
print("Stored the mineral names at %s" % minerals_kb)
minerals_kb = "wikipedia-minerals.list"
minerals_set = load_set(minerals_kb)
def lf_dict_wikipedia_minerals(c):
return 1 if c.name.get_span().lower() in minerals_set else 0
# returning 0 instead of -1, because the wikipedia page may not be an exhaustive list.
# TODO: check with Kiri to confirm this
# Debugging label functions
from pprint import pprint
labeled = []
for c in session.query(Mineral).filter(Mineral.split == 0).all():
if lf_rule_ite2_minerals(c) != 0: # function
labeled.append(c)
print "Number labeled:", len(labeled)
labeled[0]
# all labeling functions in a list
LFs = [
lf_dict_mte_minerals, lf_dict_nonminerals,
lf_dict_wikipedia_minerals,
#lf_rule_ite_minerals,
lf_rule_ite2_minerals
]
from snorkel.annotations import LabelAnnotator
import numpy as np
labeler = LabelAnnotator(f=LFs)
np.random.seed(1701)
%time L_train = labeler.apply(split=0)
L_train
# Loading it again -- resume from here
L_train = labeler.load_matrix(session, split=0)
L_train
L_train.get_candidate(session, 0)
L_train.get_key(session, 0)
L_train.lf_stats(session, )
Explanation: Distant supervision
Get list of known minerals for distant supervision
End of explanation
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel()
gen_model.train(L_train, epochs=500, decay=0.95, step_size=0.1/L_train.shape[0], reg_param=1e-6)
train_marginals = gen_model.marginals(L_train)
# visualize
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
gen_model.weights.lf_accuracy()
L_dev = labeler.apply_existing(split=1)
L_dev
Explanation: Fitting the generative models
End of explanation
dev_cands = session.query(Mineral).filter(Mineral.split == 1).all()
len(dev_cands)
from snorkel.viewer import SentenceNgramViewer
sv = SentenceNgramViewer(dev_cands, session)
sv
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name=os.environ['USER'], split=1)
L_gold_dev
tp, fp, tn, fn = gen_model.score(session, L_dev, L_gold_dev)
fn
L_dev.lf_stats(session, L_gold_dev, gen_model.weights.lf_accuracy())
# Save labels
from snorkel.annotations import save_marginals
%time save_marginals(session, L_train, train_marginals)
Explanation: Label development set for evaluation
End of explanation
# generate features
from snorkel.annotations import FeatureAnnotator
featurizer = FeatureAnnotator()
%time F_train = featurizer.apply(split=0)
F_train
%%time
F_dev = featurizer.apply_existing(split=1)
F_test = featurizer.apply_existing(split=2)
from snorkel.learning import SparseLogisticRegression
from snorkel.learning.utils import MentionScorer
from snorkel.learning import RandomSearch, ListParameter, RangeParameter
# our discriminative model
disc_model = SparseLogisticRegression()
#Hyper parameters search
rate_param = RangeParameter('lr', 1e-6, 1e-2, step=1, log_base=10)
l1_param = RangeParameter('l1_penalty', 1e-6, 1e-2, step=1, log_base=10)
l2_param = RangeParameter('l2_penalty', 1e-6, 1e-2, step=1, log_base=10)
searcher = RandomSearch(session, disc_model, F_train, train_marginals, [rate_param, l1_param, l2_param], n=20)
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
# fit
np.random.seed(1701)
searcher.fit(F_dev, L_gold_dev, n_epochs=50, rebalance=0.9, print_freq=25)
#from snorkel.annotations import load_gold_labels
#L_gold_test = load_gold_labels(session, annotator_name='gold', split=2)
#_, _, _, _ = disc_model.score(session, F_test, L_gold_test)
tp, fp, tn, fn = disc_model.score(session, F_dev, L_gold_dev)
vars(F_dev[0])
Explanation: Part 5:
Automatic features
End of explanation |
8,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How-To
Step1: If you have any installation issue, please check our forum or the github issue tracker.
Instantiating a Poppy Humanoid
In this section, we will see how a Poppy Humanoid can be created into V-REP and how we can connect it to a pypot Robot
Step2: You should now see a Poppy in your V-REP window
Step3: You can access a specific motor directly using its name
Step4: If we want to get the current position (in degrees) of a specific motor (e.g. head_y) we can use
Step5: You can also use the list/dict comprehension to retrieve a specific value for all motors.
A list of all current motor positions
Step6: A dictionary of pairs {motor_name
Step7: Motor alias or group of motors
In pypot we use the concept of motor alias which is simply a list of motors grouped together under a specific name. For instance, you can directly access all the motors from the torso using the torso alias. Poppy Humanoid also defines a leg alias, a left arm alias...
Note that motors used above is just one of the predefined motor aliases - the one with all attached motors.
You can retrieve the list of motor aliases available using
Step8: Each alias contains a list of motors. Thus, you can similarly retrieve all positions for only the motors of the right leg
Step9: Setting registers
In a similar way that you retrieve values from V-REP, you can set a new target position to a motor.
By sending the following command, you should see the robot turn its head by 90°
Step10: Or you can assign new target positions to a group of motors
Step11: It's important to note the difference between the current and goal position. In particular, when setting a new goal position, it will take time before the motor actually reaches the desired position (see section below for an example).
Thus, in the code below only the second instruction will likely have an effect on the robot
Step12: Note
Step13: Now, we make the head move towards -45° in 2 seconds
Step14: Goto position also comes with a wait argument, so you can easily chain motions (wait=True will wait for the movement to finish before executing the next line, while wait=False will send the new target position command and jump directly to the next instruction)
Step15: You can get and set a new goto_behavior through the property
Step16: Read and Write
Let's prepare another example where we will illustrate the difference between present and goal position by applying a sinusoid on a specific motor.
To make sure the robot is in a stable position, we will reset the simulation. This will re-position the robot in its initial position
Step17: Now let's make the robot's head move
Step18: Now we will use the same code but we will record both the current and goal position
Step19: If we plot the two trajectories, we can clearly see a time shift representing the time needed by the motor to reach the desired position
Step20: Similarly, we can observe a goto position using the minimum jerk mode which shows the smooth acceleration and deceleration
Step21: Tracking objects
Using a V-REP simulated robot, you can easily retrieve an object position and orientation. You just need to know its name in the vrep scene.
Note
Step22: By default, the position is in the V-REP scene referential (the zero is somewhere between Poppy Humanoid's feet). You can use any object as the referential and thus get the left forearm position relative to the head, for instance
Step23: This can be used for discovering a reachable space for instance
Step24: This example could be extended to show a simple method to build an inverse model (you build a table with many goals in the search space associated with the motor command which generated it, and for the inverse model you reproduce the motor command of the stored goal closest to the point you want to reach).
This could be a very good exercise where in a specific notebook you describe a simple approach to build and use approximated inverse models.
If you are interested in this kind of experiments and want to go further, you can see the explauto library. It provides a unified framework for autonomous exploration experiment notably using a Poppy Creature. You will find there learning algorithms that can be used to learn forward (e.g. where the end position of an arm is depending on each joints position) and inverse model (finding the joint angles to locate the end effector at a desired position).
Step25: Using primitives
Pypot also comes with the Primitive abstraction. The idea is to write simple behaviors that can be automatically combined to create more complex behaviors. As the primitive is likely to change in the future, and go toward something better defined, we will only show the very basic stuff you can do using primitives.
In more technical detail, a primitive is simply a thread which has access to all robot sensors and effectors. A primitive manager is used to gather all primitive orders and combine them using a filter (a simple sum by default).
As an example of how to use primitives, we will use one of the predefined primitives, the Sinus
Step26: Primitives are usually instantiated with a robot as the first argument. As Sinus is a LoopPrimitive (i.e. a specific primitive which calls an update method at a predefined frequency), you also need to pass the call frequency as the second argument.
The other arguments, here the motors list, the amplitude and the frequency are specific to the Sinus primitive.
Step27: A primitive can be
Step28: Multiple primitives can be run at the same time
Step29: We will now write a simple motor position logger using a loop primitive
Step30: We will illustrate the combination of primitives by pausing one of them in the middle of the recording
Step31: You can see on the plot above that the two sinusoids are combined from 0 to 10 and from 25 to 30. From 10 to 25 only one of the sinusoids is applied.
Now we stop all running primitives | Python Code:
from pypot.vrep import from_vrep
from poppy.creatures import PoppyHumanoid
Explanation: How-To: Control a Poppy Humanoid in a Simulator using a Python lib: pypot
<img src="image/vrep-header.png" alt="V-REP header" style="height: 400px;"/>
Introduction
In this notebook, we will present how a simulated Poppy Humanoid - an open-source and 3D printed humanoid robot - can be controlled in real time. The robot will be simulated in V-REP a well known and powerful robot simulator. In this tutorial we will show how to install, use, and program the simulated robot in Python. To do that, we will use the pypot library developed to easily control and program Poppy Creatures.
To install the software tools on your machine, we strongly recommend using Anaconda, the scientific Python distribution. It comes with all poppy dependencies pre-compiled and works great on Windows, Mac, and Linux! We advise you to use the 2.7 version.
In more details, we will:
* see how we can create a poppy humanoid in the V-REP simulator
* learn how we can read/send values to the motors
* track one or several Poppy's parts 3D position and orientation (e.g. its head)
* write a simple primitive to design higher level behaviors (e.g. a dance motion)
* see how we can reset and tune the simulation
<img src="https://raw.githubusercontent.com/poppy-project/poppy-humanoid/master/doc/img/poppy-humanoid-github.jpg" alt="Poppy Humanoid" style="height: 500px;"/>
Note: Most of the tutorial is redundant with the ones on how to control a "real" poppy creature. In particular, switching from a real robot to a simulated one (and vice versa) can be done just by changing a single line of code (see the appendix at the end of this notebook). Furthermore, most of the notebook can be applied to any Poppy Creature (and even any "pypot robot"), only the instantiation method will change.
Comments, issues, improvements and updates can be sent directly on the dedicated section of the github issue tracker.
What's needed?
First, if you do not know how to run an IPython Notebook please refer to our readme.
To follow this tutorial you will need:
* a Python interpreter (2.7 is recommended but 3.4 or pypy-2.5 should also work). We strongly recommend using a pre-packaged Python distribution such as Anaconda.
* the V-REP simulator (please directly see v-rep download section for installation details)
* the python pypot library version >= 2.1
* the poppy_humanoid software library >= 1.0
Both V-REP and the pypot/poppy libraries are open source and cross platform.
The pypot and poppy_humanoid libraries can be installed via pip - a tool for installing Python packages (if you have no idea what pip is or how to run the following command, please refer to our readme first :-)):
bash
pip install pypot poppy_humanoid
You can also install them from the source and then use the classical:
bash
python setup.py install
Note: installing poppy_humanoid will also install pypot as it is one of the dependencies.
Checking your installation
To check if everything is installed correctly, you can run the following code. If it runs without raising an error, everything is probably installed correctly:
You can run IPython Notebook code cells by selecting them and clicking the play button or by pressing shift+enter.
End of explanation
from poppy.creatures import PoppyHumanoid
poppy = PoppyHumanoid(simulator='vrep')
Explanation: If you have any installation issue, please check our forum or the github issue tracker.
Instantiating a Poppy Humanoid
In this section, we will see how a Poppy Humanoid can be created into V-REP and how we can connect it to a pypot Robot: i.e. the object used in pypot to represent and communicate with a robot.
First, you will need to launch V-REP (please refer to V-REP documentation if you don't know how to do it). Once it's done you should see something like:
<img src="image/vrep-screenshot.png" alt="V-REP Empty Scene" style="height: 500px;"/>
Instead of loading a specific scene with a Poppy humanoid through the V-REP GUI and then connect to it using pypot, we will directly instantiate the PoppyHumanoid class which will do most of the work for us.
In particular, it will:
* load a V-REP scene with a Poppy Humanoid
* instantiate a pypot Robot and connect it to the simulated Poppy
To do that, we will use the following code:
End of explanation
poppy.motors
Explanation: You should now see a Poppy in your V-REP window:
<img src="image/vrep-poppy.png" alt="V-REP Poppy Humanoid Scene" style="height: 500px;"/>
Note: Be careful that V-REP often displays pop-ups that freeze the communication with pypot. You will have to close them, otherwise a timeout will occur!
Controlling motors
As soon as you have instantiated a Robot - in our case through the PoppyHumanoid class - it is synced with the simulation (or the real robot). This means that values from the V-REP simulation (e.g. limb positions) are retrieved from the simulation and assigned to their equivalent variables by a synchronization loop. Similarly, target variables (e.g. motor goal positions) are sent to V-REP. This synchronization loop runs at 50Hz by default.
To be more clear, when reading a variable from the poppy object you will obtain the last synced value from V-REP, and when setting a new value to a poppy variable it will be automatically sent to V-REP a short time after. You never need to manually sync your instance with the current state of the simulation; it is automatically done by a thread running in the background.
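For example, a minimal round trip (an illustrative sketch; the short sleep just leaves time for at least one synchronization cycle):
python
import time
poppy.head_z.goal_position = 20.        # pushed to V-REP by the synchronization loop
time.sleep(0.1)                         # wait for a sync cycle
print(poppy.head_z.present_position)    # last value read back from V-REP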
Accessing motors registers
Dynamixel motors come with a lot of registers which are used to store the current state of the robot (its current position, temperature, pid gains...) but also where you can write new target values, for instance a new goal position.
In this section we will see how pypot gives you high-level access to the most frequently used registers (pypot's low-level IO gives you access to all registers, but this is beyond the scope of this tutorial).
So, first we will retrieve the list of all available motors. The motors variable contains the list of all motors attached to the current robot.
<img src="https://forum.poppy-project.org/uploads/default/80/267da75bd9feeab2.jpg" alt="Poppy Humnanoid Motors" style="height: 500px;"/>
By default, each motor prints its name, its id, and its current position:
End of explanation
poppy.l_shoulder_y
Explanation: You can access a specific motor directly using its name:
End of explanation
poppy.head_y.present_position
Explanation: If we want to get the current position (in degrees) of a specific motor (e.g. head_y) we can use:
End of explanation
[m.present_position for m in poppy.motors]
Explanation: You can also use the list/dict comprehension to retrieve a specific value for all motors.
A list of all current motor positions:
End of explanation
{m.name: m.present_position for m in poppy.motors}
Explanation: A dictionary of pairs {motor_name: motor_position}:
End of explanation
poppy.alias
Explanation: Motor alias or group of motors
In pypot we use the concept of motor alias which is simply a list of motors grouped together under a specific name. For instance, you can directly access all the motors from the torso using the torso alias. Poppy Humanoid also defines a leg alias, a left arm alias...
Note that motors used above is just one of the predefined motor aliases - the one with all attached motors.
You can retrieve the list of motor aliases available using:
End of explanation
{m.name: m.present_position for m in poppy.r_leg}
Explanation: Each alias contains a list of motors. Thus, you can similarly retrieve all positions for only the motors of the right leg:
End of explanation
poppy.head_z.goal_position = 90.
Explanation: Setting registers
In a similar way that you retrieve values from V-REP, you can set a new target position to a motor.
By sending the following command, you should see the robot turn its head by 90°:
End of explanation
for m in poppy.l_arm:
m.goal_position = 30.
Explanation: Or you can assign new target positions to a group of motors:
End of explanation
poppy.r_shoulder_x.goal_position = 30
poppy.r_shoulder_x.goal_position = -30
Explanation: It's important to note the difference between the current and goal position. In particular, when setting a new goal position, it will take time before the motor actually reaches the desired position (see section below for an example).
Thus, in the code below only the second instruction will likely have an effect on the robot:
End of explanation
poppy.reset_simulation()
Explanation: Note: While the full list of motor registers is available, not all of them have an effect in the V-REP simulation. For instance, modifying the pid of a motor won't affect the simulation.
Currently in the V-REP simulator you can use:
present_position (R): the actual position of the motor (usually from -180° to 180°)
goal_position (RW): the target position of the motor, that is to say the position it will try to reach (same range and units as the present position)
present_load (R): the current load applied on the motor (expressed in % of the max supported load)
torque_limit (RW): the maximum torque that a motor can apply (also expressed in % of the max supported load)
compliant (RW): whether the motor is compliant: whether or not it resists when manually turned
angle_limit (R): the position limits (lower and upper) of the motor. Some motors are restricted to a smaller position range to avoid breaking other parts.
Support for additional features may be added in a future version.
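For example, a minimal sketch using motors that appear elsewhere in this notebook (the values are only illustrative):
python
poppy.l_shoulder_y.compliant = True      # the motor can now be turned freely
poppy.l_shoulder_y.compliant = False     # back to stiff, position-controlled mode
poppy.r_shoulder_x.torque_limit = 50.    # cap the torque at 50% of the maximum
print(poppy.head_y.angle_limit)          # read-only (lower, upper) bounds in degrees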
Goto position
You can also use the goto_position method (both at the robot and motor level) to get more control over the trajectory of a motor. In the examples above, when setting the goal_position the motor will try to reach it as fast as its moving_speed permits.
At the moment, goto_position comes with two behaviors:
* dummy: just adjusts the moving_speed so the goal_position is reached at the predefined timestamp (not always very accurate)
* minjerk: uses a minimum-jerk profile to compute a smoother trajectory.
First, let's restart the simulation:
End of explanation
poppy.head_z.goto_position(-45, 2)
Explanation: Now, we make the head move towards -45° in 2 seconds:
End of explanation
poppy.head_z.goto_position(45, 2, wait=False)
poppy.head_y.goto_position(-30, 2, wait=True)
poppy.head_z.goto_position(0, 2, wait=True)
poppy.head_y.goto_position(20, 1, wait=True)
Explanation: Goto position also comes with a wait argument, so you can easily chain motions (wait=True will wait for the movement to finish before executing the next line, while wait=False will send the new target position command and jump directly to the next instruction):
End of explanation
poppy.head_y.goto_behavior
poppy.head_y.goto_behavior = 'dummy'
Explanation: You can get and set a new goto_behavior through the property:
End of explanation
poppy.reset_simulation()
Explanation: Read and Write
Let's prepare another example where we will illustrate the difference between present and goal position by applying a sinusoid on a specific motor.
To make sure the robot is in a stable position, we will reset the simulation. This will re-position the robot in its initial position:
End of explanation
import time
import math
amp = 30 # in degrees
freq = 0.5 # in Hz
t0 = time.time()
while True:
t = time.time()
# run for 10s
if t - t0 > 10:
break
poppy.head_z.goal_position = amp * math.sin(2 * 3.14 * freq * t)
time.sleep(0.04)
Explanation: Now let's make the robot's head move:
End of explanation
current, goal = [], []
t0 = time.time()
while True:
t = time.time()
# run for 5s
if t - t0 > 5:
break
poppy.head_z.goal_position = amp * math.sin(2 * 3.14 * freq * t)
current.append(poppy.head_z.present_position)
goal.append(poppy.head_z.goal_position)
time.sleep(0.04)
Explanation: Now we will use the same code but we will record both the current and goal position:
End of explanation
%pylab inline
t = linspace(0, 5, len(current))
plot(t, goal)
plot(t, current)
legend(('goal', 'current'))
Explanation: If we plot the two trajectories, we can clearly see a time shift representing the time needed by the motor to reach the desired position:
End of explanation
poppy.l_shoulder_x.goto_behavior = 'minjerk'
poppy.l_shoulder_x.goto_position(120, 5)
pos = []
t0 = time.time()
while time.time() - t0 < 5:
pos.append(poppy.l_shoulder_x.present_position)
time.sleep(0.01)
t = linspace(0, 5, len(pos))
plot(t, pos)
poppy.reset_simulation()
Explanation: Similarly, we can observe a goto position using the minimum jerk mode which shows the smooth acceleration and deceleration:
End of explanation
poppy.get_object_position('l_forearm_visual')
Explanation: Tracking objects
Using a V-REP simulated robot, you can easily retrieve an object position and orientation. You just need to know its name in the vrep scene.
Note: at the moment, to know the names of objects in the V-REP scene, you have to look for them in the V-REP window. Hopefully, in a future version of pypot, you will be able to retrieve them directly.
<img src="image/vrep-finding-names.png" alt="Finding name of objects in a V-REP scene" style="height: 350px;"/>
For instance, to get the 3D position of the left hand, you just have to do:
End of explanation
poppy.get_object_position('l_forearm_visual', 'head_visual')
Explanation: By default, the position is in the V-REP scene referential (the zero is somewhere between Poppy Humanoid's feet). You can use any object as the referential and thus get the left forearm position relative to the head, for instance:
End of explanation
reached_pt = []
for m in poppy.l_arm:
m.goto_behavior = 'minjerk'
# We generate 25 random arm configurations
# and store the reached position of the forearm
for _ in range(25):
poppy.reset_simulation()
# Generate a position by setting random position (within the angle limit) to each joint
# This can be hacked to define other exploration
pos = {m.name: randint(min(m.angle_limit), max(m.angle_limit)) for m in poppy.l_arm}
poppy.goto_position(pos, 2., wait=True)
reached_pt.append(poppy.get_object_position('l_forearm_visual'))
from mpl_toolkits.mplot3d import Axes3D
ax = axes(projection='3d')
ax.scatter(*array(reached_pt).T)
Explanation: This can be used for discovering a reachable space for instance:
End of explanation
poppy.reset_simulation()
Explanation: This example could be extended to show a simple method to build an inverse model (you build a table of many goals in the search space, each associated with the motor command which generated it, and for the inverse model you replay the motor command of the stored goal closest to the point you want to reach).
This could be a very good exercise where, in a specific notebook, you describe a simple approach to build and use approximated inverse models.
If you are interested in this kind of experiment and want to go further, you can look at the explauto library. It provides a unified framework for autonomous exploration experiments, notably using a Poppy Creature. You will find learning algorithms there that can be used to learn forward models (e.g. how the end position of the arm depends on each joint's position) and inverse models (finding the joint angles to place the end effector at a desired position).
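As a minimal sketch of that idea (illustrative only; it assumes the joint configurations used above were also stored in a list called commands, aligned with reached_pt):
python
def nearest_command(target, goals, commands):
    # return the stored motor command whose recorded goal is closest to target
    dists = [sum((g - t) ** 2 for g, t in zip(goal, target)) for goal in goals]
    return commands[dists.index(min(dists))]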
End of explanation
from pypot.primitive.utils import Sinus
Explanation: Using primitives
Pypot also comes with the Primitive abstraction. The idea is to write simple behaviors that can be automatically combined to create more complex behaviors. As the primitive is likely to change in the future, and go toward something better defined, we will only show the very basic stuff you can do using primitives.
In more technical detail, a primitive is simply a thread which has access to all robot sensors and effectors. A primitive manager is used to gather all primitive orders and combine them using a filter (a simple sum by default).
As an example of how to use primitives, we will use one of the predefined primitives, the Sinus:
End of explanation
sin_1 = Sinus(poppy, 25., [poppy.head_z, poppy.head_y], amp=15, freq=.15)
Explanation: Primitives are usually instantiated with a robot as the first argument. As Sinus is a LoopPrimitive (i.e. a specific primitive which calls an update method at a predefined frequency), you also need to pass the call frequency as the second argument.
The other arguments, here the motors list, the amplitude and the frequency are specific to the Sinus primitive.
End of explanation
sin_1.start()
Explanation: A primitive can be:
* started (re-started)
* stopped
* paused
* resumed
By running the following code, you should see both motors of the head perform a sinusoidal motion.
End of explanation
sin_2 = Sinus(poppy, 25., [poppy.head_z, ], amp=8, freq=.5)
sin_2.start()
Explanation: Multiple primitives can be run at the same time:
End of explanation
from pypot.primitive import LoopPrimitive
class MotorPositionLogger(LoopPrimitive):
def __init__(self, robot, refresh_freq, motor):
LoopPrimitive.__init__(self, robot, refresh_freq)
self.motor = getattr(self.robot, motor.name)
# This code will be called each time the primitive is started
def setup(self):
self.positions = []
# This method will be called at the predefined frequency
def update(self):
self.positions.append(self.motor.present_position)
logger = MotorPositionLogger(poppy, 50, poppy.head_z)
Explanation: We will now write a simple motor position logger using a loop primitive:
End of explanation
logger.start()
time.sleep(10)
sin_2.pause()
time.sleep(15)
sin_2.resume()
time.sleep(5)
logger.stop()
pos = logger.positions
plot(linspace(0, 30, len(pos)), pos)
Explanation: We will illustrate the combination of primitives by pausing one of them in the middle of the recording:
End of explanation
for p in poppy.active_primitives:
p.stop()
Explanation: You can see on the plot above that the two sinusoids are combined from 0 to 10 and from 25 to 30. From 10 to 25 only one of the sinusoids is applied.
Now we stop all running primitives:
End of explanation |
8,887 | Given the following text description, write Python code to implement the functionality described.
Description:
Minimum LCM of all subarrays of length at least 2
Python3 program for the above approach ; Function to find LCM of two numbers ; Initialise lcm value ; Check for divisibility of the lcm by a and b ; Function to find the Minimum LCM of all subarrays of length greater than 1 ; Store the minimum LCM ; Traverse the array ; Find LCM of consecutive elements ; Check if the calculated LCM is less than the minLCM then update it ; Print the minimum LCM ; Given array arr [ ] ; Size of the array ; Function call
| Python Code:
import sys
def LCM(a , b ) :
lcm = a if a > b else b
while(True ) :
if(lcm % a == 0 and lcm % b == 0 ) :
break
else :
lcm += 1
return lcm
def findMinLCM(arr , n ) :
minLCM = sys . maxsize
for i in range(n - 1 ) :
val = LCM(arr[i ] , arr[i + 1 ] )
if(val < minLCM ) :
minLCM = val
print(minLCM )
arr =[4 , 8 , 12 , 16 , 20 , 24 ]
n = len(arr )
findMinLCM(arr , n )
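# Illustrative alternative (not part of the original solution): the LCM can be
# computed from the GCD, avoiding the linear search in LCM() above.
from math import gcd
def fast_LCM(a, b):
    return a * b // gcd(a, b)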
|
8,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Van der Pol oscillator
We will look at the second order differential equation (see https
Step1: One way to reduce the order of our second order differential equation is to formulate a system of first order ODEs, using
Step2: Equidistant points are not optimal for plotting this function. Using roots kwarg we can make the solver report the output where either the function value, its first or second derivative is zero. | Python Code:
from __future__ import division, print_function
import itertools
import numpy as np
import sympy as sp
import matplotlib.pyplot as plt
from pyodesys.symbolic import SymbolicSys
sp.init_printing()
%matplotlib inline
print(sp.__version__)
Explanation: Van der Pol oscillator
We will look at the second order differential equation (see https://en.wikipedia.org/wiki/Van_der_Pol_oscillator):
$$
{d^2y_0 \over dx^2}-\mu(1-y_0^2){dy_0 \over dx}+y_0= 0
$$
End of explanation
vdp1 = lambda x, y, p: [y[1], -y[0] + p[0]*y[1]*(1 - y[0]**2)]
y0 = [0, 1]
mu = 2.5
tend = 25
odesys1 = SymbolicSys.from_callback(vdp1, 2, 1, names='y0 y1'.split())
odesys1.exprs
# Let us plot using 20 data points
res1 = odesys1.integrate(np.linspace(0, tend, 20), y0, [mu], name='vode')
res1.plot()
print(res1.yout.shape)
# Let us interpolate between data points
res2 = odesys1.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode', nderiv=1)
res2.plot(m_lim=21)
print(res2.yout.shape)
odesys1.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode', nderiv=2)
xplt, yplt = odesys1.plot_result(m_lim=21, interpolate=30)
print(odesys1._internal[1].shape, yplt.shape)
Explanation: One way to reduce the order of our second order differential equation is to formulate a system of first order ODEs, using:
$$ y_1 = \dot y_0 $$
which gives us:
$$
\begin{cases}
\dot y_0 = y_1 \\
\dot y_1 = \mu(1-y_0^2) y_1-y_0
\end{cases}
$$
Let's call this system of ordinary differential equations vdp1:
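As a quick, purely illustrative cross-check (assuming SciPy is available), the same first-order system can be integrated with scipy:
python
from scipy.integrate import solve_ivp
rhs = lambda t, y, mu=2.5: [y[1], -y[0] + mu * y[1] * (1 - y[0]**2)]
sol = solve_ivp(rhs, (0, 25), [0, 1], max_step=0.1)
print(sol.y[:, -1])   # state at t = 25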
End of explanation
odesys2 = SymbolicSys.from_other(odesys1, roots=odesys1.exprs + (odesys1.dep[0],))
# We could also add a higher derivative: tuple(odesys1.get_jac().dot(odesys1.exprs)))
# Let us plot using 20 data points
res2 = odesys2.integrate(np.linspace(0, tend, 20), y0, [mu], integrator='cvode',
nderiv=1, atol=1e-4, rtol=1e-4)
xout, yout, info = res2
xplt, yplt = odesys2.plot_result(m_lim=21, interpolate=30, indices=[0])
xroots, yroots = info['roots_output'][0], info['roots_output'][1][:, 0]
plt.plot(xroots, yroots, 'bd')
print(odesys2._internal[1].shape, yplt.shape, xroots.size)
odesys2.roots
res2.plot(indices=[0])
plt.plot(xplt, [res2.at(_)[0][0, 0] for _ in xplt])
res1.plot(indices=[0])
plt.plot(xplt, [res1.at(_, use_deriv=True)[0][0] for _ in xplt])
plt.plot(xplt, [res1.at(_, use_deriv=False)[0][0] for _ in xplt])
Explanation: Equidistant points are not optimal for plotting this function. Using roots kwarg we can make the solver report the output where either the function value, its first or second derivative is zero.
End of explanation |
8,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A USA Today article from 2006 includes this sentence
Step1: The average amount of debt is approximately 35,868 dollars, but this will take around 10 years to pay off given that interest is compounded annually. Debt for a low income bracket is on average 8K higher than the average (Forbes). This accounts even for people whose parents can afford to pay for college but may choose not to. In most families with higher net income, that is, 200,000 dollars plus, the amount of debt is significantly less (close to zero) since parents pay for most of the tuition.
abc = Table.read_table("debt_amt_distribution2014.csv")
abc
def replace(x):
return int(x.replace(",", ""))
bcd = abc.apply(replace, 1)
bcd
#Upper bound is arbitrarily defined based on scale from 100000 to 150000 and 150000 to 200000
new_table_debt = Table().with_columns("Balance 2014 (under $ given)", make_array(5000, 10000, 25000, 50000, 75000, 100000, 150000, 200000, 350000),
"Number of Borrowers", bcd)
new_table_debt
sum(new_table_debt[0] * new_table_debt[1])/sum(new_table_debt[1])
Explanation: A USA Today article from 2006 includes this sentence: “Since 1970, the percentage of people ages 18 to 34 [in the United States] who live at home with their family increased 48%, from 12.5 million to 18.6 million, the Census Bureau says.”
The changes in the US population are relevant to the data in the article since the number of young adults living at home increasing by 48% is to be expected with an increase in population. More people living in cities and suburbs drive prices of homes up, making them less affordable for people starting out their careers. Also, even though family units are becoming smaller, a high influx of immigrants from Asian countries, Latin America, and other places has driven the size of the population up, and in most of those cultures, young adults tend to live at home for a longer period of time, which also affects the percentage increase in the number of people living at home between ages 18 and 34.
End of explanation
table_num_cred_used = Table.read_table("num_data_CRC.csv")
table_num_cred_used.where("group", "Seasonally Adjusted").where("month", are.between(84, 132)).plot("month", "num")
table_vol_cards = Table.read_table("vol_data_CRC.csv")
table_vol_cards.where("group", "Seasonally Adjusted").where("month", are.between(84, 132)).plot("month", "num")
Explanation: The average amount of debt is approximately 35,868 dollars, but this will take around 10 years to pay off given that interest is compounded annually. Debt for a low income bracket is on average 8K higher than the average (Forbes). This accounts even for people whose parents can afford to pay for college but may choose not to. In most families with higher net income, that is, 200,000 dollars plus, the amount of debt is significantly less (close to zero) since parents pay for most of the tuition.
End of explanation |
8,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create A Priority Queue Object
Step2: Add Items To Queue
Step3: Retrieve Items From Queue By Priority | Python Code:
import heapq
Explanation: Title: Priority Queues
Slug: priority_queues
Summary: Priority Queues Using Python.
Date: 2017-02-02 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Preliminaries
End of explanation
# Create a priority queue class (wraps a heapq-based list)
class priority_queue:
# Initialize the instance
def __init__(self):
# Create a list to use as the queue
self._queue = []
# Create an index to use as ordering
self._index = 0
# Create a function to add a task to the queue
def add_task(self, item, priority):
# Push the arguments to the _queue using a heap
heapq.heappush(self._queue, (-priority, self._index, item))
# Add one to the index
self._index += 1
# Create a function to get the next item from the queue
def next_task(self):
# Return the next item in the queue
return heapq.heappop(self._queue)[-1]
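# Design note: the priority is negated before pushing, so the highest numeric
# priority is popped first (heapq implements a min-heap), and self._index breaks
# ties so that items with equal priority come out in insertion (FIFO) order.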
# Create a priority queue called task_list
task_list = priority_queue()
Explanation: Create A Priority Queue Object
End of explanation
# Add an item to the queue
task_list.add_task('Clean Dishes', 1)
# Add an item to the queue
task_list.add_task('Wash Car', 2)
# Add an item to the queue
task_list.add_task('Walk Dog', 3)
Explanation: Add Items To Queue
End of explanation
# Retrieve items from the queue
task_list.next_task()
# Retrieve items from the queue
task_list.next_task()
# Retrieve items from the queue
task_list.next_task()
Explanation: Retrieve Items From Queue By Priority
End of explanation |
8,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Boosting a decision stump
The goal of this notebook is to implement your own boosting module.
Brace yourselves! This is going to be a fun and challenging assignment.
Use SFrames to do some feature engineering.
Modify the decision trees to incorporate weights.
Implement Adaboost ensembling.
Use your implementation of Adaboost to train a boosted decision stump ensemble.
Evaluate the effect of boosting (adding more decision stumps) on performance of the model.
Explore the robustness of Adaboost to overfitting.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
Step1: Getting the data ready
We will be using the same LendingClub dataset as in the previous assignment.
Step2: Extracting the target and the feature columns
We will now repeat some of the feature processing steps that we saw in the previous assignment
Step3: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.
Step4: Note
Step5: Let's see what the feature columns look like now
Step6: Train-test split
We split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
Step7: Weighted decision trees
Let's modify our decision tree code from Module 5 to support weighting of individual data points.
Weighted error definition
Consider a model with $N$ data points with
Step8: Checkpoint
Step9: Recall that the classification error is defined as follows
Step10: Checkpoint
Step11: Note. If you get an exception along the lines of "the logical filter has different size than the array", try upgrading your GraphLab Create installation to 1.8.3 or newer.
Very Optional. Relationship between weighted error and weight of mistakes
By definition, the weighted error is the weight of mistakes divided by the weight of all data points, so
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}.
$$
In the code above, we obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$ from the two weights of mistakes from both sides, $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}})$ and $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})$. First, notice that the overall weight of mistakes $\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ can be broken into two weights of mistakes over either side of the split
Step12: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions
Step13: Here is a recursive function to count the nodes in your tree
Step14: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step15: Let us take a quick look at what the trained tree is like. You should get something that looks like the following
{'is_leaf'
Step16: Making predictions with a weighted decision tree
We give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.
Step17: Evaluating the tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows
Step18: Example
Step19: Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
Step20: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data
Step21: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
So, what does this mean?
* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.
* The points with zero weights are basically ignored during training.
Quiz Question
Step22: Checking your Adaboost code
Train an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters
Step23: Here is what the first stump looks like
Step24: Here is what the next stump looks like
Step25: If your Adaboost is correctly implemented, the following things should be true
Step26: Making predictions
Recall from the lecture that in order to make predictions, we use the following formula
Step27: Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble
Step28: Quiz Question
Step29: Computing training error at the end of each iteration
Now, we will compute the classification error on the train_data and see how it is reduced as trees are added.
Step30: Visualizing training error vs number of iterations
We have provided you with a simple code snippet that plots classification error with the number of iterations.
Step31: Quiz Question
Step32: Visualize both the training and test errors
Now, let us plot the training & test error with the number of iterations. | Python Code:
import graphlab
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Boosting a decision stump
The goal of this notebook is to implement your own boosting module.
Brace yourselves! This is going to be a fun and challenging assignment.
Use SFrames to do some feature engineering.
Modify the decision trees to incorporate weights.
Implement Adaboost ensembling.
Use your implementation of Adaboost to train a boosted decision stump ensemble.
Evaluate the effect of boosting (adding more decision stumps) on performance of the model.
Explore the robustness of Adaboost to overfitting.
Let's get started!
Fire up GraphLab Create
Make sure you have the latest version of GraphLab Create (1.8.3 or newer). Upgrade by
pip install graphlab-create --upgrade
See this page for detailed instructions on upgrading.
End of explanation
loans = graphlab.SFrame('lending-club-data.gl/')
len(loans)
Explanation: Getting the data ready
We will be using the same LendingClub dataset as in the previous assignment.
End of explanation
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans.remove_column('bad_loans')
target = 'safe_loans'
loans = loans[features + [target]]
Explanation: Extracting the target and the feature columns
We will now repeat some of the feature processing steps that we saw in the previous assignment:
First, we re-assign the target to have +1 as a safe (good) loan, and -1 as a risky (bad) loan.
Next, we select four categorical features:
1. grade of the loan
2. the length of the loan term
3. the home ownership status: own, mortgage, rent
4. number of years of employment.
End of explanation
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
risky_loans = risky_loans_raw
safe_loans = safe_loans_raw.sample(percentage, seed=1)
loans_data = risky_loans_raw.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
Explanation: Subsample dataset to make sure classes are balanced
Just as we did in the previous assignment, we will undersample the larger class (safe loans) in order to balance out our dataset. This means we are throwing away many data points. We use seed=1 so everyone gets the same results.
End of explanation
loans_data = risky_loans.append(safe_loans)
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
Explanation: Note: There are many approaches for dealing with imbalanced data, including some where we modify the learning algorithm. These approaches are beyond the scope of this course, but some of them are reviewed in this paper. For this assignment, we use the simplest possible approach, where we subsample the overly represented class to get a more balanced dataset. In general, and especially when the data is highly imbalanced, we recommend using more advanced methods.
Transform categorical data into binary features
In this assignment, we will work with binary decision trees. Since all of our features are currently categorical features, we want to turn them into binary features using 1-hot encoding.
We can do so with the following code block (see the first assignments for more details):
End of explanation
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
Explanation: Let's see what the feature columns look like now:
End of explanation
train_data, test_data = loans_data.random_split(0.8, seed=1)
Explanation: Train-test split
We split the data into training and test sets with 80% of the data in the training set and 20% of the data in the test set. We use seed=1 so that everyone gets the same result.
End of explanation
def intermediate_node_weighted_mistakes(labels_in_node, data_weights):
# Sum the weights of all entries with label +1
total_weight_positive = sum(data_weights[labels_in_node == +1])
# Weight of mistakes for predicting all -1's is equal to the sum above
### YOUR CODE HERE
weighted_mistakes_all_negative = total_weight_positive
# Sum the weights of all entries with label -1
### YOUR CODE HERE
total_weight_negative = sum(data_weights[labels_in_node == -1])
# Weight of mistakes for predicting all +1's is equal to the sum above
### YOUR CODE HERE
weighted_mistakes_all_positive = total_weight_negative
# Return the tuple (weight, class_label) representing the lower of the two weights
# class_label should be an integer of value +1 or -1.
# If the two weights are identical, return (weighted_mistakes_all_positive,+1)
### YOUR CODE HERE
if weighted_mistakes_all_positive >= weighted_mistakes_all_negative:
return (weighted_mistakes_all_negative,-1)
else:
return (weighted_mistakes_all_positive,+1)
Explanation: Weighted decision trees
Let's modify our decision tree code from Module 5 to support weighting of individual data points.
Weighted error definition
Consider a model with $N$ data points with:
* Predictions $\hat{y}_1 ... \hat{y}_n$
* Target $y_1 ... y_n$
* Data point weights $\alpha_1 ... \alpha_n$.
Then the weighted error is defined by:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i}
$$
where $1[y_i \neq \hat{y_i}]$ is an indicator function that is set to $1$ if $y_i \neq \hat{y_i}$.
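For instance, a tiny worked example (purely illustrative): with $y = [+1, -1, +1]$, $\hat{y} = [+1, +1, -1]$ and $\alpha = [0.5, 1.0, 2.0]$, the mistakes are the last two points, so the weighted error is $(1.0 + 2.0) / (0.5 + 1.0 + 2.0) \approx 0.857$. The same value in plain Python:
python
y, y_hat, alpha = [1, -1, 1], [1, 1, -1], [0.5, 1.0, 2.0]
weighted_error = sum(a for a, yi, yh in zip(alpha, y, y_hat) if yi != yh) / sum(alpha)
print(weighted_error)   # 0.857...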
Write a function to compute weight of mistakes
Write a function that calculates the weight of mistakes for making the "weighted-majority" predictions for a dataset. The function accepts two inputs:
* labels_in_node: Targets $y_1 ... y_n$
* data_weights: Data point weights $\alpha_1 ... \alpha_n$
We are interested in computing the (total) weight of mistakes, i.e.
$$
\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}].
$$
This quantity is analogous to the number of mistakes, except that each mistake now carries different weight. It is related to the weighted error in the following way:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}
$$
The function intermediate_node_weighted_mistakes should first compute two weights:
* $\mathrm{WM}_{-1}$: weight of mistakes when all predictions are $\hat{y}_i = -1$ i.e. $\mathrm{WM}(\mathbf{\alpha}, \mathbf{-1})$
* $\mathrm{WM}_{+1}$: weight of mistakes when all predictions are $\hat{y}_i = +1$ i.e. $\mathrm{WM}(\mathbf{\alpha}, \mathbf{+1})$
where $\mathbf{-1}$ and $\mathbf{+1}$ are vectors where all values are -1 and +1 respectively.
After computing $\mathrm{WM}_{-1}$ and $\mathrm{WM}_{+1}$, the function intermediate_node_weighted_mistakes should return the lower of the two weights of mistakes, along with the class associated with that weight. We have provided a skeleton for you with YOUR CODE HERE to be filled in several places.
End of explanation
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
example_data_weights = graphlab.SArray([1., 2., .5, 1., 1.])
if intermediate_node_weighted_mistakes(example_labels, example_data_weights) == (2.5, -1):
print 'Test passed!'
else:
print 'Test failed... try again!'
Explanation: Checkpoint: Test your intermediate_node_weighted_mistakes function, run the following cell:
End of explanation
# If the data is identical in each feature, this function should return None
def best_splitting_feature(data, features, target, data_weights):
# These variables will keep track of the best feature and the corresponding error
best_feature = None
best_error = float('+inf')
num_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
# The right split will have all data points where the feature value is 1
left_split = data[data[feature] == 0]
right_split = data[data[feature] == 1]
# Apply the same filtering to data_weights to create left_data_weights, right_data_weights
## YOUR CODE HERE
left_data_weights = data_weights[data[feature] == 0]
right_data_weights = data_weights[data[feature] == 1]
# DIFFERENT HERE
# Calculate the weight of mistakes for left and right sides
## YOUR CODE HERE
left_weighted_mistakes, left_class = intermediate_node_weighted_mistakes(left_split[target], left_data_weights)
right_weighted_mistakes, right_class = intermediate_node_weighted_mistakes(right_split[target], right_data_weights)
# DIFFERENT HERE
# Compute weighted error by computing
# ( [weight of mistakes (left)] + [weight of mistakes (right)] ) / [total weight of all data points]
## YOUR CODE HERE
error = (left_weighted_mistakes + right_weighted_mistakes) / (sum(left_data_weights) + sum(right_data_weights))
# If this is the best error we have found so far, store the feature and the error
if error < best_error:
best_feature = feature
best_error = error
# Return the best feature we found
return best_feature
Explanation: Recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}}
$$
Quiz Question: If we set the weights $\mathbf{\alpha} = 1$ for all data points, how is the weight of mistakes $\mbox{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ related to the classification error?
Function to pick best feature to split on
We continue modifying our decision tree code from the earlier assignment to incorporate weighting of individual data points. The next step is to pick the best feature to split on.
The best_splitting_feature function is similar to the one from the earlier assignment with two minor modifications:
1. The function best_splitting_feature should now accept an extra parameter data_weights to take account of weights of data points.
2. Instead of computing the number of mistakes in the left and right side of the split, we compute the weight of mistakes for both sides, add up the two weights, and divide it by the total weight of the data.
Complete the following function. Comments starting with DIFFERENT HERE mark the sections where the weighted version differs from the original implementation.
End of explanation
example_data_weights = graphlab.SArray(len(train_data)* [1.5])
if best_splitting_feature(train_data, features, target, example_data_weights) == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
Explanation: Checkpoint: Now, we have another checkpoint to make sure you are on the right track.
End of explanation
def create_leaf(target_values, data_weights):
# Create a leaf node
leaf = {'splitting_feature' : None,
'is_leaf': True}
# Compute the weight of mistakes.
weighted_error, best_class = intermediate_node_weighted_mistakes(target_values, data_weights)
# Store the predicted class (1 or -1) in leaf['prediction']
leaf['prediction'] = best_class
return leaf
Explanation: Note. If you get an exception along the lines of "the logical filter has different size than the array", try upgrading your GraphLab Create installation to 1.8.3 or newer.
Very Optional. Relationship between weighted error and weight of mistakes
By definition, the weighted error is the weight of mistakes divided by the weight of all data points, so
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}}) = \frac{\sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]}{\sum_{i=1}^{n} \alpha_i} = \frac{\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\sum_{i=1}^{n} \alpha_i}.
$$
In the code above, we obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$ from the two weights of mistakes from both sides, $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}})$ and $\mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})$. First, notice that the overall weight of mistakes $\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})$ can be broken into two weights of mistakes over either side of the split:
$$
\mathrm{WM}(\mathbf{\alpha}, \mathbf{\hat{y}})
= \sum_{i=1}^{n} \alpha_i \times 1[y_i \neq \hat{y_i}]
= \sum_{\mathrm{left}} \alpha_i \times 1[y_i \neq \hat{y_i}]
+ \sum_{\mathrm{right}} \alpha_i \times 1[y_i \neq \hat{y_i}] \\
= \mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})
$$
We then divide through by the total weight of all data points to obtain $\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})$:
$$
\mathrm{E}(\mathbf{\alpha}, \mathbf{\hat{y}})
= \frac{\mathrm{WM}(\mathbf{\alpha}_{\mathrm{left}}, \mathbf{\hat{y}}_{\mathrm{left}}) + \mathrm{WM}(\mathbf{\alpha}_{\mathrm{right}}, \mathbf{\hat{y}}_{\mathrm{right}})}{\sum_{i=1}^{n} \alpha_i}
$$
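A quick numeric sanity check of this decomposition (purely illustrative, using plain Python lists):
python
alpha = [0.5, 1.0, 2.0, 1.5]
y     = [ 1,  -1,   1,  -1]
y_hat = [ 1,   1,  -1,  -1]
wm = lambda a, yy, yh: sum(ai for ai, yi, hi in zip(a, yy, yh) if yi != hi)
left, right = slice(0, 2), slice(2, 4)    # pretend the first two points go left
total = wm(alpha, y, y_hat)
split = wm(alpha[left], y[left], y_hat[left]) + wm(alpha[right], y[right], y_hat[right])
print(total == split)        # True
print(split / sum(alpha))    # the weighted error E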
Building the tree
With the above functions implemented correctly, we are now ready to build our decision tree. Recall from the previous assignments that each node in the decision tree is represented as a dictionary which contains the following keys:
{
'is_leaf' : True/False.
'prediction' : Prediction at the leaf node.
'left' : (dictionary corresponding to the left tree).
'right' : (dictionary corresponding to the right tree).
'features_remaining' : List of features that are possible splits.
}
Let us start with a function that creates a leaf node given a set of target values:
End of explanation
def weighted_decision_tree_create(data, features, target, data_weights, current_depth = 1, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1. Error is 0.
if intermediate_node_weighted_mistakes(target_values, data_weights)[0] <= 1e-15:
print "Stopping condition 1 reached."
return create_leaf(target_values, data_weights)
# Stopping condition 2. No more features.
if remaining_features == []:
print "Stopping condition 2 reached."
return create_leaf(target_values, data_weights)
# Additional stopping condition (limit tree depth)
if current_depth > max_depth:
print "Reached maximum depth. Stopping for now."
return create_leaf(target_values, data_weights)
# If all the datapoints are the same, splitting_feature will be None. Create a leaf
splitting_feature = best_splitting_feature(data, features, target, data_weights)
remaining_features.remove(splitting_feature)
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
left_data_weights = data_weights[data[splitting_feature] == 0]
right_data_weights = data_weights[data[splitting_feature] == 1]
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target], data_weights)
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target], data_weights)
# Repeat (recurse) on left and right subtrees
left_tree = weighted_decision_tree_create(
left_split, remaining_features, target, left_data_weights, current_depth + 1, max_depth)
right_tree = weighted_decision_tree_create(
right_split, remaining_features, target, right_data_weights, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
Explanation: We provide a function that learns a weighted decision tree recursively and implements 3 stopping conditions:
1. All data points in a node are from the same class.
2. No more features to split on.
3. Stop growing the tree when the tree depth reaches max_depth.
End of explanation
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
Explanation: Here is a recursive function to count the nodes in your tree:
End of explanation
example_data_weights = graphlab.SArray([1.0 for i in range(len(train_data))])
small_data_decision_tree = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
if count_nodes(small_data_decision_tree) == 7:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found:', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there: 7'
Explanation: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
End of explanation
small_data_decision_tree
Explanation: Let us take a quick look at what the trained tree is like. You should get something that looks like the following
{'is_leaf': False,
'left': {'is_leaf': False,
'left': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
'prediction': None,
'right': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
'splitting_feature': 'grade.A'
},
'prediction': None,
'right': {'is_leaf': False,
'left': {'is_leaf': True, 'prediction': 1, 'splitting_feature': None},
'prediction': None,
'right': {'is_leaf': True, 'prediction': -1, 'splitting_feature': None},
'splitting_feature': 'grade.D'
},
'splitting_feature': 'term. 36 months'
}
End of explanation
def classify(tree, x, annotate = False):
# If the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# Split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
Explanation: Making predictions with a weighted decision tree
We give you a function that classifies one data point. It can also return the probability if you want to play around with that as well.
End of explanation
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error
return (prediction != data[target]).sum() / float(len(data))
evaluate_classification_error(small_data_decision_tree, test_data)
Explanation: Evaluating the tree
Now, we will write a function to evaluate a decision tree by computing the classification error of the tree on the given dataset.
Again, recall that the classification error is defined as follows:
$$
\mbox{classification error} = \frac{\mbox{# mistakes}}{\mbox{# all data points}}
$$
The function called evaluate_classification_error takes in as input:
1. tree (as described above)
2. data (an SFrame)
The function does not change because of adding data point weights.
End of explanation
# Assign weights
example_data_weights = graphlab.SArray([1.] * 10 + [0.]*(len(train_data) - 20) + [1.] * 10)
# Train a weighted decision tree model.
small_data_decision_tree_subset_20 = weighted_decision_tree_create(train_data, features, target,
example_data_weights, max_depth=2)
Explanation: Example: Training a weighted decision tree
To build intuition on how weighted data points affect the tree being built, consider the following:
Suppose we only care about making good predictions for the first 10 and last 10 items in train_data, we assign weights:
* 1 to the last 10 items
* 1 to the first 10 items
* and 0 to the rest.
Let us fit a weighted decision tree with max_depth = 2.
End of explanation
subset_20 = train_data.head(10).append(train_data.tail(10))
evaluate_classification_error(small_data_decision_tree_subset_20, subset_20)
Explanation: Now, we will compute the classification error on the subset_20, i.e. the subset of data points whose weight is 1 (namely the first and last 10 data points).
End of explanation
evaluate_classification_error(small_data_decision_tree_subset_20, train_data)
Explanation: Now, let us compare the classification error of the model small_data_decision_tree_subset_20 on the entire test set train_data:
End of explanation
from math import log
from math import exp
def adaboost_with_tree_stumps(data, features, target, num_tree_stumps):
# start with unweighted data
alpha = graphlab.SArray([1.]*len(data))
weights = []
tree_stumps = []
target_values = data[target]
for t in xrange(num_tree_stumps):
print '====================================================='
print 'Adaboost Iteration %d' % t
print '====================================================='
# Learn a weighted decision tree stump. Use max_depth=1
tree_stump = weighted_decision_tree_create(data, features, target, data_weights=alpha, max_depth=1)
tree_stumps.append(tree_stump)
# Make predictions
predictions = data.apply(lambda x: classify(tree_stump, x))
# Produce a Boolean array indicating whether
# each data point was correctly classified
is_correct = predictions == target_values
is_wrong = predictions != target_values
# Compute weighted error
# YOUR CODE HERE
weighted_error = sum(alpha[is_wrong]) / sum(alpha)
# Compute model coefficient using weighted error
# YOUR CODE HERE
weight = 0.5*log((1 - weighted_error) / weighted_error)
weights.append(weight)
# Adjust weights on data point
adjustment = is_correct.apply(lambda is_correct : exp(-weight) if is_correct else exp(weight))
# Scale alpha by multiplying by adjustment
# Then normalize data points weights
## YOUR CODE HERE
alpha = alpha * adjustment
alpha = alpha/sum(alpha)
return weights, tree_stumps
Explanation: The model small_data_decision_tree_subset_20 performs a lot better on subset_20 than on train_data.
So, what does this mean?
* The points with higher weights are the ones that are more important during the training process of the weighted decision tree.
* The points with zero weights are basically ignored during training.
Quiz Question: Will you get the same model as small_data_decision_tree_subset_20 if you trained a decision tree with only the 20 data points with non-zero weights from the set of points in subset_20?
Implementing your own Adaboost (on decision stumps)
Now that we have a weighted decision tree working, it takes only a bit of work to implement Adaboost. For the sake of simplicity, let us stick with decision tree stumps by training trees with max_depth=1.
Recall from the lecture the procedure for Adaboost:
1. Start with unweighted data with $\alpha_j = 1$
2. For t = 1,...T:
* Learn $f_t(x)$ with data weights $\alpha_j$
  * Compute coefficient $\hat{w}_t$:
     $$\hat{w}_t = \frac{1}{2}\ln{\left(\frac{1- \mbox{E}(\mathbf{\alpha}, \mathbf{\hat{y}})}{\mbox{E}(\mathbf{\alpha}, \mathbf{\hat{y}})}\right)}$$
  * Re-compute weights $\alpha_j$:
     $$\alpha_j \gets \begin{cases}
     \alpha_j \exp{(-\hat{w}_t)} & \text{ if }f_t(x_j) = y_j\\
     \alpha_j \exp{(\hat{w}_t)} & \text{ if }f_t(x_j) \neq y_j
     \end{cases}$$
  * Normalize weights $\alpha_j$:
     $$\alpha_j \gets \frac{\alpha_j}{\sum_{i=1}^{N}{\alpha_i}} $$
Complete the skeleton for the following code to implement adaboost_with_tree_stumps. Fill in the places with YOUR CODE HERE.
End of explanation
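Before filling in the skeleton, it can help to see one update with concrete numbers (hypothetical values, not taken from the lending data): a stump with weighted error 0.3 gets a positive coefficient, correctly classified points are down-weighted and mistakes are up-weighted.
# illustrative numbers only
weighted_error = 0.3
weight = 0.5 * log((1 - weighted_error) / weighted_error)
print weight                       # ~0.424
print exp(-weight), exp(weight)    # ~0.655 for correct points, ~1.528 for mistakes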
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features, target, num_tree_stumps=2)
def print_stump(tree):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('.')
print ' root'
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0]{1}[{0} == 1] '.format(split_name, ' '*(27-len(split_name)))
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
Explanation: Checking your Adaboost code
Train an ensemble of two tree stumps and see which features those stumps split on. We will run the algorithm with the following parameters:
* train_data
* features
* target
* num_tree_stumps = 2
End of explanation
print_stump(tree_stumps[0])
Explanation: Here is what the first stump looks like:
End of explanation
print_stump(tree_stumps[1])
print stump_weights
Explanation: Here is what the next stump looks like:
End of explanation
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data, features,
target, num_tree_stumps=10)
Explanation: If your Adaboost is correctly implemented, the following things should be true:
tree_stumps[0] should split on term. 36 months with the prediction -1 on the left and +1 on the right.
tree_stumps[1] should split on grade.A with the prediction -1 on the left and +1 on the right.
Weights should be approximately [0.158, 0.177]
Reminders
- Stump weights ($\mathbf{\hat{w}}$) and data point weights ($\mathbf{\alpha}$) are two different concepts.
- Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
- Data point weights ($\mathbf{\alpha}$) tell you how important each data point is while training a decision stump.
Training a boosted ensemble of 10 stumps
Let us train an ensemble of 10 decision tree stumps with Adaboost. We run the adaboost_with_tree_stumps function with the following parameters:
* train_data
* features
* target
* num_tree_stumps = 10
End of explanation
def predict_adaboost(stump_weights, tree_stumps, data):
scores = graphlab.SArray([0.]*len(data))
for i, tree_stump in enumerate(tree_stumps):
predictions = data.apply(lambda x: classify(tree_stump, x))
# Accumulate predictions on scores array
# YOUR CODE HERE
scores = scores + (predictions * stump_weights[i])
return scores.apply(lambda score : +1 if score > 0 else -1)
predictions = predict_adaboost(stump_weights, tree_stumps, test_data)
accuracy = graphlab.evaluation.accuracy(test_data[target], predictions)
print 'Accuracy of 10-component ensemble = %s' % accuracy
Explanation: Making predictions
Recall from the lecture that in order to make predictions, we use the following formula:
$$
\hat{y} = sign\left(\sum_{t=1}^T \hat{w}_t f_t(x)\right)
$$
We need to do the following things:
- Compute the predictions $f_t(x)$ using the $t$-th decision tree
- Compute $\hat{w}_t f_t(x)$ by multiplying the stump_weights with the predictions $f_t(x)$ from the decision trees
- Sum the weighted predictions over each stump in the ensemble.
Complete the following skeleton for making predictions:
End of explanation
stump_weights
Explanation: Now, let us take a quick look what the stump_weights look like at the end of each iteration of the 10-stump ensemble:
End of explanation
# this may take a while...
stump_weights, tree_stumps = adaboost_with_tree_stumps(train_data,
features, target, num_tree_stumps=30)
Explanation: Quiz Question: Are the weights monotonically decreasing, monotonically increasing, or neither?
Reminder: Stump weights ($\mathbf{\hat{w}}$) tell you how important each stump is while making predictions with the entire boosted ensemble.
Performance plots
In this section, we will try to reproduce some of the performance plots discussed in the lecture.
How does accuracy change with adding stumps to the ensemble?
We will now train an ensemble with:
* train_data
* features
* target
* num_tree_stumps = 30
Once we are done with this, we will then do the following:
* Compute the classification error at the end of each iteration.
* Plot a curve of classification error vs iteration.
First, lets train the model.
End of explanation
error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], train_data)
error = 1.0 - graphlab.evaluation.accuracy(train_data[target], predictions)
error_all.append(error)
print "Iteration %s, training error = %s" % (n, error_all[n-1])
Explanation: Computing training error at the end of each iteration
Now, we will compute the classification error on the train_data and see how it is reduced as trees are added.
End of explanation
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.legend(loc='best', prop={'size':15})
plt.rcParams.update({'font.size': 16})
Explanation: Visualizing training error vs number of iterations
We have provided you with a simple code snippet that plots classification error with the number of iterations.
End of explanation
test_error_all = []
for n in xrange(1, 31):
predictions = predict_adaboost(stump_weights[:n], tree_stumps[:n], test_data)
error = 1.0 - graphlab.evaluation.accuracy(test_data[target], predictions)
test_error_all.append(error)
print "Iteration %s, test error = %s" % (n, test_error_all[n-1])
Explanation: Quiz Question: Which of the following best describes a general trend in accuracy as we add more and more components? Answer based on the 30 components learned so far.
Training error goes down monotonically, i.e. the training error reduces with each iteration but never increases.
Training error goes down in general, with some ups and downs in the middle.
Training error goes up in general, with some ups and downs in the middle.
Training error goes down in the beginning, achieves the best error, and then goes up sharply.
None of the above
Evaluation on the test data
Performing well on the training data is cheating, so let's make sure it works on the test_data as well. Here, we will compute the classification error on the test_data at the end of each iteration.
End of explanation
plt.rcParams['figure.figsize'] = 7, 5
plt.plot(range(1,31), error_all, '-', linewidth=4.0, label='Training error')
plt.plot(range(1,31), test_error_all, '-', linewidth=4.0, label='Test error')
plt.title('Performance of Adaboost ensemble')
plt.xlabel('# of iterations')
plt.ylabel('Classification error')
plt.rcParams.update({'font.size': 16})
plt.legend(loc='best', prop={'size':15})
plt.tight_layout()
Explanation: Visualize both the training and test errors
Now, let us plot the training & test error with the number of iterations.
End of explanation |
8,892 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WMI Eventing
Metadata
| Metadata | Value |
|
Step1: Download & Process Security Dataset
Step2: Analytic I
Look for WMI event filters registered
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Look for WMI event consumers registered
| Data source | Event Provider | Relationship | Event |
|
Step4: Analytic III
Look for WMI consumers binding to filters
| Data source | Event Provider | Relationship | Event |
|
Step5: Analytic IV
Look for events related to the registration of FilterToConsumerBinding
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: WMI Eventing
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/08/10 |
| modification date | 2020/09/20 |
| playbook related | [] |
Hypothesis
Adversaries might be leveraging WMI eventing for persistence in my environment.
Technical Context
WMI is the Microsoft implementation of the Web-Based Enterprise Management (WBEM) and Common Information Model (CIM). Both standards aim to provide an industry-agnostic means of collecting and transmitting information related to any managed component in an enterprise.
An example of a managed component in WMI would be a running process, registry key, installed service, file information, etc.
At a high level, the Microsoft implementation of these standards can be summarized as follows. Managed Components: managed components are represented as WMI objects — class instances representing highly structured operating system data. Microsoft provides a wealth of WMI objects that communicate information related to the operating system. E.g. Win32_Process, Win32_Service, AntiVirusProduct, Win32_StartupCommand, etc.
Offensive Tradecraft
From an offensive perspective WMI has the ability to trigger off nearly any conceivable event, making it a good technique for persistence.
Three requirements
* Filter - An action to trigger off of
* Consumer - An action to take upon triggering the filter
* Binding - Registers a FilterConsumer
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/persistence/SDWIN-190518184306.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip |
Analytics
Initialize Analytics Engine
End of explanation
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/persistence/host/empire_wmi_local_event_subscriptions_elevated_user.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
Explanation: Download & Process Security Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, EventNamespace, Name, Query
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 19
'''
)
df.show(10,False)
Explanation: Analytic I
Look for WMI event filters registered
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi filter | 19 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Name, Type, Destination
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 20
'''
)
df.show(10,False)
Explanation: Analytic II
Look for WMI event consumers registered
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi consumer | 20 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, User, Operation, Consumer, Filter
FROM sdTable
WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
AND EventID = 21
'''
)
df.show(10,False)
Explanation: Analytic III
Look for WMI consumers binding to filters
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-Sysmon/Operational | User created Wmi subscription | 21 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, Message
FROM sdTable
WHERE Channel = "Microsoft-Windows-WMI-Activity/Operational"
AND EventID = 5861
'''
)
df.show(10,False)
Explanation: Analytic IV
Look for events related to the registration of FilterToConsumerBinding
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| WMI object | Microsoft-Windows-WMI-Activity/Operational | Wmi subscription created | 5861 |
End of explanation |
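As an optional roll-up (not part of the original analytics), you can count the three Sysmon WMI events per host to quickly spot machines with unusual subscription activity; this sketch reuses only the table and columns already queried above.
df = spark.sql(
    '''
    SELECT Hostname, EventID, count(*) as registrations
    FROM sdTable
    WHERE Channel = "Microsoft-Windows-Sysmon/Operational"
        AND EventID IN (19, 20, 21)
    GROUP BY Hostname, EventID
    '''
)
df.show(10,False)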
8,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Always start by importing everything in a separate code block. That way, if you forgot something, it's easy to just add it and re-run without it actually doing anything.
Step1: Linear Regression
In the class you've learned the concept of Linear Regression. In this discussion we will see how we can use the mltools package to fit, evaluate and visualize the model.
The code for the linear regression model sits in ml.linear.linearRegress.
Example 1
Step2: Now let's create a test and train data out of it.
Step3: This error is a result of some assumptions in the code. All the mltools package code assumes that the X is a 2d array and not 1d. This is a common assumption and it is also used in more popular packages.
There are many ways to convert from 1d to 2d. The most popular is the atleast_2d
Step4: Another option is to use reshape. Look at the documentation to see what's the -1 is all about.
Step5: Notice that I transformed it after the atleast2d call. That's because it is common to think of X where the rows are the points and the columns are the dimensions. Please play around with those methods to make sure you understand what it's doing.
Now let's continue from where we stopped.
Step6: Now let's see how we can call the linear regression.
Step7: Boom, that's it. But you should go into the code and make sure you understand how it works. You will be asked in exams to derive a linear regression.
Plotting the regression line
Step8: We can also print the learned regression object. This will show us the coefficients for each feature. Notice that the regression model added a constant for us.
Step9: The print above means that the linear regression learned the function Y = 2 + 1 * X.
Example 2
Step10: Now let's repeate everything on the real data.
Step11: Meh, the predicions don't look that great. Why is that?
(Because we're fitting Y=X+c line where it's clear that this data comes from a more complex model.)
So let's fit a more complex model. For that we can use the ml.transform.fpoly method that will convert the features for us.
Step13: Feel free to play around with different degrees and see the differences. You should!
Measuring Prediction Accuracy
In the HW assignment you are required to measure the prediction error using MSE and plot it for different degrees.
Step14: Adding the predicted Yhat to the plot. Notice that it sits on the regression line (as expected).
Step15: Computing the MSE for the different degrees.
Step16: Cross Validation
Let’s now imagine that we do not have access to the target values of the test data we held out in the previous problem, and we wanted to decide on the best polynomial degree.
Cross-validation works by creating many training/validation splits, called folds, and using all of these splits to assess the “out-of-sample” (validation) performance by averaging them. | Python Code:
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import mltools as ml
np.random.seed(0)
%matplotlib inline
Explanation: Always start by importing everything in a separate code block. That way, if you forgot something, it's easy to just add it and re-run without it actually doing anything.
End of explanation
# First we create the "fake data" with xs from 0 to 10 and Y = X + 2
X = np.linspace(0, 10, 50)
Y = np.copy(X) + 2
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(X, Y, s=80, color='blue', alpha=0.75)
ax.set_xlim(-0.2, 10.2)
ax.set_ylim(1.8, 12.2)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
plt.show()
Explanation: Linear Regression
In the class you've learned the concept of Linear Regression. In this discussion we will see how we can use the mltools package to fit, evaluate and visualize the model.
The code for the linear regression model sits in ml.linear.linearRegress.
Example 1: Simple Slope
Starting with a simple example where the linear regression model has to fit a simple slope line.
End of explanation
X, Y = ml.shuffleData(X, Y)
Explanation: Now let's create a test and train data out of it.
End of explanation
_ = np.atleast_2d(X).T
Explanation: This error is a result of some assumptions in the code. All the mltools package code assumes that the X is a 2d array and not 1d. This is a common assumption and it is also used in more popular packages.
There are many ways to convert from 1d to 2d. The most popular is the atleast_2d
End of explanation
X = X.reshape(-1, 1)
Explanation: Another option is to use reshape. Look at the documentation to see what the -1 is all about.
End of explanation
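A quick way to convince yourself the two conversions are equivalent (using a small throwaway array):
tmp = np.arange(4)                      # shape (4,)
print np.atleast_2d(tmp).T.shape        # (4, 1)
print tmp.reshape(-1, 1).shape          # (4, 1); the -1 tells numpy to infer that dimension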
X, Y = ml.shuffleData(X, Y)
Xtr, Xte, Ytr, Yte = ml.splitData(X, Y, 0.75)
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(Xtr, Ytr, s=80, color='blue', alpha=0.75, label='Train')
ax.scatter(Xte, Yte, s=240, marker='*', color='red', alpha=0.75, label='Test')
ax.set_xlim(-0.2, 10.2)
ax.set_ylim(1.8, 12.2)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
# Controlling the size of the legend and the location.
ax.legend(fontsize=30, loc=4)
plt.show()
Explanation: Notice that I transformed it after the atleast2d call. That's because it is common to think of X where the rows are the points and the columns are the dimensions. Please play around with those methods to make sure you understand what it's doing.
Now let's continue from where we stopped.
End of explanation
lr = ml.linear.linearRegress(Xtr, Ytr)
Explanation: Now let's see how we can call the linear regression.
End of explanation
# We start with creating a set of xs on the space we want to predict for.
xs = np.linspace(0, 10, 200)
# Converting to the rate shape
xs = np.atleast_2d(xs).T
# And now the prediction
ys = lr.predict(xs)
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(Xtr, Ytr, s=80, color='blue', alpha=0.75, label='Train')
ax.scatter(Xte, Yte, s=240, marker='*', color='red', alpha=0.75, label='Test')
# Also plotting the regression line
ax.plot(xs, ys, lw=3, color='black', alpha=0.75, label='Prediction')
ax.set_xlim(-0.2, 10.2)
ax.set_ylim(1.8, 12.2)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
# Controlling the size of the legend and the location.
ax.legend(fontsize=30, loc=4)
plt.show()
Explanation: Boom, that's it. But you should go into the code and make sure you understand how it works. You will be asked in exams to derive a linear regression.
Plotting the regression line
End of explanation
print lr
Explanation: We can also print the learned regression object. This will show us the coefficients for each feature. Notice that the regression model added a constant for us.
End of explanation
path_to_file = 'poly_data.txt'
data = np.genfromtxt(path_to_file, delimiter='\t') # Read data from file
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(data[:, 0], data[:, 1], s=80, color='blue', alpha=0.75)
ax.set_xlim(-0.2, 4.3)
ax.set_ylim(-13, 18)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
plt.show()
Explanation: The print above means that the linear regression learned the function Y = 2 + 1 * X.
Example 2: Real Data
That was a toy example; let's look at how this is done on real data. This is what you'll have to do in the HW assignment using the 'curve80.txt' data. We're not going to spoil it here for you, so we're going to use a different data set.
End of explanation
X, Y = np.atleast_2d(data[:, 0]).T, data[:, 1]
X, Y = ml.shuffleData(X, Y)
Xtr, Xte, Ytr, Yte = ml.splitData(X, Y, 0.75)
lr = ml.linear.linearRegress(Xtr, Ytr)
# Make sure you use the currect space.
xs = np.linspace(0, 4.2, 200)
xs = np.atleast_2d(xs).T
ys = lr.predict(xs)
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(Xtr, Ytr, s=80, color='blue', alpha=0.75, label='Train')
ax.scatter(Xte, Yte, s=240, marker='*', color='red', alpha=0.75, label='Test')
# Also plotting the regression line
ax.plot(xs, ys, lw=3, color='black', alpha=0.75, label='Prediction')
ax.set_xlim(-0.2, 4.3)
ax.set_ylim(-13, 18)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
# Controlling the size of the legend and the location.
ax.legend(fontsize=30, loc=0)
plt.show()
Explanation: Now let's repeat everything on the real data.
End of explanation
degree = 12
XtrP = ml.transforms.fpoly(Xtr, degree, False)
lr = ml.linear.linearRegress(XtrP, Ytr)
# Make sure you use the currect space.
xs = np.linspace(0, 4.2, 200)
xs = np.atleast_2d(xs).T
# Notice that we have to transform the predicting xs too.
xsP = ml.transforms.fpoly(xs, degree, False)
ys = lr.predict(xsP)
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(Xtr, Ytr, s=80, color='blue', alpha=0.75, label='Train')
ax.scatter(Xte, Yte, s=240, marker='*', color='red', alpha=0.75, label='Test')
# Also plotting the regression line. in the plotting we plot the xs and not the xsP
ax.plot(xs, ys, lw=3, color='black', alpha=0.75, label='Prediction')
ax.set_xlim(-0.2, 4.3)
ax.set_ylim(-13, 18)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
# Controlling the size of the legend and the location.
ax.legend(fontsize=30, loc=0)
plt.show()
Explanation: Meh, the predictions don't look that great. Why is that?
(Because we're fitting a Y = X + c line when it's clear that this data comes from a more complex model.)
So let's fit a more complex model. For that we can use the ml.transforms.fpoly method that will convert the features for us.
End of explanation
def MSE(y_true, y_hat):
Mock MSE method.
You'll have to fill it in yourself with the true way of computing the MSE.
return np.random.rand() * 1000
# Predicting on the test data - DO NOT FORGET TO TRANSFORM Xte TOO!!!
XteP = ml.transforms.fpoly(Xte, degree, False)
YteHat = lr.predict(XteP)
Explanation: Feel free to play around with different degrees and see the differences. You should!
Measuring Prediction Accuracy
In the HW assignment you are required to measure the prediction error using MSE and plot it for different degrees.
End of explanation
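Since the MSE above is only a mock that returns random values, here is a minimal sketch of what the real computation could look like (you should still derive it yourself for the assignment):
def mse(y_true, y_hat):
    y_true, y_hat = np.asarray(y_true).ravel(), np.asarray(y_hat).ravel()
    return np.mean((y_true - y_hat) ** 2)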
# Plotting the data
f, ax = plt.subplots(1, 1, figsize=(10, 8))
ax.scatter(Xtr, Ytr, s=80, color='blue', alpha=0.75, label='Train')
ax.scatter(Xte, Yte, s=240, marker='*', color='red', alpha=0.75, label='Test')
ax.scatter(Xte, YteHat, s=80, marker='D', color='forestgreen', alpha=0.75, label='Yhat')
# Also plotting the regression line. in the plotting we plot the xs and not the xsP
ax.plot(xs, ys, lw=3, color='black', alpha=0.75, label='Prediction')
ax.set_xlim(-0.2, 4.3)
ax.set_ylim(-13, 18)
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
# Controlling the size of the legend and the location.
ax.legend(fontsize=20, loc=0)
plt.show()
Explanation: Adding the predicted Yhat to the plot. Notice that it sits on the regression line (as expected).
End of explanation
degrees = np.array([2, 4, 6, 8, 10, 20])
mse_error = np.zeros(degrees.shape[0])
for i, degree in enumerate(degrees):
XtrP = ml.transforms.fpoly(Xtr, degree, False)
lr = ml.linear.linearRegress(XtrP, Ytr)
XteP = ml.transforms.fpoly(Xte, degree, False)
YteHat = lr.predict(XteP)
mse_error[i] = MSE(Yte, YteHat)
f, ax = plt.subplots(1, 1, figsize=(10, 8))
# Plotting a line with markers where there's an actual x value.
ax.semilogy(degrees, mse_error, lw=4, marker='d', markersize=20, alpha=0.75, label='MSE ERROR')
ax.set_xlim(1.2, 20.5)
ax.set_ylim(30, 1100)
# Setting the X-ticks manually.
ax.set_xticks(np.arange(2, 21, 2))
ax.set_xticklabels(ax.get_xticks(), fontsize=25)
ax.set_yticklabels(ax.get_yticks(), fontsize=25)
ax.legend(fontsize=20, loc=0)
plt.show()
Explanation: Computing the MSE for the different degrees.
End of explanation
nFolds = 4
f, ax = plt.subplots(2, 2, figsize=(20, 20))
ax = ax.flatten()
for iFold in range(nFolds):
Xti, Xvi, Yti, Yvi = ml.crossValidate(Xtr, Ytr, nFolds, iFold)
ax[iFold].scatter(Xti, Yti, s=80, color='blue', alpha=0.75, label='Train')
ax[iFold].scatter(Xvi, Yvi, s=240, marker='*', color='red', alpha=0.75, label='Test')
ax[iFold].set_xlim(-0.2, 4.3)
ax[iFold].set_ylim(-13, 18)
ax[iFold].set_xticklabels(ax[iFold].get_xticks(), fontsize=25)
ax[iFold].set_yticklabels(ax[iFold].get_yticks(), fontsize=25)
plt.show()
Explanation: Cross Validation
Let’s now imagine that we do not have access to the target values of the test data we held out in the previous problem, and we wanted to decide on the best polynomial degree.
Cross-validation works by creating many training/validation splits, called folds, and using all of these splits to assess the “out-of-sample” (validation) performance by averaging them.
End of explanation |
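As a sketch of how these folds could be used to pick a degree (assuming the same MSE function, degrees array and nFolds as above), average the validation error over the folds for each candidate degree:
cv_error = np.zeros(degrees.shape[0])
for i, degree in enumerate(degrees):
    fold_errors = []
    for iFold in range(nFolds):
        Xti, Xvi, Yti, Yvi = ml.crossValidate(Xtr, Ytr, nFolds, iFold)
        XtiP = ml.transforms.fpoly(Xti, degree, False)
        lr = ml.linear.linearRegress(XtiP, Yti)
        XviP = ml.transforms.fpoly(Xvi, degree, False)
        fold_errors.append(MSE(Yvi, lr.predict(XviP)))
    cv_error[i] = np.mean(fold_errors)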
8,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step12: Module is an abstract class which defines fundamental methods necessary for training a neural network. You do not need to change anything here, just read the comments.
Step19: Sequential container
Define the forward and backward pass procedures.
Step21: Layers
input
Step22: This one is probably the hardest, but like the others it only takes about 5 lines of code in total.
- input
Step23: Implement dropout. The idea and implementation is really simple
Step24: Activation functions
Here's the complete example for the Rectified Linear Unit non-linearity (aka ReLU)
Step25: Implement Leaky Rectified Linear Unit. Experiment with the slope.
Step31: Criterions
Criterions are used to score the model's answers.
Step32: The MSECriterion, which is a basic L2 norm usually used for regression, is implemented here for you.
Step33: Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Even though there is a sum over y (target) in that formula,
remember that targets are one-hot encoded. This fact simplifies the computations a lot. Note, that criterions are the only places, where you divide by batch size. | Python Code:
class Module(object):
def __init__ (self):
self.output = None
self.gradInput = None
self.training = True
Basically, you can think of a module as of a something (black box)
which can process `input` data and produce `ouput` data.
This is like applying a function which is called `forward`:
output = module.forward(input)
The module should be able to perform a backward pass: to differentiate the `forward` function.
More, it should be able to differentiate it if is a part of chain (chain rule).
The latter implies there is a gradient from previous step of a chain rule.
gradInput = module.backward(input, gradOutput)
def forward(self, input):
Takes an input object, and computes the corresponding output of the module.
return self.updateOutput(input)
def backward(self,input, gradOutput):
Performs a backpropagation step through the module, with respect to the given input.
This includes
- computing a gradient w.r.t. `input` (is needed for further backprop),
- computing a gradient w.r.t. parameters (to update parameters while optimizing).
self.updateGradInput(input, gradOutput)
self.accGradParameters(input, gradOutput)
return self.gradInput
def updateOutput(self, input):
Computes the output using the current parameter set of the class and input.
This function returns the result which is stored in the `output` field.
Make sure to both store the data in `output` field and return it.
# The easiest case:
# self.output = input
# return self.output
pass
def updateGradInput(self, input, gradOutput):
Computing the gradient of the module with respect to its own input.
This is returned in `gradInput`. Also, the `gradInput` state variable is updated accordingly.
The shape of `gradInput` is always the same as the shape of `input`.
Make sure to both store the gradients in `gradInput` field and return it.
# The easiest case:
# self.gradInput = gradOutput
# return self.gradInput
pass
def accGradParameters(self, input, gradOutput):
Computing the gradient of the module with respect to its own parameters.
No need to override if module has no parameters (e.g. ReLU).
pass
def zeroGradParameters(self):
Zeroes `gradParams` variable if the module has params.
pass
def getParameters(self):
Returns a list with its parameters.
If the module does not have parameters return empty list.
return []
def getGradParameters(self):
Returns a list with gradients with respect to its parameters.
If the module does not have parameters return empty list.
return []
def training(self):
Sets training mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
self.training = True
def evaluate(self):
Sets evaluation mode for the module.
Training and testing behaviour differs for Dropout, BatchNorm.
self.training = False
def __repr__(self):
        Pretty printing. Should be overridden in every module if you want
        to have a readable description.
return "Module"
Explanation: Module is an abstract class which defines fundamental methods necessary for training a neural network. You do not need to change anything here, just read the comments.
End of explanation
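As a minimal illustration of the interface (not part of the assignment), a module that just passes data through only needs to override updateOutput and updateGradInput:
class Identity(Module):
    def __init__(self):
        super(Identity, self).__init__()
    def updateOutput(self, input):
        self.output = input
        return self.output
    def updateGradInput(self, input, gradOutput):
        self.gradInput = gradOutput
        return self.gradInput
    def __repr__(self):
        return "Identity"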
class Sequential(Module):
This class implements a container, which processes `input` data sequentially.
`input` is processed by each module (layer) in self.modules consecutively.
The resulting array is called `output`.
def __init__ (self):
super(Sequential, self).__init__()
self.modules = []
def add(self, module):
Adds a module to the container.
self.modules.append(module)
def updateOutput(self, input):
Basic workflow of FORWARD PASS:
y_0 = module[0].forward(input)
y_1 = module[1].forward(y_0)
...
output = module[n-1].forward(y_{n-2})
Just write a little loop.
# Your code goes here. ################################################
self.y = [np.array(input)]
for module in self.modules:
self.y.append(module.forward(self.y[-1]))
self.y.pop(0)
self.output = self.y[-1]
return self.output
def backward(self, input, gradOutput):
Workflow of BACKWARD PASS:
g_{n-1} = module[n-1].backward(y_{n-2}, gradOutput)
g_{n-2} = module[n-2].backward(y_{n-3}, g_{n-1})
...
g_1 = module[1].backward(y_0, g_2)
gradInput = module[0].backward(input, g_1)
!!!
            To each module you need to provide the input that it saw during the forward pass;
            it is used while computing gradients.
            Make sure that the input passed to the `i`-th layer is the same input it saw in the forward pass
            (i.e. the output of the previous module), and NOT the `input` to this Sequential module.
!!!
# Your code goes here. ################################################
g = np.array(gradOutput)
self.y = [np.array(input)] + self.y
for i, module in enumerate(reversed(self.modules)):
g = module.backward(self.y[-i - 2], g)
self.gradInput = g
self.y.pop(0)
return self.gradInput
def zeroGradParameters(self):
for module in self.modules:
module.zeroGradParameters()
def getParameters(self):
Should gather all parameters in a list.
return [x.getParameters() for x in self.modules]
def getGradParameters(self):
Should gather all gradients w.r.t parameters in a list.
return [x.getGradParameters() for x in self.modules]
def __repr__(self):
string = "".join([str(x) + '\n' for x in self.modules])
return string
def __getitem__(self,x):
return self.modules.__getitem__(x)
Explanation: Sequential container
Define the forward and backward pass procedures.
End of explanation
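A hypothetical end-to-end sketch of how this container will eventually be used (it relies on the Linear, ReLU, SoftMax and ClassNLLCriterion modules implemented further below): one forward/backward pass followed by a plain gradient-descent step.
net = Sequential()
net.add(Linear(4, 8))
net.add(ReLU())
net.add(Linear(8, 3))
net.add(SoftMax())
criterion = ClassNLLCriterion()
x = np.random.randn(5, 4)
y = np.zeros((5, 3)); y[np.arange(5), np.random.randint(0, 3, 5)] = 1  # one-hot targets
predictions = net.forward(x)
loss = criterion.forward(predictions, y)
net.backward(x, criterion.backward(predictions, y))
learning_rate = 0.1
for params, grads in zip(net.getParameters(), net.getGradParameters()):
    for p, g in zip(params, grads):
        p -= learning_rate * g   # in-place update of the weight arrays
net.zeroGradParameters()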
class Linear(Module):
A module which applies a linear transformation
A common name is fully-connected layer, InnerProductLayer in caffe.
The module should work with 2D input of shape (n_samples, n_feature).
def __init__(self, n_in, n_out):
super(Linear, self).__init__()
# This is a nice initialization
stdv = 1./np.sqrt(n_in)
self.W = np.random.uniform(-stdv, stdv, size = (n_out, n_in))
self.b = np.random.uniform(-stdv, stdv, size = n_out)
self.gradW = np.zeros_like(self.W)
self.gradb = np.zeros_like(self.b)
def updateOutput(self, input):
# Your code goes here. ################################################
self.output = np.add(input.dot(self.W.T), self.b)
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = gradOutput.dot(self.W)
return self.gradInput
def accGradParameters(self, input, gradOutput):
# Your code goes here. ################################################
self.gradW = gradOutput.T.dot(input)
self.gradb = gradOutput.sum(axis=0)
def zeroGradParameters(self):
self.gradW.fill(0)
self.gradb.fill(0)
def getParameters(self):
return [self.W, self.b]
def getGradParameters(self):
return [self.gradW, self.gradb]
def __repr__(self):
s = self.W.shape
q = 'Linear %d -> %d' %(s[1],s[0])
return q
input_dim = 3
output_dim = 2
x = np.random.randn(5, input_dim)
w = np.random.randn(output_dim, input_dim)
b = np.random.randn(output_dim)
dout = np.random.randn(5, output_dim)
linear = Linear(input_dim, output_dim)
def update_W_matrix(new_W):
linear.W = new_W
return linear.forward(x)
def update_bias(new_b):
linear.b = new_b
return linear.forward(x)
dx = linear.backward(x, dout)
dx_num = eval_numerical_gradient_array(lambda x: linear.forward(x), x, dout)
dw_num = eval_numerical_gradient_array(update_W_matrix, w, dout)
db_num = eval_numerical_gradient_array(update_bias, b, dout)
print 'Testing Linear_backward function:'
print 'dx error: ', rel_error(dx_num, dx)
print 'dw error: ', rel_error(dw_num, linear.gradW)
print 'db error: ', rel_error(db_num, linear.gradb)
Explanation: Layers
input: batch_size x n_feats1
output: batch_size x n_feats2
End of explanation
class SoftMax(Module):
def __init__(self):
super(SoftMax, self).__init__()
def updateOutput(self, input):
# start with normalization for numerical stability
self.output = np.subtract(input, input.max(axis=1, keepdims=True))
# Your code goes here. ################################################
self.output = np.exp(self.output)
out_sum = self.output.sum(axis=1, keepdims=True)
self.output = np.divide(self.output, out_sum)
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
batch_size, n_feats = self.output.shape
a = self.output.reshape(batch_size, n_feats, -1)
b = self.output.reshape(batch_size, -1, n_feats)
self.gradInput = np.multiply(gradOutput.reshape(batch_size, -1, n_feats),
np.subtract(np.multiply(np.eye(n_feats), a),
np.multiply(a, b))).sum(axis=2)
return self.gradInput
def __repr__(self):
return "SoftMax"
soft_max = SoftMax()
x = np.random.randn(5, 3)
dout = np.random.randn(5, 3)
dx_numeric = eval_numerical_gradient_array(lambda x: soft_max.forward(x), x, dout)
dx = soft_max.backward(x, dout)
# The error should be around 1e-10
print 'Testing SoftMax grad:'
print 'dx error: ', rel_error(dx_numeric, dx)
Explanation: This one is probably the hardest, but like the others it only takes about 5 lines of code in total.
- input: batch_size x n_feats
- output: batch_size x n_feats
End of explanation
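For reference, the backward pass above uses the standard softmax Jacobian: with $S_i = e^{x_i} / \sum_k e^{x_k}$,
$$\frac{\partial S_i}{\partial x_j} = S_i\left(\delta_{ij} - S_j\right),$$
which is exactly the $\mathrm{diag}(S) - S S^{\top}$ matrix assembled per batch element in updateGradInput.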
class Dropout(Module):
def __init__(self, p=0.5):
super(Dropout, self).__init__()
self.p = p
self.mask = None
def updateOutput(self, input):
# Your code goes here. ################################################
        if self.training:
            # sample a fresh elementwise Bernoulli(p) mask for every batch
            self.mask = np.random.binomial(1, self.p, size=input.shape)
        else:
            # identity transform at test time
            self.mask = np.ones(input.shape)
self.output = np.multiply(input, self.mask)
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = np.multiply(gradOutput, self.mask)
return self.gradInput
def __repr__(self):
return "Dropout"
Explanation: Implement dropout. The idea and implementation is really simple: just multiply the input by a $Bernoulli(p)$ mask.
This is a very cool regularizer. In fact, when you see your net is overfitting try to add more dropout.
While training (self.training == True) it should sample a mask on each iteration (for every batch). When testing this module should implement identity transform i.e. self.output = input.
input: batch_size x n_feats
output: batch_size x n_feats
End of explanation
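A tiny sanity check of the two modes (a sketch, not a required test): in training mode some entries are zeroed, while in evaluation mode the module acts as the identity.
drop = Dropout(p=0.5)
x = np.random.randn(4, 6)
out_train = drop.forward(x)       # training mode: random entries are zeroed
drop.evaluate()                   # switch to evaluation mode
out_eval = drop.forward(x)
print np.allclose(out_eval, x)    # expect True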
class ReLU(Module):
def __init__(self):
super(ReLU, self).__init__()
def updateOutput(self, input):
self.output = np.maximum(input, 0)
return self.output
def updateGradInput(self, input, gradOutput):
self.gradInput = np.multiply(gradOutput , input > 0)
return self.gradInput
def __repr__(self):
return "ReLU"
Explanation: Activation functions
Here's the complete example for the Rectified Linear Unit non-linearity (aka ReLU):
End of explanation
class LeakyReLU(Module):
def __init__(self, slope = 0.03):
super(LeakyReLU, self).__init__()
self.slope = slope
def updateOutput(self, input):
# Your code goes here. ################################################
self.output = input.copy()
self.output[self.output < 0] *= self.slope
return self.output
def updateGradInput(self, input, gradOutput):
# Your code goes here. ################################################
self.gradInput = gradOutput.copy()
self.gradInput[input < 0] *= self.slope
return self.gradInput
def __repr__(self):
return "LeakyReLU"
Explanation: Implement Leaky Rectified Linear Unit. Experiment with the slope.
End of explanation
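An optional numerical check in the same style as the gradient checks above:
leaky_relu = LeakyReLU(slope=0.03)
x = np.random.randn(5, 3)
dout = np.random.randn(5, 3)
dx_numeric = eval_numerical_gradient_array(lambda x: leaky_relu.forward(x), x, dout)
dx = leaky_relu.backward(x, dout)
# the error should again be very small
print 'Testing LeakyReLU grad:'
print 'dx error: ', rel_error(dx_numeric, dx)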
class Criterion(object):
def __init__ (self):
self.output = None
self.gradInput = None
def forward(self, input, target):
Given an input and a target, compute the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateOutput`.
return self.updateOutput(input, target)
def backward(self, input, target):
Given an input and a target, compute the gradients of the loss function
associated to the criterion and return the result.
For consistency this function should not be overrided,
all the code goes in `updateGradInput`.
return self.updateGradInput(input, target)
def updateOutput(self, input, target):
Function to override.
return self.output
def updateGradInput(self, input, target):
Function to override.
return self.gradInput
def __repr__(self):
        Pretty printing. Should be overridden in every module if you want
        to have a readable description.
return "Criterion"
Explanation: Criterions
Criterions are used to score the model's answers.
End of explanation
class MSECriterion(Criterion):
def __init__(self):
super(MSECriterion, self).__init__()
def updateOutput(self, input, target):
        self.output = np.sum(np.power(np.subtract(input, target), 2)) / input.shape[0]
return self.output
def updateGradInput(self, input, target):
self.gradInput = np.subtract(input, target) * 2 / input.shape[0]
return self.gradInput
def __repr__(self):
return "MSECriterion"
Explanation: The MSECriterion, which is a basic L2 norm usually used for regression, is implemented here for you.
End of explanation
class ClassNLLCriterion(Criterion):
def __init__(self):
a = super(ClassNLLCriterion, self)
super(ClassNLLCriterion, self).__init__()
def updateOutput(self, input, target):
# Use this trick to avoid numerical errors
eps = 1e-15
input_clamp = np.clip(input, eps, 1 - eps)
# Your code goes here. ################################################
self.output = -np.sum(np.multiply(target, np.log(input_clamp))) / len(input)
return self.output
def updateGradInput(self, input, target):
# Use this trick to avoid numerical errors
input_clamp = np.maximum(1e-15, np.minimum(input, 1 - 1e-15) )
# Your code goes here. ################################################
        # gradient of -sum(target*log(input))/N w.r.t. the (clamped) input is -target/input/N
        self.gradInput = -np.divide(target, input_clamp) / len(input)
return self.gradInput
def __repr__(self):
return "ClassNLLCriterion"
Explanation: Your task is to implement the ClassNLLCriterion. It should implement multiclass log loss. Even though there is a sum over y (target) in that formula,
remember that targets are one-hot encoded. This fact simplifies the computations a lot. Note that criterions are the only places where you divide by the batch size.
End of explanation |
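For reference, the multiclass log loss referred to above (the formula itself is not reproduced in this notebook) is, for one-hot targets $y$ and predicted probabilities $p$ over a batch of size $N$:
$$L = -\frac{1}{N}\sum_{n=1}^{N}\sum_{c} y_{nc}\,\log p_{nc},$$
so with one-hot targets only the term of the correct class survives, and the gradient with respect to the (clamped) probabilities is $\partial L / \partial p_{nc} = -\,y_{nc} / (N\, p_{nc})$.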
8,895 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Was trying to generate a pivot table with multiple "values" columns. I know I can use aggfunc to aggregate values the way I want to, but what if I don't want to sum or avg both columns but instead I want sum of one column while mean of the other one. So is it possible to do so using pandas? | Problem:
import pandas as pd
import numpy as np
np.random.seed(1)
df = pd.DataFrame({
'A' : ['abc', 'def', 'xyz', 'abc'] * 3,
'B' : ['A', 'B', 'C'] * 4,
'D' : np.random.randn(12),
'E' : np.random.randn(12)
})
def g(df):
return pd.pivot_table(df, values=['D','E'], index=['B'], aggfunc={'D':np.sum, 'E':np.mean})
result = g(df.copy()) |
8,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing tutor-student matching with spiking simulations
Step1: Define target motor programs
Step2: Choose target
Step6: General definitions
Step7: Create default parameters file
Step10: Generate data for figures
Learning curve (blackbox)
Step13: Learning curve blackbox, realistic target
Step16: Learning curve (blackbox), constant inhibition
Step19: Reinforcement example (0 ms)
Step22: Reinforcement example (80 ms)
Step25: Reinforcement example (alpha=10, beta=9, tau=440 ms)
Step26: Make figures
Tutor-student mismatch heatmap and convergence map -- blackbox spiking
The data for this needs to be generated using the summarize.py script from the results of the run_tscale_batch.py script, which is designed to run on a cluster.
Step27: Tutor-student bigger mismatch heatmap and convergence map -- blackbox spiking
The data for this needs to be generated using the summarize.py script from the results of the run_tscale_batch.py script, which is designed to run on a cluster.
Step28: Tutor-student mismatch heatmap and convergence map -- reinforcement
Step29: Spiking example learning curve and raster plots
Step30: Spiking example learning curve and raster plots, realistic target
Step31: Spiking example, constant inhibition, learning curve and raster plots
Step32: Reinforcement example learning curves
Reinforcement learning curve, small tau
Step33: Reinforcement learning curve, long tau
Step34: Reinforcement learning, evolution of synapse sparsity | Python Code:
%matplotlib inline
import matplotlib as mpl
import matplotlib.ticker as mtick
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
plt.rc('text', usetex=True)
plt.rc('font', family='serif', serif='cm')
plt.rcParams['figure.titlesize'] = 10
plt.rcParams['axes.labelsize'] = 8
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['axes.labelpad'] = 3.0
from IPython.display import display, clear_output
from ipywidgets import FloatProgress
# comment out the next line if not working on a retina-display computer
import IPython
IPython.display.set_matplotlib_formats('retina')
import numpy as np
import copy
import time
import os
import cPickle as pickle
import simulation
from basic_defs import *
from helpers import *
Explanation: Testing tutor-student matching with spiking simulations
End of explanation
tmax = 600.0 # duration of motor program (ms)
dt = 0.2 # simulation timestep (ms)
nsteps = int(tmax/dt)
times = np.arange(0, tmax, dt)
# add some noise, but keep things reproducible
np.random.seed(0)
target_complex = 100.0*np.vstack((
np.convolve(np.sin(times/100 + 0.1*np.random.randn(len(times)))**6 +
np.cos(times/150 + 0.2*np.random.randn(len(times)) + np.random.randn())**4,
np.exp(-0.5*np.linspace(-3.0, 3.0, 200)**2)/np.sqrt(2*np.pi)/80, mode='same'),
np.convolve(np.sin(times/110 + 0.15*np.random.randn(len(times)) + np.pi/3)**6 +
np.cos(times/100 + 0.2*np.random.randn(len(times)) + np.random.randn())**4,
np.exp(-0.5*np.linspace(-3.0, 3.0, 200)**2)/np.sqrt(2*np.pi)/80, mode='same'),
))
# or start with something simple: constant target
target_const = np.vstack((70.0*np.ones(len(times)), 50.0*np.ones(len(times))))
# or something simple but not trivial: steps
target_piece = np.vstack((
np.hstack((20.0*np.ones(len(times)/2), 100.0*np.ones(len(times)/2))),
np.hstack((60.0*np.ones(len(times)/2), 30.0*np.ones(len(times)/2)))))
targets = {'complex': target_complex, 'piece': target_piece, 'constant': target_const}
Explanation: Define target motor programs
End of explanation
# choose one target
target_choice = 'complex'
#target_choice = 'constant'
target = copy.copy(targets[target_choice])
# make sure the target smoothly goes to zero at the edges
# this is to match the spiking simulation, which needs some time to ramp
# up in the beginning and time to ramp down at the end
edge_duration = 100.0 # ms
edge_len = int(edge_duration/dt)
tapering_x = np.linspace(0.0, 1.0, edge_len, endpoint=False)
tapering = (3 - 2*tapering_x)*tapering_x**2
target[:, :edge_len] *= tapering
target[:, -edge_len:] *= tapering[::-1]
Explanation: Choose target
End of explanation
class ProgressBar(object):
A callable that displays a widget progress bar and can also make a plot showing
the learning trace.
def __init__(self, simulator, show_graph=True, graph_step=20, max_error=1000):
self.t0 = None
self.float = None
self.show_graph = show_graph
self.graph_step = graph_step
self.simulator = simulator
self.max_error = max_error
self.print_last = True
def __call__(self, i, n):
t = time.time()
if self.t0 is None:
self.t0 = t
t_diff = t - self.t0
current_res = self.simulator._current_res
text = 'step: {} ; time elapsed: {:.1f}s'.format(i, t_diff)
if len(current_res) > 0:
last_error = current_res[-1]['average_error']
if last_error <= self.max_error:
text += ' ; last error: {:.2f}'.format(last_error)
else:
text += ' ; last error: very large'
if self.float is None:
self.float = FloatProgress(min=0, max=100)
display(self.float)
else:
percentage = min(round(i*100.0/n), 100)
self.float.value = percentage
self.float.description = text
if self.show_graph and (i % self.graph_step == 0 or i == n):
crt_res = [_['average_error'] for _ in current_res]
plt.plot(range(len(crt_res)), crt_res, '.-k')
plt.xlim(0, n-1)
plt.xlabel('repetition')
plt.ylabel('error')
if len(crt_res) > 0:
if i < 100:
plt.ylim(np.min(crt_res) - 0.1, np.max(crt_res) + 0.1)
else:
plt.ylim(0, np.max(crt_res))
else:
plt.ylim(0, 1)
clear_output(wait=True)
if i < n:
display(plt.gcf())
if i == n:
self.float.close()
if self.print_last:
print(text)
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
Generate some pre-run snapshots.
res = {}
if i % 50 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
Explanation: General definitions
End of explanation
# start with the best parameters from the experiment matcher
best_params_file = 'best_params_joint.pkl'
with open(best_params_file, 'rb') as inp:
best_params_full = pickle.load(inp)
# keep the values for the juvenile bird
default_params = {}
for key, value in best_params_full.items():
pound_i = key.find('##')
if pound_i >= 0:
if int(key[pound_i+2:]) > 0:
# this is not for the juvenile
continue
key = key[:pound_i]
default_params[key] = value
# add the target, and make sure we have the right tmax and dt
default_params['target'] = target
default_params['tmax'] = tmax
default_params['dt'] = dt
# the number of student neurons per output doesn't have to be so high
default_params['n_student_per_output'] = 40
# the best_params file also has no learning, so let's set better defaults there
default_params['plasticity_learning_rate'] = 0.6e-9
default_params['plasticity_constrain_positive'] = True
default_params['plasticity_taus'] = (80.0, 40.0)
default_params['plasticity_params'] = (1.0, 0.0)
default_params.pop('tutor_rule_gain', None)
default_params['tutor_rule_gain_per_student'] = 0.5
default_params['tutor_rule_tau'] = 0.0
# the best_params also didn't care about the controller -- let's set that
default_params['controller_mode'] = 'sum'
default_params['controller_scale'] = 0.5
# save!
defaults_name = 'default_params.pkl'
if not os.path.exists(defaults_name):
with open(defaults_name, 'wb') as out:
pickle.dump(default_params, out, 2)
else:
raise Exception('File exists!')
Explanation: Create default parameters file
End of explanation
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
res['conductor_spike'] = simulation.EventMonitor(simulator.conductor)
return res
def snapshot_generator_pre(simulator, i, n):
Generate some pre-run snapshots.
res = {}
if i % 10 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# load the default parameters
with open('default_params.pkl', 'rb') as inp:
default_params = pickle.load(inp)
# keep things arbitrary but reproducible
np.random.seed(12314)
actual_params = dict(default_params)
actual_params['plasticity_params'] = (1.0, 0.0)
actual_params['tutor_rule_tau'] = 80.0
actual_params['progress_indicator'] = ProgressBar
actual_params['tracker_generator'] = tracker_generator
actual_params['snapshot_generator'] = snapshot_generator_pre
simulator = SpikingLearningSimulation(**actual_params)
res = simulator.run(200)
file_name = 'save/spiking_example.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'params': actual_params, 'res': res}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, target, dt)
show_repetition_pattern([_['student_spike'] for _ in res[-10:]], idx=range(10), ms=2.0)
plt.xlim(0, tmax)
crt_times0 = np.asarray(res[-1]['student_spike'].t)
crt_times = crt_times0[crt_times0 < tmax]
print('Average firing rate {:.2f} Hz.'.format(len(crt_times)*1000.0/tmax/simulator.student.N))
Explanation: Generate data for figures
Learning curve (blackbox)
End of explanation
# add some noise, but keep things reproducible
np.random.seed(0)
smoothlen = 400
realTarget1 = np.zeros(len(times))
realTarget1[int_r(50.0/dt):int_r(65.0/dt)] = 90.0
realTarget1[int_r(65.0/dt):int_r(75.0/dt)] = 20.0
realTarget1[int_r(75.0/dt):int_r(100.0/dt)] = 90.0
realTarget1[int_r(125.0/dt):int_r(150.0/dt)] = 80.0
realTarget1[int_r(150.0/dt):int_r(160.0/dt)] = 40.0
realTarget1[int_r(250.0/dt):int_r(280.0/dt)] = 80.0
realTarget1[int_r(305.0/dt):int_r(320.0/dt)] = 70.0
realTarget1[int_r(350.0/dt):int_r(360.0/dt)] = 90.0
realTarget1[int_r(410.0/dt):int_r(450.0/dt)] = 100.0
realTarget1[int_r(450.0/dt):int_r(470.0/dt)] = 60.0
realTarget1[int_r(500.0/dt):int_r(540.0/dt)] = 80.0
realTarget1 = np.convolve(realTarget1,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget2 = np.zeros(len(times))
realTarget2[int_r(60.0/dt):int_r(75.0/dt)] = 90.0
realTarget2[int_r(100.0/dt):int_r(115.0/dt)] = 100.0
realTarget2[int_r(265.0/dt):int_r(290.0/dt)] = 90.0
realTarget2[int_r(320.0/dt):int_r(330.0/dt)] = 40.0
realTarget2[int_r(330.0/dt):int_r(365.0/dt)] = 100.0
realTarget2[int_r(385.0/dt):int_r(400.0/dt)] = 90.0
realTarget2[int_r(415.0/dt):int_r(450.0/dt)] = 80.0
realTarget2[int_r(470.0/dt):int_r(480.0/dt)] = 80.0
realTarget2[int_r(520.0/dt):int_r(540.0/dt)] = 90.0
realTarget2 = np.convolve(realTarget2,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget3 = np.zeros(len(times))
realTarget3[int_r(70.0/dt):int_r(100.0/dt)] = 100.0
realTarget3[int_r(160.0/dt):int_r(180.0/dt)] = 100.0
realTarget3[int_r(260.0/dt):int_r(275.0/dt)] = 100.0
realTarget3[int_r(285.0/dt):int_r(310.0/dt)] = 100.0
realTarget3[int_r(340.0/dt):int_r(360.0/dt)] = 100.0
realTarget3[int_r(435.0/dt):int_r(470.0/dt)] = 90.0
realTarget3[int_r(530.0/dt):int_r(540.0/dt)] = 80.0
realTarget3 = np.convolve(realTarget3,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget4 = np.zeros(len(times))
realTarget4[int_r(50.0/dt):int_r(65.0/dt)] = 30.0
realTarget4[int_r(65.0/dt):int_r(85.0/dt)] = 100.0
realTarget4[int_r(135.0/dt):int_r(150.0/dt)] = 90.0
realTarget4[int_r(285.0/dt):int_r(300.0/dt)] = 90.0
realTarget4[int_r(385.0/dt):int_r(405.0/dt)] = 60.0
realTarget4[int_r(430.0/dt):int_r(450.0/dt)] = 100.0
realTarget4[int_r(525.0/dt):int_r(540.0/dt)] = 70.0
realTarget4 = np.convolve(realTarget4,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget5 = np.zeros(len(times))
realTarget5[int_r(75.0/dt):int_r(85.0/dt)] = 20.0
realTarget5[int_r(115.0/dt):int_r(130.0/dt)] = 60.0
realTarget5[int_r(180.0/dt):int_r(200.0/dt)] = 90.0
realTarget5[int_r(265.0/dt):int_r(290.0/dt)] = 100.0
realTarget5[int_r(325.0/dt):int_r(350.0/dt)] = 70.0
realTarget5[int_r(410.0/dt):int_r(420.0/dt)] = 80.0
realTarget5[int_r(440.0/dt):int_r(455.0/dt)] = 70.0
realTarget5[int_r(535.0/dt):int_r(545.0/dt)] = 20.0
realTarget5 = np.convolve(realTarget5,
np.exp(-0.5*np.linspace(-3.0, 3.0, smoothlen)**2)/np.sqrt(2*np.pi)/80,
mode='same')
realTarget = np.vstack((realTarget1, realTarget2, realTarget3, realTarget4, realTarget5))
def tracker_generator(simulator, i, n):
    """Generate some trackers."""
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
res['conductor_spike'] = simulation.EventMonitor(simulator.conductor)
return res
def snapshot_generator_pre(simulator, i, n):
    """Generate some pre-run snapshots."""
res = {}
if i % 10 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# load the default parameters
with open('default_params.pkl', 'rb') as inp:
default_params = pickle.load(inp)
# keep things arbitrary but reproducible
np.random.seed(12314)
actual_params = dict(default_params)
actual_params['target'] = realTarget
actual_params['plasticity_params'] = (1.0, 0.0)
actual_params['tutor_rule_tau'] = 80.0
actual_params['progress_indicator'] = ProgressBar
actual_params['tracker_generator'] = tracker_generator
actual_params['snapshot_generator'] = snapshot_generator_pre
actual_params['tutor_rule_gain_per_student'] = 1.0
actual_params['plasticity_learning_rate'] = 1e-9
#actual_params['n_student_per_output'] = 10
#actual_params['controller_scale'] = 0.5*4
simulator = SpikingLearningSimulation(**actual_params)
res = simulator.run(600)
file_name = 'save/spiking_example_realistic.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'params': actual_params, 'res': res}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, realTarget, dt)
Explanation: Learning curve blackbox, realistic target
End of explanation
def tracker_generator(simulator, i, n):
    """Generate some trackers."""
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
res['conductor_spike'] = simulation.EventMonitor(simulator.conductor)
return res
def snapshot_generator_pre(simulator, i, n):
    """Generate some pre-run snapshots."""
res = {}
if i % 10 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# load the default parameters
with open('default_params.pkl', 'rb') as inp:
default_params = pickle.load(inp)
# keep things arbitrary but reproducible
np.random.seed(12314)
actual_params = dict(default_params)
actual_params['plasticity_params'] = (1.0, 0.0)
actual_params['tutor_rule_tau'] = 80.0
actual_params['progress_indicator'] = ProgressBar
actual_params['tracker_generator'] = tracker_generator
actual_params['snapshot_generator'] = snapshot_generator_pre
actual_params['student_g_inh'] = 0
actual_params['student_i_external'] = -0.23
simulator = SpikingLearningSimulation(**actual_params)
res = simulator.run(200)
file_name = 'save/spiking_example_const_inh.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'params': actual_params, 'res': res}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, target, dt)
Explanation: Learning curve (blackbox), constant inhibition
End of explanation
def tracker_generator(simulator, i, n):
    """Generate some trackers."""
res = {}
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
    """Generate some pre-run snapshots."""
res = {}
if i % 50 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# load the default parameters
with open('default_params.pkl', 'rb') as inp:
default_params = pickle.load(inp)
# keep things arbitrary but reproducible
np.random.seed(212312)
args = dict(default_params)
args['relaxation'] = 200.0
args['relaxation_conductor'] = 200.0
args['tutor_tau_out'] = 40.0
args['tutor_rule_type'] = 'reinforcement'
args['tutor_rule_learning_rate'] = 0.004
args['tutor_rule_compress_rates'] = True
args['tutor_rule_relaxation'] = None
args['tutor_rule_tau'] = 0.0
args['plasticity_params'] = (1.0, 0.0)
args['plasticity_constrain_positive'] = True
args['plasticity_learning_rate'] = 7e-10
args_actual = dict(args)
args_actual['tracker_generator'] = tracker_generator
args_actual['snapshot_generator'] = snapshot_generator_pre
args_actual['progress_indicator'] = ProgressBar
simulator = SpikingLearningSimulation(**args_actual)
res = simulator.run(10000)
# save!
file_name = 'save/reinforcement_example_0ms.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res': res, 'args': args}, out, 2)
else:
raise Exception('File exists!')
Explanation: Reinforcement example (0 ms)
End of explanation
def tracker_generator(simulator, i, n):
    """Generate some trackers."""
res = {}
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
    """Generate some pre-run snapshots."""
res = {}
if i % 50 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# keep things arbitrary but reproducible
np.random.seed(212312)
args = dict(
target=target, tmax=tmax, dt=dt,
n_conductor=300, n_student_per_output=40,
relaxation=200.0, relaxation_conductor=200.0, # XXX different from blackbox!
conductor_rate_during_burst=769.7,
controller_mode='sum',
controller_scale=0.5,
tutor_tau_out=40.0,
tutor_rule_type='reinforcement',
tutor_rule_learning_rate=0.006,
tutor_rule_compress_rates=True,
tutor_rule_tau=80.0,
tutor_rule_relaxation=None, # XXX different from blackbox!
cs_weights_fraction=0.488, ts_weights=0.100,
plasticity_constrain_positive=True,
plasticity_learning_rate=6e-10,
plasticity_taus=(80.0, 40.0),
plasticity_params=(1.0, 0.0),
student_R=383.4, student_g_inh=1.406,
student_tau_ampa=5.390, student_tau_nmda=81.92,
student_tau_m=20.31, student_tau_ref=1.703,
student_vR=-74.39, student_v_th=-45.47
)
args_actual = dict(args)
args_actual['tracker_generator'] = tracker_generator
args_actual['snapshot_generator'] = snapshot_generator_pre
args_actual['progress_indicator'] = ProgressBar
simulator = SpikingLearningSimulation(**args_actual)
res = simulator.run(16000)
# save!
file_name = 'save/reinforcement_example_80ms.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res': res, 'args': args}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, target, dt)
Explanation: Reinforcement example (80 ms)
End of explanation
def tracker_generator(simulator, i, n):
    """Generate some trackers."""
res = {}
if i % 10 == 0:
res['tutor'] = simulation.StateMonitor(simulator.tutor, 'out')
res['student_spike'] = simulation.EventMonitor(simulator.student)
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
    """Generate some pre-run snapshots."""
res = {}
if i % 50 == 0:
res['weights'] = np.copy(simulator.conductor_synapses.W)
return res
# keep things arbitrary but reproducible
np.random.seed(12234)
args = dict(default_params)
args['relaxation'] = 200.0
args['relaxation_conductor'] = 200.0
args['tutor_tau_out'] = 40.0
args['tutor_rule_type'] = 'reinforcement'
args['tutor_rule_learning_rate'] = 0.004
args['tutor_rule_compress_rates'] = True
args['tutor_rule_relaxation'] = None
args['tutor_rule_tau'] = 440.0
args['plasticity_params'] = (10.0, 9.0)
args['plasticity_constrain_positive'] = True
args['plasticity_learning_rate'] = 7e-10
args_actual = dict(args)
args_actual['tracker_generator'] = tracker_generator
args_actual['snapshot_generator'] = snapshot_generator_pre
args_actual['progress_indicator'] = ProgressBar
simulator = SpikingLearningSimulation(**args_actual)
res = simulator.run(10000)
# save!
file_name = 'save/reinforcement_example_a10b9_440ms.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res': res, 'args': args}, out, 2)
else:
raise Exception('File exists!')
plot_evolution(res, target, dt)
Explanation: Reinforcement example (alpha=10, beta=9, tau=440 ms)
End of explanation
file_name = 'spike_out/songspike_tscale_batch_8.8.160525.1530_summary.pkl'
with open(file_name, 'rb') as inp:
mismatch_data = pickle.load(inp)
make_heatmap_plot(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
vmin=1.0, vmax=10, sim_idx=250)
safe_save_fig('figs/spiking_mismatch_heatmap_sum_log_8', png=False)
make_convergence_map(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
max_steps=250)
safe_save_fig('figs/spiking_mismatch_convmap_sum_log_8', png=False)
Explanation: Make figures
Tutor-student mismatch heatmap and convergence map -- blackbox spiking
The data for this needs to be generated using the summarize.py script from the results of the run_tscale_batch.py script, which is designed to run on a cluster.
End of explanation
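# A quick, hedged sanity check of the summary layout used above (res_array/args_array are expected
# to form a students-by-tutors grid of runs; the key names follow the plotting calls above):
# print('mismatch grid:', len(mismatch_data['res_array']), 'x', len(mismatch_data['res_array'][0]))
# print('example tutor taus:', [a['tutor_rule_tau'] for a in mismatch_data['args_array'][0][:3]])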
file_name = 'spike_out/songspike_tscale_batch_12.12.161122.1802_summary.pkl'
with open(file_name, 'rb') as inp:
mismatch_data = pickle.load(inp)
make_heatmap_plot(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
vmin=0.5, vmax=10, sim_idx=999)
safe_save_fig('figs/spiking_mismatch_heatmap_sum_log_12', png=False)
make_convergence_map(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
max_steps=999)
safe_save_fig('figs/spiking_mismatch_convmap_sum_log_12', png=False)
error_matrix = np.asarray([[_[-1] for _ in crt_res] for crt_res in mismatch_data['res_array']])
error_matrix[~np.isfinite(error_matrix)] = np.inf
tau_levels = np.asarray([_['tutor_rule_tau'] for _ in mismatch_data['args_array'][0]])
plt.semilogx(tau_levels, np.diag(error_matrix), '.-k')
Explanation: Tutor-student bigger mismatch heatmap and convergence map -- blackbox spiking
The data for this needs to be generated using the summarize.py script from the results of the run_tscale_batch.py script, which is designed to run on a cluster.
End of explanation
file_name = 'spike_out/song_reinf_tscale_batch_8.8.160607.1153_summary.pkl'
with open(file_name, 'rb') as inp:
mismatch_data = pickle.load(inp)
make_heatmap_plot(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'],
vmin=1.0, vmax=10)
safe_save_fig('figs/reinforcement_mismatch_heatmap_sum_log_8', png=False)
make_convergence_map(mismatch_data['res_array'], args_matrix=mismatch_data['args_array'], max_error=12)
safe_save_fig('figs/reinforcement_mismatch_convmap_sum_log_8', png=False)
Explanation: Tutor-student mismatch heatmap and convergence map -- reinforcement
End of explanation
with open('save/spiking_example.pkl', 'rb') as inp:
spiking_example_data = pickle.load(inp)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(spiking_example_data['res'], plt.gca(), target_lw=2,
inset=True, inset_pos=[0.4, 0.4, 0.4, 0.4],
alpha=spiking_example_data['params']['plasticity_params'][0],
beta=spiking_example_data['params']['plasticity_params'][1],
tau_tutor=spiking_example_data['params']['tutor_rule_tau'],
target=spiking_example_data['params']['target'])
axs[0].set_ylim(0, 15);
safe_save_fig('figs/spiking_example_learning_curve', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][:5]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 23, 65, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(
    get_firing_rate(crt_res[-1], spiking_example_data['params']['tmax'])))
safe_save_fig('figs/spiking_simraster_juvenile', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][-5:]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 23, 65, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(
    get_firing_rate(crt_res[-1], spiking_example_data['params']['tmax'])))
safe_save_fig('figs/spiking_simraster_adult', png=False)
Explanation: Spiking example learning curve and raster plots
End of explanation
with open('save/spiking_example_realistic.pkl', 'rb') as inp:
spiking_example_data = pickle.load(inp)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(spiking_example_data['res'], plt.gca(), target_lw=2,
inset=True, inset_pos=[0.4, 0.45, 0.4, 0.4],
legend_pos=(0.7, 1.1),
alpha=spiking_example_data['params']['plasticity_params'][0],
beta=spiking_example_data['params']['plasticity_params'][1],
tau_tutor=spiking_example_data['params']['tutor_rule_tau'],
target=spiking_example_data['params']['target'])
axs[0].set_ylim(0, 15);
safe_save_fig('figs/spiking_example_realistic_learning_curve', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][:5]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 87, 123, 165])
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(
    get_firing_rate(crt_res[-1], spiking_example_data['params']['tmax'])))
safe_save_fig('figs/spiking_simraster_realistic_juvenile', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][-5:]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 87, 123, 165])
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(
    get_firing_rate(crt_res[-1], spiking_example_data['params']['tmax'])))
safe_save_fig('figs/spiking_simraster_realistic_adult', png=False)
make_convergence_movie('figs/spiking_convergence_movie_small_tau.mov',
spiking_example_data['res'], spiking_example_data['params']['target'],
idxs=range(0, 600), length=12.0,
ymax=80.0)
Explanation: Spiking example learning curve and raster plots, realistic target
End of explanation
with open('save/spiking_example_const_inh.pkl', 'rb') as inp:
spiking_example_data = pickle.load(inp)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(spiking_example_data['res'], plt.gca(), target_lw=2,
inset=True, inset_pos=[0.4, 0.4, 0.4, 0.4],
alpha=spiking_example_data['params']['plasticity_params'][0],
beta=spiking_example_data['params']['plasticity_params'][1],
tau_tutor=spiking_example_data['params']['tutor_rule_tau'],
target=spiking_example_data['params']['target'])
axs[0].set_ylim(0, 15);
safe_save_fig('figs/spiking_example_const_inh_learning_curve', png=False)
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][:5]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 23, 65, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(
    get_firing_rate(crt_res[-1], spiking_example_data['params']['tmax'])))
plt.figure(figsize=(3, 1))
crt_res = spiking_example_data['res'][-5:]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 47, 23, 65, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
plt.xlim(0, tmax);
print('Firing rate {:.2f} Hz.'.format(
    get_firing_rate(crt_res[-1], spiking_example_data['params']['tmax'])))
make_convergence_movie('figs/spiking_convergence_movie_const_inh.mov',
spiking_example_data['res'], spiking_example_data['params']['target'],
idxs=range(0, 200), length=4.0,
ymax=80.0)
Explanation: Spiking example, constant inhibition, learning curve and raster plots
End of explanation
with open('save/reinforcement_example_0ms.pkl', 'rb') as inp:
reinf_shorttau = pickle.load(inp)
plt.imshow(reinf_shorttau['res'][7500]['weights'], aspect='auto', interpolation='nearest',
cmap='Blues', vmin=0, vmax=0.3)
plt.colorbar()
plot_evolution(reinf_shorttau['res'],
reinf_shorttau['args']['target'],
reinf_shorttau['args']['dt'])
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(reinf_shorttau['res'][:-9], plt.gca(), target_lw=2,
inset=True,
alpha=reinf_shorttau['args']['plasticity_params'][0],
beta=reinf_shorttau['args']['plasticity_params'][1],
tau_tutor=reinf_shorttau['args']['tutor_rule_tau'],
target=reinf_shorttau['args']['target'],
inset_pos=[0.52, 0.45, 0.4, 0.4])
axs[0].set_xticks(range(0, 8001, 2000))
axs[0].set_ylim(0, 15);
axs[1].set_yticks(range(0, 81, 20));
safe_save_fig('figs/reinforcement_convergence_plot_small_tau', png=False)
plt.figure(figsize=(3, 1))
crt_res = reinf_shorttau['res'][:50:10]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 45, 75, 65, 57])
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
crt_tmax = reinf_shorttau['args']['tmax'];
plt.xlim(0, crt_tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1], crt_tmax)))
safe_save_fig('figs/reinforcement_simraster_juvenile', png=False)
plt.figure(figsize=(3, 1))
crt_res = reinf_shorttau['res'][-50::10]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[1, 45, 75, 65, 57])
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
crt_tmax = reinf_shorttau['args']['tmax'];
plt.xlim(0, crt_tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1], crt_tmax)))
safe_save_fig('figs/reinforcement_simraster_adult', png=False)
make_convergence_movie('figs/reinforcement_convergence_movie_small_tau.mov',
reinf_shorttau['res'], reinf_shorttau['args']['target'],
idxs=range(0, 10000), length=10.0,
ymax=80.0)
Explanation: Reinforcement example learning curves
Reinforcement learning curve, small tau
End of explanation
with open('save/reinforcement_example_a10b9_440ms.pkl', 'rb') as inp:
reinf_longtau = pickle.load(inp)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(reinf_longtau['res'][:-9], plt.gca(), target_lw=2,
inset=True,
alpha=reinf_longtau['args']['plasticity_params'][0],
beta=reinf_longtau['args']['plasticity_params'][1],
tau_tutor=reinf_longtau['args']['tutor_rule_tau'],
target=reinf_longtau['args']['target'],
inset_pos=[0.5, 0.45, 0.4, 0.4])
axs[0].set_xticks(range(0, 8001, 2000))
axs[0].set_ylim(0, 15);
axs[1].set_yticks(range(0, 81, 20));
safe_save_fig('figs/reinforcement_convergence_plot_large_tau', png=False)
plt.figure(figsize=(3, 1))
crt_res = reinf_longtau['res'][:50:10]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[3, 48, 19, 62, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
crt_tmax = reinf_longtau['args']['tmax'];
plt.xlim(0, crt_tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1], crt_tmax)))
plt.figure(figsize=(3, 1))
crt_res = reinf_longtau['res'][-50::10]
show_repetition_pattern([_['student_spike'] for _ in crt_res], idx=[3, 48, 19, 62, 78], ms=1.0)
plt.gca().spines['top'].set_color('none')
plt.gca().spines['right'].set_color('none')
plt.gca().spines['left'].set_color('none')
plt.yticks([])
plt.ylabel('')
crt_tmax = reinf_longtau['args']['tmax'];
plt.xlim(0, crt_tmax);
print('Firing rate {:.2f} Hz.'.format(get_firing_rate(crt_res[-1], crt_tmax)))
make_convergence_movie('figs/reinforcement_convergence_movie_large_tau.mov',
reinf_longtau['res'], reinf_longtau['args']['target'],
idxs=range(0, 10000), length=10.0,
ymax=80.0)
Explanation: Reinforcement learning curve, long tau
End of explanation
with open('save/reinforcement_example_0ms.pkl', 'rb') as inp:
reinf_shorttau = pickle.load(inp)
motor_idxs = range(0, len(reinf_shorttau['res']), 50)
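# count conductor (HVC) -> student synapses with weight above 0.01, averaged per student (RA) neuron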
weight_sparsity = [np.sum(reinf_shorttau['res'][_]['weights'] > 0.01)/
(reinf_shorttau['args']['n_student_per_output']*len(reinf_shorttau['args']['target']))
for _ in motor_idxs]
plt.figure(figsize=(3, 2))
plt.plot(motor_idxs, weight_sparsity, color=[0.200, 0.357, 0.400])
plt.xlabel('repetitions')
plt.ylabel('HVC inputs per RA neuron')
plt.ylim(0, 200);
plt.grid(True)
safe_save_fig('figs/inputs_per_ra_evolution_reinf')
Explanation: Reinforcement learning, evolution of synapse sparsity
End of explanation |
8,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https
Step2: Function to run a single experiment with a list of agents.
Step4: Function to read actions of the policies and to compute the kernel based on them.
Step5: Get samples of the policies' returns.
Step6: Load information about the policies
Step7: Experimental parameters.
Step8: Run the experiment!
Please note that it may take several hours to run the full experiment. For experimentation purposes, in order to get the results faster, the number of runs and the number of steps may be reduced. To verify that the code runs you can start with 10 experiments (n_experiments) with 10 steps (n_steps).
Step9: Plot the regrets. | Python Code:
import bandit
import multiarm_model
import arm_model
import agents as agent_classes
import kernel as kernel_classes
import random
from typing import List, Dict
import pickle as pkl
from scipy import spatial
import numpy as np
import copy
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Copyright 2021 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
End of explanation
def run_experiment(agents: Dict[str, agent_classes.Agent], world, n_steps: int,
best_mean: float,
true_mean: np.ndarray) -> Dict[str, List[float]]:
    """Given a number of agents, run one experiment with the world for n_steps."""
current_sim_regrets = {name: [] for name in agents}
for _ in range(n_steps):
for agent_name in agents:
a = agents[agent_name].select_action()
r = world.pull(a)
agents[agent_name].update(a, r)
current_sim_regrets[agent_name].append(
best_mean - true_mean[agents[agent_name].best_arm])
return current_sim_regrets
Explanation: Function to run a single experiment with a list of agents.
End of explanation
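# Hedged usage sketch (hypothetical objects, shown for orientation only): `agents` maps names to
# agent instances, `world` exposes pull(action), and the returned dict holds one simple-regret
# trace per agent name. The actual construction of agents and world appears in the experiment loop below.
# example_regrets = run_experiment(agents={'Ind+Uniform+OPE': some_agent}, world=some_world,
#                                  n_steps=10, best_mean=true_mean.max(), true_mean=true_mean)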
def get_experiment_distances(policies: Dict[str, float]):
    """Get the policy distances of the current experiment."""
# Load actions that the policies take.
with open('./data/actions.pkl', 'rb') as f:
actions_policy_keys_in_distances = pkl.load(f)
all_predictions = actions_policy_keys_in_distances['actions']
policy_keys_in_distances = actions_policy_keys_in_distances['policy_keys']
# Compute distances between actions.
all_distances = []
for i in range(np.shape(np.array(all_predictions))[0]):
distance = spatial.distance.cdist(
np.array(all_predictions)[i, :, :],
np.array(all_predictions)[i, :, :], 'euclidean')
all_distances.append(distance)
distances = np.mean(np.array(all_distances), axis=0)
# Depending on a subset of selected policies and their order, get a smaller distance matrix.
experiment_distances = kernel_classes.select_experiment_distances(
policies, policy_keys_in_distances, distances)
return experiment_distances
Explanation: Function to read actions of the policies and to compute the kernel based on them.
End of explanation
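# For intuition only: a Matern-1/2 (exponential) kernel built from such a distance matrix has the
# general form k(d) = variance * exp(-d / lengthscale), possibly plus a bias term. The helper below
# is an illustrative sketch with assumed argument names; the experiments use the library's
# ActionDistanceMatern12 class, whose exact parameterization may differ.
def matern12_kernel_sketch(distances, lengthscale, variance=1.0, bias=0.0):
    return bias + variance * np.exp(-distances / lengthscale)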
with open('./data/full_reward_samples.pkl', 'rb') as f:
full_reward_samples_dict = pkl.load(f)
Explanation: Get samples of the policies' returns.
End of explanation
with open('./data/ope_values.pkl', 'rb') as f:
ope_values = pkl.load(f)
Explanation: Load information about the policies: their ids, FQE estimates and GT performance.
End of explanation
n_experiments = 100 # @param
n_steps = 100 # @param
num_policies = 50 # @param
use_fqe = True # @param
use_prior = False
if not use_fqe:
use_prior = True
# GP optimizer
optimizer_config = dict(
optimizer_name='Adam',
learning_rate=0.001,
steps_per_update=1000,
kwargs=dict(
beta1=0.9,
beta2=0.999,
),
)
# Priors.
arm_kwargs = [{
'prior_mean': 0.0,
'prior_std': 1000.,
'alpha': 1.,
'beta': 1000.,
'sample': False,
'steps': 10,
'burnin': 0
} for _ in range(num_policies)]
prior = dict(use_prior=use_prior, alpha=1., beta=200.)
sim_regrets = None
regrets_fqe = []
Explanation: Experimental parameters.
End of explanation
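# Optional: for a quick smoke test (see the note after the experiment loop), the run length can be
# shortened by overriding the settings above, e.g.:
# n_experiments = 10
# n_steps = 10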
for _ in range(n_experiments):
print(_)
# Make a world with randomly selected policies.
np.random.seed(seed=805 + _)
# selected_policies_fqe = task_data.iloc[np.random.choice(len(task_data), num_policies, replace=False)]
selected_policies_fqe = dict(
random.sample(ope_values.items(), k=num_policies))
world = bandit.MAB(selected_policies_fqe)
world.load_reward_samples(full_reward_samples_dict)
true_mean = np.array([np.mean(rs) for rs in world._rewards])
best_mean = true_mean.max()
# FQE instead of a-ops.
regret_fqe = best_mean - true_mean[np.argmax(world.opes)]
regrets_fqe.append(regret_fqe)
# Independent arm model.
model_ind = multiarm_model.IndependentMultiArmModel(
world.num_arms,
arm_model.SingleBayesArm,
arm_args=None,
arm_kwargs=arm_kwargs)
if use_fqe:
for a in range(len(world.opes)):
model_ind.update(a, world.opes[a])
# Kernel for GP.
experiment_distances = get_experiment_distances(
policies=selected_policies_fqe)
kernel = kernel_classes.ActionDistanceMatern12(
experiment_distances,
lengthscale=np.median(experiment_distances),
bias_variance=10.,
variance_prior=prior)
# GP as arm model.
model_gp = multiarm_model.MVNormal(
num_arms=world.num_arms,
offset=0.,
kernel=kernel,
observation_noise_variance=1000.,
optimizer_config=optimizer_config,
observation_noise_variance_prior=prior)
if use_fqe:
for a in range(len(world.opes)):
model_gp.update(a, world.opes[a])
# Make agents.
agent_ind_uniform = agent_classes.UniformAgent(copy.deepcopy(model_ind))
agent_gp_uniform = agent_classes.UniformAgent(copy.deepcopy(model_gp))
if use_fqe:
agent_ind_ucb = agent_classes.UCBAgent(
copy.deepcopy(model_ind), exploration_coef=5)
agent_gp_ucb = agent_classes.UCBAgent(
copy.deepcopy(model_gp), exploration_coef=5)
# If no FQE is available at the start, first sample a few datapoints.
else:
agent_ind_ucb = agent_classes.UCBAgent(
copy.deepcopy(model_ind), exploration_coef=5, initial_rand_samples=5)
agent_gp_ucb = agent_classes.UCBAgent(
copy.deepcopy(model_gp), exploration_coef=5, initial_rand_samples=5)
agents = {
'Ind+Uniform+OPE': agent_ind_uniform,
'Ind+UCB+OPE': agent_ind_ucb,
'GP+Uniform+OPE': agent_gp_uniform,
'A-ops': agent_gp_ucb
}
# Run experiment and collect results data.
current_sim_regrets = run_experiment(
agents=agents,
world=world,
n_steps=n_steps,
best_mean=best_mean,
true_mean=true_mean)
if sim_regrets is None:
sim_regrets = {name: [] for name in agents}
for agent_name in agents:
sim_regrets[agent_name].append(current_sim_regrets[agent_name])
Explanation: Run the experiment!
Please note that it may take several hours to run the full experiment. For experimentation purposes, in order to get the results faster, the number of runs and the number of steps may be reduced. To verify that the code runs you can start with 10 experiments (n_experiments) with 10 steps (n_steps).
End of explanation
plt.rcParams.update(plt.rcParamsDefault)
plt.rc('font', size=8.0)
plt.rc('figure', figsize=(2, 1.5))
plt.rc('axes', linewidth=0.5, titlesize=6.0)
plt.rc('legend', fontsize=7.0, frameon=False)
plt.rc('lines', markersize=1, linewidth=0.5)
plt.rc('xtick', direction='out')
plt.rc('ytick', direction='out')
n_steps = len(sim_regrets['Ind+Uniform+OPE'][0])
line_colors = {
'Ind+Uniform+OPE': plt.cm.Paired(3),
'Ind+UCB+OPE': plt.cm.Paired(3),
'GP+Uniform+OPE': plt.cm.Paired(9),
'A-ops': plt.cm.Paired(9)
}
fill_colors = {
'Ind+Uniform+OPE': plt.cm.Paired(2),
'Ind+UCB+OPE': plt.cm.Paired(3),
'GP+Uniform+OPE': plt.cm.Paired(8),
'A-ops': plt.cm.Paired(9)
}
line_styles = {
'Ind+Uniform+OPE': '--',
'Ind+UCB+OPE': '-',
'GP+Uniform+OPE': '--',
'A-ops': '-'
}
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1])
# Plot OPE performance with standard deviation
regret_fqe = np.mean(np.array(regrets_fqe))
regret_std = np.std(np.array(regrets_fqe))
plt.plot([0, n_steps - 1], [regret_fqe, regret_fqe],
'-.',
lw=1,
color='black',
label='OPE')
plt.fill_between([0, n_steps - 1],
[regret_fqe - regret_std / 2, regret_fqe - regret_std / 2],
[regret_fqe + regret_std / 2, regret_fqe + regret_std / 2],
color='k',
alpha=0.05,
linewidth=0.0)
# Plot agent's performance with standard deviation
for agent_name in sim_regrets:
regret_agent = np.mean(np.array(sim_regrets[agent_name]), axis=0)
regret_std = np.std(np.array(sim_regrets[agent_name]), axis=0)
plt.plot(
range(n_steps),
regret_agent,
line_styles[agent_name],
lw=1,
color=line_colors[agent_name],
label=agent_name)
plt.fill_between(
range(n_steps),
regret_agent - regret_std / 2,
regret_agent + regret_std / 2,
facecolor=fill_colors[agent_name],
alpha=0.3,
linewidth=0.0)
plt.xlabel('# trajectories')
plt.ylabel('simple regret')
plt.title('cartpole_swingup' + '\nSimple regret')
plt.legend(loc='upper right', ncol=1, bbox_to_anchor=(1.7, 1))
plt.show()
Explanation: Plot the regrets.
End of explanation |
8,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Make figures more publication ready
In this example, we show several use cases to take MNE plots and
customize them for a more publication-ready look.
Step1:
Step2: Evoked plot with brain activation
Suppose we want a figure with an evoked plot on top, and the brain activation
below, with the brain subplot slightly bigger than the evoked plot. Let's
start by loading some example data <sample-dataset>.
Step3: During interactive plotting, we might see figures like this
Step4: To make a publication-ready figure, first we'll re-plot the brain on a white
background, take a screenshot of it, and then crop out the white margins.
While we're at it, let's change the colormap, set custom colormap limits and
remove the default colorbar (so we can add a smaller, vertical one later)
Step5: Now let's crop out the white margins and the white gap between hemispheres.
The screenshot has dimensions (h, w, 3), with the last axis being R, G, B
values for each pixel, encoded as integers between 0 and 255. (255,
255, 255) encodes a white pixel, so we'll detect any pixels that differ
from that
Step6: A lot of figure settings can be adjusted after the figure is created, but
many can also be adjusted in advance by updating the
Step7: Now let's create our custom figure. There are lots of ways to do this step.
Here we'll create the figure and the subplot axes in one step, specifying
overall figure size, number and arrangement of subplots, and the ratio of
subplot heights for each row using
Step8: Custom timecourse with montage inset
Suppose we want a figure with some mean timecourse extracted from a number of
sensors, and we want a smaller panel within the figure to show a head outline
with the positions of those sensors clearly marked.
If you are familiar with MNE, you know that this is something that
Step9: Let's make a plot.
Step10: So far so good. Now let's add the smaller figure within the figure to show
exactly which sensors we used to make the timecourse.
For that, we use an "inset_axes" that we plot into our existing axes.
The head outline with the sensor positions can be plotted using the
~mne.io.Raw object that is the source of our data.
Specifically, that object already contains all the sensor positions,
and we can plot them using the plot_sensors method.
Step11: That looks nice. But the sensor dots are way too big for our taste. Luckily,
all MNE-Python plots use Matplotlib under the hood and we can customize
each and every facet of them.
To make the sensor dots smaller, we need to first get a handle on them to
then apply a *.set_* method on them.
Step12: That's quite a lot of objects, but we know that we want to change the
sensor dots, and those are most certainly a "PathCollection" object.
So let's have a look at how many "collections" we have in the axes.
Step13: There is only one! Those must be the sensor dots we were looking for.
We finally found exactly what we needed. Sometimes this can take a bit of
experimentation. | Python Code:
# Authors: Eric Larson <[email protected]>
# Daniel McCloy <[email protected]>
# Stefan Appelhoff <[email protected]>
#
# License: BSD (3-clause)
Explanation: Make figures more publication ready
In this example, we show several use cases to take MNE plots and
customize them for a more publication-ready look.
End of explanation
import os.path as op
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import (make_axes_locatable, ImageGrid,
inset_locator)
import mne
Explanation: :depth: 1
Imports
We are importing everything we need for this example:
End of explanation
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_stc = op.join(data_path, 'MEG', 'sample', 'sample_audvis-meg-eeg-lh.stc')
fname_evoked = op.join(data_path, 'MEG', 'sample', 'sample_audvis-ave.fif')
evoked = mne.read_evokeds(fname_evoked, 'Left Auditory')
evoked.pick_types(meg='grad').apply_baseline((None, 0.))
max_t = evoked.get_peak()[1]
stc = mne.read_source_estimate(fname_stc)
Explanation: Evoked plot with brain activation
Suppose we want a figure with an evoked plot on top, and the brain activation
below, with the brain subplot slightly bigger than the evoked plot. Let's
start by loading some example data <sample-dataset>.
End of explanation
evoked.plot()
stc.plot(views='lat', hemi='split', size=(800, 400), subject='sample',
subjects_dir=subjects_dir, initial_time=max_t,
time_viewer=False, show_traces=False)
Explanation: During interactive plotting, we might see figures like this:
End of explanation
colormap = 'viridis'
clim = dict(kind='value', lims=[4, 8, 12])
# Plot the STC, get the brain image, crop it:
brain = stc.plot(views='lat', hemi='split', size=(800, 400), subject='sample',
subjects_dir=subjects_dir, initial_time=max_t, background='w',
colorbar=False, clim=clim, colormap=colormap,
time_viewer=False, show_traces=False)
screenshot = brain.screenshot()
brain.close()
Explanation: To make a publication-ready figure, first we'll re-plot the brain on a white
background, take a screenshot of it, and then crop out the white margins.
While we're at it, let's change the colormap, set custom colormap limits and
remove the default colorbar (so we can add a smaller, vertical one later):
End of explanation
nonwhite_pix = (screenshot != 255).any(-1)
nonwhite_row = nonwhite_pix.any(1)
nonwhite_col = nonwhite_pix.any(0)
cropped_screenshot = screenshot[nonwhite_row][:, nonwhite_col]
# before/after results
fig = plt.figure(figsize=(4, 4))
axes = ImageGrid(fig, 111, nrows_ncols=(2, 1), axes_pad=0.5)
for ax, image, title in zip(axes, [screenshot, cropped_screenshot],
['Before', 'After']):
ax.imshow(image)
ax.set_title('{} cropping'.format(title))
Explanation: Now let's crop out the white margins and the white gap between hemispheres.
The screenshot has dimensions (h, w, 3), with the last axis being R, G, B
values for each pixel, encoded as integers between 0 and 255. (255,
255, 255) encodes a white pixel, so we'll detect any pixels that differ
from that:
End of explanation
# Tweak the figure style
plt.rcParams.update({
'ytick.labelsize': 'small',
'xtick.labelsize': 'small',
'axes.labelsize': 'small',
'axes.titlesize': 'medium',
'grid.color': '0.75',
'grid.linestyle': ':',
})
Explanation: A lot of figure settings can be adjusted after the figure is created, but
many can also be adjusted in advance by updating the
:data:~matplotlib.rcParams dictionary. This is especially useful when your
script generates several figures that you want to all have the same style:
End of explanation
# figsize unit is inches
fig, axes = plt.subplots(nrows=2, ncols=1, figsize=(4.5, 3.),
gridspec_kw=dict(height_ratios=[3, 4]))
# alternate way #1: using subplot2grid
# fig = plt.figure(figsize=(4.5, 3.))
# axes = [plt.subplot2grid((7, 1), (0, 0), rowspan=3),
# plt.subplot2grid((7, 1), (3, 0), rowspan=4)]
# alternate way #2: using figure-relative coordinates
# fig = plt.figure(figsize=(4.5, 3.))
# axes = [fig.add_axes([0.125, 0.58, 0.775, 0.3]), # left, bot., width, height
# fig.add_axes([0.125, 0.11, 0.775, 0.4])]
# we'll put the evoked plot in the upper axes, and the brain below
evoked_idx = 0
brain_idx = 1
# plot the evoked in the desired subplot, and add a line at peak activation
evoked.plot(axes=axes[evoked_idx])
peak_line = axes[evoked_idx].axvline(max_t, color='#66CCEE', ls='--')
# custom legend
axes[evoked_idx].legend(
[axes[evoked_idx].lines[0], peak_line], ['MEG data', 'Peak time'],
frameon=True, columnspacing=0.1, labelspacing=0.1,
fontsize=8, fancybox=True, handlelength=1.8)
# remove the "N_ave" annotation
axes[evoked_idx].texts = []
# Remove spines and add grid
axes[evoked_idx].grid(True)
axes[evoked_idx].set_axisbelow(True)
for key in ('top', 'right'):
axes[evoked_idx].spines[key].set(visible=False)
# Tweak the ticks and limits
axes[evoked_idx].set(
yticks=np.arange(-200, 201, 100), xticks=np.arange(-0.2, 0.51, 0.1))
axes[evoked_idx].set(
ylim=[-225, 225], xlim=[-0.2, 0.5])
# now add the brain to the lower axes
axes[brain_idx].imshow(cropped_screenshot)
axes[brain_idx].axis('off')
# add a vertical colorbar with the same properties as the 3D one
divider = make_axes_locatable(axes[brain_idx])
cax = divider.append_axes('right', size='5%', pad=0.2)
cbar = mne.viz.plot_brain_colorbar(cax, clim, colormap, label='Activation (F)')
# tweak margins and spacing
fig.subplots_adjust(
left=0.15, right=0.9, bottom=0.01, top=0.9, wspace=0.1, hspace=0.5)
# add subplot labels
for ax, label in zip(axes, 'AB'):
ax.text(0.03, ax.get_position().ymax, label, transform=fig.transFigure,
fontsize=12, fontweight='bold', va='top', ha='left')
Explanation: Now let's create our custom figure. There are lots of ways to do this step.
Here we'll create the figure and the subplot axes in one step, specifying
overall figure size, number and arrangement of subplots, and the ratio of
subplot heights for each row using :mod:GridSpec keywords
<matplotlib.gridspec>. Other approaches (using
:func:~matplotlib.pyplot.subplot2grid, or adding each axes manually) are
shown commented out, for reference.
End of explanation
data_path = mne.datasets.sample.data_path()
fname_raw = op.join(data_path, "MEG", "sample", "sample_audvis_raw.fif")
raw = mne.io.read_raw_fif(fname_raw)
# For the sake of the example, we focus on EEG data
raw.pick_types(meg=False, eeg=True)
Explanation: Custom timecourse with montage inset
Suppose we want a figure with some mean timecourse extracted from a number of
sensors, and we want a smaller panel within the figure to show a head outline
with the positions of those sensors clearly marked.
If you are familiar with MNE, you know that this is something that
:func:mne.viz.plot_compare_evokeds does, see an example output in
ex-hf-sef-data at the bottom.
In this part of the example, we will show you how to achieve this result on
your own figure, without having to use :func:mne.viz.plot_compare_evokeds!
Let's start by loading some example data <sample-dataset>.
End of explanation
# channels to plot:
to_plot = [f"EEG {i:03}" for i in range(1, 5)]
# get the data for plotting in a short time interval from 10 to 20 seconds
start = int(raw.info['sfreq'] * 10)
stop = int(raw.info['sfreq'] * 20)
data, times = raw.get_data(picks=to_plot,
start=start, stop=stop, return_times=True)
# Scale the data from the MNE internal unit V to µV
data *= 1e6
# Take the mean of the channels
mean = np.mean(data, axis=0)
# make a figure
fig, ax = plt.subplots(figsize=(4.5, 3))
# plot some EEG data
ax.plot(times, mean)
Explanation: Let's make a plot.
End of explanation
# recreate the figure (only necessary for our documentation server)
fig, ax = plt.subplots(figsize=(4.5, 3))
ax.plot(times, mean)
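# create an inset axes covering 30% x 30% of the parent axes; loc=2 places it in the upper left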
axins = inset_locator.inset_axes(ax, width="30%", height="30%", loc=2)
# pick_channels() edits the raw object in place, so we'll make a copy here
# so that our raw object stays intact for potential later analysis
raw.copy().pick_channels(to_plot).plot_sensors(title="", axes=axins)
Explanation: So far so good. Now let's add the smaller figure within the figure to show
exactly which sensors we used to make the timecourse.
For that, we use an "inset_axes" that we plot into our existing axes.
The head outline with the sensor positions can be plotted using the
~mne.io.Raw object that is the source of our data.
Specifically, that object already contains all the sensor positions,
and we can plot them using the plot_sensors method.
End of explanation
# If we inspect our axes we find the objects contained in our plot:
print(axins.get_children())
Explanation: That looks nice. But the sensor dots are way too big for our taste. Luckily,
all MNE-Python plots use Matplotlib under the hood and we can customize
each and every facet of them.
To make the sensor dots smaller, we need to first get a handle on them to
then apply a *.set_* method on them.
End of explanation
print(axins.collections)
Explanation: That's quite a lot of objects, but we know that we want to change the
sensor dots, and those are most certainly a "PathCollection" object.
So let's have a look at how many "collections" we have in the axes.
End of explanation
sensor_dots = axins.collections[0]
# Recreate the figure once more; shrink the sensor dots; add axis labels
fig, ax = plt.subplots(figsize=(4.5, 3))
ax.plot(times, mean)
axins = inset_locator.inset_axes(ax, width="30%", height="30%", loc=2)
raw.copy().pick_channels(to_plot).plot_sensors(title="", axes=axins)
sensor_dots = axins.collections[0]
sensor_dots.set_sizes([1])
# add axis labels, and adjust bottom figure margin to make room for them
ax.set(xlabel="Time (s)", ylabel="Amplitude (µV)")
fig.subplots_adjust(bottom=0.2)
Explanation: There is only one! Those must be the sensor dots we were looking for.
We finally found exactly what we needed. Sometimes this can take a bit of
experimentation.
End of explanation |
8,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Network Traffic Forecasting with AutoTS
In telco, accurate forecasts of KPIs (e.g. network traffic, utilizations, user experience, etc.) for communication networks (2G/3G/4G/5G/wired) can help predict network failures, allocate resources, or save energy.
In this notebook, we demonstrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demonstrate how to use AutoTS in project Chronos to do time series forecasting in an automated and distributed way.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in year 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Helper functions
This section defines some helper functions to be used in the following procedures. You can refer to it later when they're used.
Step3: Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe. Steps are as below.
First, run the script get_data.sh to download the raw data. It will download the monthly aggregated traffic data in year 2018 and 2019 into the data folder. The raw data contains aggregated network traffic (average Mbps and total bytes) as well as other metrics.
Second, run extract_data.sh to extract relevant traffic KPIs from the raw data, i.e. AvgRate for average use rate, and total for total bytes. The script will extract the KPIs with timestamps into data/data.csv.
Finally, use pandas to load data/data.csv into a dataframe as shown below
Step4: Below are some example records of the data
Step5: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different datasets.
For the network traffic data we're using, the processing contains 3 parts
Step6: Here, we drop weeks with more than 3 consecutive missing values and fill the remaining missing values.
Step7: Plot the data to see what the KPIs look like
Step8: Time series forecasting with AutoTS
AutoTS provides AutoML support for building end-to-end time series analysis pipelines (including automatic feature generation, model selection and hyperparameter tuning).
The general workflow of automated training contains the two steps below.
1. create an AutoTSTrainer to train a TSPipeline, save it to file to use later or elsewhere if you wish.
2. use TSPipeline to do prediction, evaluation, and incremental fitting as well.
First, you need to initialize RayOnSpark before using auto training (i.e. AutoTSTrainer), and stop it after training is finished. (Note RayOnSpark is not needed if you just use TSPipeline for inference, evaluation or incremental training.)
Step9: Then we initialize an AutoTSTrainer.
* dt_col
Step10: We can set some searching presets such as look_back, which indicates the history time period we want to use for forecasting.
look_back can be an int, which is a fixed value, or a tuple to indicate the range for sampling.
Step11: We need to split the data frame into train, validation and test data frame before training. You can use train_val_test_split as an easy way to finish it.
Step12: Then we fit on train data and validation data.
You can use a recipe to specify the searching method as well as other searching presets such as stop criteria, etc. The LSTMGridRandomRecipe here is a recipe that combines grid search with random search to find the best set of parameters. For more details, please refer to the BigDL documentation.
Step13: We get a TSPipeline after training. Let's print the hyperparameters selected.
Note that past_seq_len is the lookback value that is automatically chosen
Step14: Use it to do prediction, evaluation or incremental fitting.
Step15: Plot actual and predicted values for the AvgRate KPI
Step16: Calculate mean square error and the symmetric mean absolute percentage error.
Step17: You can save the pipeline to file and reload it to do incremental fitting or other tasks.
Step18: You can stop RayOnSpark after auto training.
Step19: Next, we demonstrate how to do incremental fitting with your saved pipeline file.
First load saved pipeline file.
Step20: Then do incremental fitting with TSPipeline.fit(). We use the validation data frame as additional data for demonstration. You can use your new data frame.
Step21: Predict and plot the result after incremental fitting.
Step22: Calculate mean square error and the symmetric mean absolute percentage error. | Python Code:
def get_drop_dates_and_len(df, allow_missing_num=3):
    """Find missing values and get records to drop."""
missing_num = df.total.isnull().astype(int).groupby(df.total.notnull().astype(int).cumsum()).sum()
drop_missing_num = missing_num[missing_num > allow_missing_num]
drop_datetimes = df.iloc[drop_missing_num.index].index
drop_len = drop_missing_num.values
return drop_datetimes, drop_len
def rm_missing_weeks(start_dts, missing_lens, df):
    """Drop weeks that contain more than 3 consecutive missing values.
    If consecutive missing values span across weeks, we remove all of those weeks.
    """
for start_time, l in zip(start_dts, missing_lens):
start = start_time - pd.Timedelta(days=start_time.dayofweek)
start = start.replace(hour=0, minute=0, second=0)
start_week_end = start + pd.Timedelta(days=6)
start_week_end = start_week_end.replace(hour=22, minute=0, second=0)
end_time = start_time + l*pd.Timedelta(hours=2)
if start_week_end < end_time:
end = end_time + pd.Timedelta(days=6-end_time.dayofweek)
end = end.replace(hour=22, minute=0, second=0)
else:
end = start_week_end
df = df.drop(df[start:end].index)
return df
# plot the predicted values and actual values (for the test data)
def plot_result(test_df, pred_df, dt_col="datetime", value_col="AvgRate", look_back=1):
    # align predictions with actual values: the first `look_back` points of the test data have no
    # prediction, and the last predicted point has no matching actual value
pred_value = pred_df[value_col][:-1].values
true_value = test_df[value_col].values[look_back:]
fig, axs = plt.subplots(figsize=(12, 5))
axs.plot(pred_df[dt_col][:-1], pred_value, color='red', label='predicted values')
axs.plot(test_df[dt_col][look_back:], true_value, color='blue', label='actual values')
axs.set_title('the predicted values and actual values (for the test data)')
plt.xlabel(dt_col)
plt.xticks(rotation=45)
plt.ylabel(value_col)
plt.legend(loc='upper left')
plt.show()
Explanation: Network Traffic Forecasting with AutoTS
In telco, accurate forecasts of KPIs (e.g. network traffic, utilizations, user experience, etc.) for communication networks (2G/3G/4G/5G/wired) can help predict network failures, allocate resources, or save energy.
In this notebook, we demonstrate a reference use case where we use the network traffic KPI(s) in the past to predict traffic KPI(s) in the future. We demonstrate how to use AutoTS in project Chronos to do time series forecasting in an automated and distributed way.
For demonstration, we use the publicly available network traffic data repository maintained by the WIDE project and in particular, the network traffic traces aggregated every 2 hours (i.e. AverageRate in Mbps/Gbps and Total Bytes) in year 2018 and 2019 at the transit link of WIDE to the upstream ISP (dataset link).
Helper functions
This section defines some helper functions to be used in the following procedures. You can refer to it later when they're used.
End of explanation
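# The consecutive-gap detection in get_drop_dates_and_len uses a common pandas idiom: grouping the
# is-null flags by the cumulative sum of the not-null flags, so every run of NaNs lands in its own
# group. A tiny illustration on toy data (for intuition only; not used below):
# s = pd.Series([1.0, None, None, None, 2.0, None, 3.0])
# s.isnull().astype(int).groupby(s.notnull().astype(int).cumsum()).sum()  # -> longest run is 3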
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
raw_df = pd.read_csv("data/data.csv")
Explanation: Download raw dataset and load into dataframe
Now we download the dataset and load it into a pandas dataframe. Steps are as below.
First, run the script get_data.sh to download the raw data. It will download the monthly aggregated traffic data in year 2018 and 2019 into the data folder. The raw data contains aggregated network traffic (average Mbps and total bytes) as well as other metrics.
Second, run extract_data.sh to extract relevant traffic KPIs from the raw data, i.e. AvgRate for average use rate, and total for total bytes. The script will extract the KPIs with timestamps into data/data.csv.
Finally, use pandas to load data/data.csv into a dataframe as shown below
End of explanation
raw_df.head()
Explanation: Below are some example records of the data
End of explanation
df = pd.DataFrame(pd.to_datetime(raw_df.StartTime))
# we can find 'AvgRate' is of two scales: 'Mbps' and 'Gbps'
raw_df.AvgRate.str[-4:].unique()
# Unify AvgRate value
df['AvgRate'] = raw_df.AvgRate.apply(lambda x: float(x[:-4]) if x.endswith("Mbps") else float(x[:-4]) * 1000)
df["total"] = raw_df["total"]
df.set_index("StartTime", inplace=True)
full_idx = pd.date_range(start=df.index.min(), end=df.index.max(), freq='2H')
df = df.reindex(full_idx)
print("no. of n/a values:")
print(df.isna().sum())
Explanation: Data pre-processing
Now we need to do data cleaning and preprocessing on the raw data. Note that this part could vary for different datasets.
For the network traffic data we're using, the processing contains 3 parts:
1. Convert string datetime to TimeStamp
2. Unify the measurement scale for the AvgRate value - some records use Mbps, some use Gbps
3. Handle missing data (fill or drop).
End of explanation
drop_dts, drop_len = get_drop_dates_and_len(df)
df = rm_missing_weeks(drop_dts, drop_len, df)
df.ffill(inplace=True)
# AutoTS requires input data frame with a datetime column
df.index.name = "datetime"
df = df.reset_index()
df.head()
df.describe()
Explanation: Here, we drop weeks with more than 3 consecutive missing values and fill the remaining missing values.
End of explanation
ax = df.plot(y='AvgRate',figsize=(12,5), title="AvgRate of network traffic data")
ax = df.plot(y='total',figsize=(12,5), title="total bytes of network traffic data")
Explanation: Plot the data to see what the KPIs look like
End of explanation
# init RayOnSpark in local mode
from bigdl.dllib.nncontext import init_spark_on_local
from bigdl.orca.ray import OrcaRayContext
sc = init_spark_on_local(cores=4, spark_log_level="INFO")
ray_ctx = OrcaRayContext(sc=sc, object_store_memory="1g")
ray_ctx.init()
Explanation: Time series forecasting with AutoTS
AutoTS provides AutoML support for building end-to-end time series analysis pipelines (including automatic feature generation, model selection and hyperparameter tuning).
The general workflow of automated training contains the two steps below.
1. create an AutoTSTrainer to train a TSPipeline, save it to file to use later or elsewhere if you wish.
2. use TSPipeline to do prediction, evaluation, and incremental fitting as well.
First, you need to initialize RayOnSpark before using auto training (i.e. AutoTSTrainer), and stop it after training is finished. (Note RayOnSpark is not needed if you just use TSPipeline for inference, evaluation or incremental training.)
End of explanation
from bigdl.chronos.autots.deprecated.forecast import AutoTSTrainer
trainer = AutoTSTrainer(dt_col="datetime",
target_col="AvgRate",
horizon=1,
extra_features_col=None)
Explanation: Then we initialize an AutoTSTrainer.
* dt_col: the column specifying datetime.
* target_col: target column to predict. Here, we take AvgRate KPI as an example.
* horizon: number of steps to look forward.
* extra_feature_col: a list of columns in the input data frame, other than the target column, that are also used as features.
End of explanation
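For illustration only, if we also wanted to feed the total KPI to the model as an additional input feature, the trainer could be constructed as below; this variant is an assumption for demonstration and is not used in the rest of the notebook.
# Illustrative variant (not used below): also pass the 'total' column as an extra feature.
from bigdl.chronos.autots.deprecated.forecast import AutoTSTrainer

trainer_with_extra = AutoTSTrainer(dt_col="datetime",
                                   target_col="AvgRate",
                                   horizon=1,
                                   extra_features_col=["total"])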
# search the lookback length between 3 days (36 steps) and one week (84 steps) to predict the next 2h
look_back = (36, 84)
Explanation: We can set some search presets such as look_back, which indicates the length of history we want to use for forecasting.
look_back can be an int, in which case it is a fixed value, or a tuple to indicate the range to sample from (see the example after this cell).
End of explanation
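For comparison, a fixed lookback is just a single int rather than a tuple, e.g. always using exactly one week of history; the value below is illustrative and not used in the rest of the notebook.
# Fixed-lookback alternative (not used below): a single int means the lookback is not searched.
fixed_look_back = 84  # exactly one week of history (84 steps of 2 hours)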
from bigdl.chronos.autots.deprecated.preprocessing.utils import train_val_test_split
train_df, val_df, test_df = train_val_test_split(df,
val_ratio=0.1,
test_ratio=0.1,
look_back=look_back[0])
Explanation: We need to split the data frame into train, validation and test data frames before training. You can use train_val_test_split as an easy way to do this.
End of explanation
from bigdl.chronos.autots.deprecated.config.recipe import LSTMGridRandomRecipe
%%time
ts_pipeline = trainer.fit(train_df, val_df,
recipe=LSTMGridRandomRecipe(
num_rand_samples=1,
epochs=1,
look_back=look_back,
batch_size=[64]),
metric="mse")
Explanation: Then we fit on the train data and validation data.
You can use a recipe to specify the search method as well as other search presets such as stop criteria, etc. The LSTMGridRandomRecipe here combines grid search with random search to find the best set of hyperparameters. For more details, please refer to the BigDL documentation.
End of explanation
ts_pipeline.internal.config
Explanation: We get a TSPipeline after training. Let's print the selected hyperparameters.
Note that past_seq_len is the lookback value that was automatically chosen.
End of explanation
pred_df = ts_pipeline.predict(test_df)
Explanation: Use it to do prediction, evaluation or incremental fitting.
End of explanation
# plot the predicted values and actual values
plot_result(test_df, pred_df, dt_col="datetime", value_col="AvgRate", look_back=ts_pipeline.internal.config['past_seq_len'])
Explanation: Plot the actual and predicted values of the AvgRate KPI
End of explanation
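plot_result is a plotting helper from the tutorial's utility module. A rough matplotlib equivalent - an assumption about what it draws, not its actual implementation - could look like this; note that the real helper also takes look_back to align the two series, since the first look_back points of test_df have no prediction.
# Illustrative sketch (assumption): overlay predicted and actual AvgRate values.
import matplotlib.pyplot as plt

def plot_actual_vs_pred(actual_df, pred_df, dt_col="datetime", value_col="AvgRate"):
    fig, ax = plt.subplots(figsize=(12, 5))
    ax.plot(actual_df[dt_col], actual_df[value_col], label="actual")
    ax.plot(pred_df[dt_col], pred_df[value_col], label="predicted")
    ax.set_title("AvgRate: actual vs. predicted")
    ax.legend()
    plt.show()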
mse, smape = ts_pipeline.evaluate(test_df, metrics=["mse", "smape"])
print("Evaluate: the mean square error is", mse)
print("Evaluate: the smape value is", smape)
Explanation: Calculate the mean squared error and the symmetric mean absolute percentage error (sMAPE).
End of explanation
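For reference, sMAPE is commonly defined as the mean of 200 * |y - y_hat| / (|y| + |y_hat|) over all points, giving a value between 0 and 200. A small numpy version of this common variant is shown below; library implementations may differ slightly (e.g. in scaling or edge-case handling), so treat it as a reference definition rather than the exact metric used by evaluate.
# One common sMAPE variant (library implementations may differ in scaling / edge cases).
import numpy as np

def smape(y_true, y_pred, eps=1e-8):
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(200.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred) + eps))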
# save pipeline file
my_ppl_file_path = ts_pipeline.save("/tmp/saved_pipeline/my.ppl")
Explanation: You can save the pipeline to a file and reload it later to do incremental fitting or other tasks.
End of explanation
# stop
ray_ctx.stop()
sc.stop()
Explanation: You can stop RayOnSpark after auto training.
End of explanation
# load file
from bigdl.chronos.autots.deprecated.forecast import TSPipeline
loaded_ppl = TSPipeline.load(my_ppl_file_path)
Explanation: Next, we demonstrate how to do incremental fitting with your saved pipeline file.
First, load the saved pipeline file.
End of explanation
# we use validation data frame as additional data for demonstration.
loaded_ppl.fit(val_df, epochs=2)
Explanation: Then do incremental fitting with TSPipeline.fit(). We use the validation data frame as additional data for demonstration; in practice you would use your new data frame.
End of explanation
# predict results of test_df
new_pred_df = loaded_ppl.predict(test_df)
plot_result(test_df, new_pred_df, look_back=loaded_ppl.internal.config['past_seq_len'])
Explanation: Predict and plot the results after incremental fitting.
End of explanation
# evaluate test_df
mse, smape = loaded_ppl.evaluate(test_df, metrics=["mse", "smape"])
print("Evaluate: the mean square error is", mse)
print("Evaluate: the smape value is", smape)
Explanation: Calculate the mean squared error and the symmetric mean absolute percentage error (sMAPE).
End of explanation |