Unnamed: 0 (int64, 0 to 16k) | text_prompt (stringlengths 110 to 62.1k) | code_prompt (stringlengths 37 to 152k) |
---|---|---|
9,000 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial investigation of drug-gene networks
Brin Rosenthal ([email protected])
April 15, 2016
Prototype for tool to be added to Search
Goals
Step1: Test drug_gene_heatprop module
This section runs the inferred drug heat propagation module from a list of seed genes, and returns a list of genes ranked by their 'heat', or proximity to the seed gene set. These are the genes which we think will be most related to the seed genes.
For a more detailed, step by step description of the process, continue reading past this section.
Step2: More detailed description of methods below...
Load the drug bank database, and create a network out of it
Network is bipartite with types
Step3: What is this drug-gene graph like?
how sparse is it?
Are there genes/drugs that have many connections?
Step4: But we probably want to focus on within-cluster interactions, instead of the whole graph
Download a sample cluster from geneli.st (Adrenocortical carcinoma cluster 250250)
Extract a subnetwork from total drug-gene network containing only the genes from this cluster, and associated drugs
Plot this subnetwork
Step5: Above, we plot the drug-gene interaction network for our sample cluster
We're showing only the genes in the cluster that have associated drugs
This is one option for exploring drug-gene interaction space, as the sparseness of drugs/genes per cluster allows for easy visualization
Another option... heat propagation
Run heat propagation from seed nodes on a sample cluster, to prioritize genes (and their associated drugs) similar to seed node set
Some questions to resolve
Step6: First let's plot the focal cluster of interest (Adrenocortical carcinoma cluster 250250)
Step7: Now we will convert the cluster correlation matrix back to network form
Step8: Now let's look up the drugs associated with these genes to see if there are any good candidates | Python Code:
# import some useful packages
import numpy as np
import matplotlib.pyplot as plt
import seaborn
import networkx as nx
import pandas as pd
import random
import json
# latex rendering of text in graphs
import matplotlib as mpl
mpl.rc('text', usetex = False)
mpl.rc('font', family = 'serif')
%matplotlib inline
Explanation: Initial investigation of drug-gene networks
Brin Rosenthal ([email protected])
April 15, 2016
Prototype for tool to be added to Search
Goals:
Input a gene list
Use the DrugBank database to suggest drugs related to genes in input list
Note: data files and code for this notebook may be found in the 'data' and 'source' directories
End of explanation
# load the module
import sys
sys.path.append('../source')
import drug_gene_heatprop
import imp
imp.reload(drug_gene_heatprop)
path_to_DB_file = '../drugbank.0.json.new' # set path to drug bank file
path_to_cluster_file = 'sample_matrix.csv' # set path to cluster file
seed_genes = ['LETM1','RPL3','GRK4','RWDD4A'] # set seed genes (must be in cluster)
gene_drug_df = drug_gene_heatprop.drug_gene_heatprop(seed_genes,path_to_DB_file,path_to_cluster_file,
plot_flag=True)
gene_drug_df.head(25)
Explanation: Test drug_gene_heatprop module
This section runs the inferred drug heat propagation module from a list of seed genes, and returns a list of genes ranked by their 'heat', or proximity to the seed gene set. These are the genes which we think will be most related to the seed genes.
For a more detailed, step by step description of the process, continue reading past this section.
End of explanation
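Before running the module it may help to see what "heat propagation" boils down to. Below is a minimal, self-contained sketch of a random-walk-with-restart style diffusion; the restart parameter alpha, the column normalization and the networkx-2.x nx.to_numpy_array call are illustrative assumptions, not necessarily what drug_gene_heatprop does internally.
import numpy as np
import networkx as nx

def simple_heat_propagation(G, seed_nodes, alpha=0.5, n_iter=100):
    # rank nodes by their 'heat' after repeated diffusion from the seed set
    nodes = list(G.nodes())
    A = nx.to_numpy_array(G, nodelist=nodes)
    W = A / np.maximum(A.sum(axis=0), 1.0)            # degree-normalized adjacency
    F0 = np.array([1.0 if n in seed_nodes else 0.0 for n in nodes])
    F = F0.copy()
    for _ in range(n_iter):
        F = alpha * W.dot(F) + (1.0 - alpha) * F0     # diffuse, keep restart mass on the seeds
    return dict(zip(nodes, F))                        # higher value = closer to the seed genes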
def load_DB_data(fname):
'''
Load and process the drug bank data
'''
with open(fname, 'r') as f:
read_data = f.read()
f.closed
si = read_data.find('\'\n{\n\t"source":')
sf = read_data.find('\ncurl')
DBdict = dict()
# fill in DBdict
while si > 0:
db_temp = json.loads(read_data[si+2:sf-2])
DBdict[db_temp['drugbank_id']]=db_temp
# update read_data
read_data = read_data[sf+10:]
si = read_data.find('\'\n{\n\t"source":')
sf = read_data.find('\ncurl')
return DBdict
DBdict = load_DB_data('/Users/brin/Documents/DrugBank/drugbank.0.json.new')
# make a network out of drug-gene interactions
DB_el = []
for d in DBdict.keys():
node_list = DBdict[d]['node_list']
for n in node_list:
DB_el.append((DBdict[d]['drugbank_id'],n['name']))
G_DB = nx.Graph()
G_DB.add_edges_from(DB_el)
gene_nodes,drug_nodes = nx.bipartite.sets(G_DB)
gene_nodes = list(gene_nodes)
drug_nodes = list(drug_nodes)
Explanation: More detailed description of methods below...
Load the drug bank database, and create a network out of it
Network is bipartite with types:
Drugs
Genes which are acted on by each drug
End of explanation
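Note that nx.bipartite.sets can raise on disconnected graphs and does not guarantee which of the two sets it returns first, so the gene/drug assignment above is fragile. A more explicit alternative, sketched here assuming the same DBdict structure, is to tag each node with its type while building the graph and split on that attribute.
G_DB_tagged = nx.Graph()
for d in DBdict.keys():
    G_DB_tagged.add_node(DBdict[d]['drugbank_id'], node_type='drug')
    for n in DBdict[d]['node_list']:
        G_DB_tagged.add_node(n['name'], node_type='gene')
        G_DB_tagged.add_edge(DBdict[d]['drugbank_id'], n['name'])
drug_nodes_tagged = [x for x, attrs in G_DB_tagged.nodes(data=True) if attrs.get('node_type') == 'drug']
gene_nodes_tagged = [x for x, attrs in G_DB_tagged.nodes(data=True) if attrs.get('node_type') == 'gene']
print(str(len(gene_nodes_tagged)) + ' genes, ' + str(len(drug_nodes_tagged)) + ' drugs')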
print('--> there are '+str(len(gene_nodes)) + ' genes with ' + str(len(drug_nodes)) + ' corresponding drugs')
DB_degree = pd.Series(dict(G_DB.degree()))
DB_degree = DB_degree.sort_values(ascending=False)
plt.figure(figsize=(18,5))
plt.bar(np.arange(70),DB_degree.head(70),width=.5)
tmp = plt.xticks(np.arange(70)+.4,list(DB_degree.head(70).index),rotation=90,fontsize=11)
plt.xlim(1,71)
plt.ylim(0,200)
plt.grid('off')
plt.ylabel('number of connections (degree)',fontsize=16)
Explanation: What is this drug-gene graph like?
how sparse is it?
Are there genes/drugs that have many connections?
End of explanation
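A quick numeric complement to the degree plot, reusing G_DB, gene_nodes and drug_nodes from above: in a bipartite graph the maximum possible number of edges is |genes| * |drugs|, so the ratio gives a direct sparsity measure.
n_edges = G_DB.number_of_edges()
bipartite_density = n_edges / float(len(gene_nodes) * len(drug_nodes))
print('%d edges, bipartite density %.6f' % (n_edges, bipartite_density))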
# load a sample cluster for network visualization
sample_genes = pd.read_csv('/Users/brin/Documents/DrugBank/sample_cluster.csv',header=None)
sample_genes = list(sample_genes[0])
# also include neighbor genes
neighbor_genes = [nx.neighbors(G_DB,x) for x in sample_genes if x in G_DB.nodes()]
neighbor_genes = [val for sublist in neighbor_genes for val in sublist]
sub_genes = []
sub_genes.extend(sample_genes)
sub_genes.extend(neighbor_genes)
G_DB_sample = nx.subgraph(G_DB,sub_genes)
drug_nodes = list(np.intersect1d(neighbor_genes,G_DB.nodes()))
gene_nodes = list(np.intersect1d(sample_genes,G_DB.nodes()))
# return label positions offset by dx
def calc_pos_labels(pos,dx=.03):
# input node positions from nx.spring_layout()
pos_labels = dict()
for key in pos.keys():
pos_labels[key] = np.array([pos[key][0]+dx,pos[key][1]+dx])
return pos_labels
pos = nx.spring_layout(G_DB_sample,k=.27)
pos_labels = calc_pos_labels(pos)
plt.figure(figsize=(14,14))
nx.draw_networkx_nodes(G_DB_sample,pos=pos,nodelist = drug_nodes,node_shape='s',node_size=80,alpha=.7,label='drugs')
nx.draw_networkx_nodes(G_DB_sample,pos=pos,nodelist = gene_nodes,node_shape='o',node_size=80,node_color='blue',alpha=.7,label='genes')
nx.draw_networkx_edges(G_DB_sample,pos=pos,alpha=.5)
nx.draw_networkx_labels(G_DB_sample,pos=pos_labels,font_size=10)
plt.grid('off')
plt.legend(fontsize=12)
plt.title('Adrenocortical carcinoma cluster 250250',fontsize=16)
Explanation: But we probably want to focus on within-cluster interactions, instead of the whole graph
Download a sample cluster from geneli.st (Adrenocortical carcinoma cluster 250250)
Extract a subnetwork from total drug-gene network containing only the genes from this cluster, and associated drugs
Plot this subnetwork
End of explanation
sample_mat = pd.read_csv('/Users/brin/Documents/DrugBank/sample_matrix.csv',index_col=0)
print(sample_mat.head())
idx_to_node = dict(zip(range(len(sample_mat)),list(sample_mat.index)))
sample_mat = np.array(sample_mat)
sample_mat = sample_mat[::-1,0:-1] # reverse the indices for use in graph creation
Explanation: Above, we plot the drug-gene interaction network for our sample cluster
We're showing only the genes in the cluster that have associated drugs
This is one option for exploring drug-gene interaction space, as the sparseness of drugs/genes per cluster allows for easy visualization
Another option... heat propagation
Run heat propagation from seed nodes on a sample cluster, to prioritize genes (and their associated drugs) similar to seed node set
Some questions to resolve:
- ### How should we handle negative edge weights?
- ### Should we return drugs associated with individual genes, or drugs most associated with total input gene list?
End of explanation
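On the first open question (negative edge weights), the two most common options can be written down directly; this is only a sketch using the sample_mat loaded above, and which one is appropriate depends on whether strong anticorrelation should count as strong coupling. The code further down uses the absolute value.
weights_abs = np.abs(sample_mat)                 # anticorrelation treated as coupling
weights_clipped = np.clip(sample_mat, 0, None)   # negative correlations dropped entirely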
plt.figure(figsize=(7,7))
plt.matshow(sample_mat,cmap='bwr',vmin=-1,vmax=1,fignum=False)
plt.grid('off')
plt.title('Adrenocortical carcinoma cluster 250250',fontsize='16')
Explanation: First let's plot the focal cluster of interest (Adrenocortical carcinoma cluster 250250)
End of explanation
G_cluster = nx.Graph()
G_cluster = nx.from_numpy_matrix(np.abs(sample_mat))
G_cluster = nx.relabel_nodes(G_cluster,idx_to_node)
pos = nx.spring_layout(G_cluster,k=.4)
seed_genes = ['STIM2','USP46','FRYL','COQ2'] #['STIM2','USP46'] # input gene list here
plt.figure(figsize=(10,10))
nx.draw_networkx_nodes(G_cluster,pos=pos,node_size=20,alpha=.5,node_color='blue')
nx.draw_networkx_nodes(G_cluster,pos=pos,nodelist=seed_genes,node_size=50,alpha=.7,node_color='red',linewidths=2)
nx.draw_networkx_edges(G_cluster,pos=pos,alpha=.03)
plt.grid('off')
plt.title('Sample subnetwork: pre-heat propagation',fontsize=16)
Wprime = network_prop.normalized_adj_matrix(G_cluster,weighted=True)
Fnew = network_prop.network_propagation(G_cluster,Wprime,seed_genes)
plt.figure(figsize=(10,10))
nx.draw_networkx_edges(G_cluster,pos=pos,alpha=.03)
nx.draw_networkx_nodes(G_cluster,pos=pos,node_size=20,alpha=.8,node_color=Fnew[G_cluster.nodes()],cmap='jet',
vmin=0,vmax=.005)
nx.draw_networkx_nodes(G_cluster,pos=pos,nodelist=seed_genes,node_size=50,alpha=.7,node_color='red',linewidths=2)
plt.grid('off')
plt.title('Sample subnetwork: post-heat propagation',fontsize=16)
N = 50
Fnew = Fnew.sort_values(ascending=False)
print('Top N hot genes: ')
Fnew.head(N)
# plot the hot subgraph in gene-gene space
G_cluster_sub = nx.subgraph(G_cluster,list(Fnew.head(N).index))
pos = nx.spring_layout(G_cluster_sub,k=.5)
plt.figure(figsize=(10,10))
nx.draw_networkx_nodes(G_cluster_sub,pos=pos,node_size=100,node_color=Fnew[G_cluster_sub.nodes()],cmap='jet',
vmin=0,vmax=.005)
nx.draw_networkx_edges(G_cluster_sub,pos=pos,alpha=.05)
pos_labels = calc_pos_labels(pos,dx=.05)
nx.draw_networkx_labels(G_cluster_sub,pos=pos_labels)
plt.grid('off')
plt.title('Sample cluster: hot subnetwork \n (genes most related to input list)', fontsize=16)
Explanation: Now we will convert the cluster correlation matrix back to network form
End of explanation
top_N_genes = list(Fnew.head(N).index)
top_N_genes = list(np.setdiff1d(top_N_genes,seed_genes)) # only keep non-seed genes
top_N_genes = Fnew[top_N_genes]
top_N_genes = top_N_genes.sort_values(ascending=False)
top_N_genes = list(top_N_genes.index)
drug_candidates_list = seed_genes # build up a list of genes and drugs that may be related to input list
for g in top_N_genes:
if g in G_DB.nodes(): # check if g is in drugbank graph
drug_candidates_list.append(g)
drug_neighs_temp = list(nx.neighbors(G_DB,g))
drug_candidates_list.extend(drug_neighs_temp)
# make a subgraph of these drug/gene candidates
G_DB_sub = nx.subgraph(G_DB,drug_candidates_list)
# define drug_nodes and gene_nodes from the subgraph
drug_nodes = list(np.intersect1d(neighbor_genes,G_DB_sub.nodes()))
gene_nodes = list(np.intersect1d(sample_genes,G_DB_sub.nodes()))
plt.figure(figsize=(12,12))
pos = nx.spring_layout(G_DB_sub)
pos_labels = calc_pos_labels(pos,dx=.05)
nx.draw_networkx_nodes(G_DB_sub,pos=pos,nodelist=gene_nodes,node_size=100,alpha=.7,node_color='blue',label='genes')
nx.draw_networkx_nodes(G_DB_sub,pos=pos,nodelist=drug_nodes,node_size=100,alpha=.7,node_color='red',node_shape='s',label='drugs')
nx.draw_networkx_edges(G_DB_sub,pos=pos,alpha=.5)
nx.draw_networkx_labels(G_DB_sub,pos=pos_labels,font_color='black')
plt.grid('off')
ax_min = np.min(list(pos.values())) - .3
ax_max = np.max(list(pos.values())) + .3
plt.xlim(ax_min,ax_max)
plt.ylim(ax_min,ax_max)
plt.legend(fontsize=14)
#plt.axes().set_aspect('equal')
plt.title('Genes in hot subnetwork with associated drugs', fontsize=16)
Explanation: Now let's look up the drugs associated with these genes to see if there are any good candidates
End of explanation |
9,001 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https
Step2: MuJoCo
More detailed instructions in this tutorial.
Institutional MuJoCo license.
Step4: Machine-locked MuJoCo license.
Step5: RWRL
Step6: RL Unplugged
Step7: Imports
Step8: Data
Step10: Dataset and environment
Step12: D4PG learner
Step13: Training loop
Step14: Evaluation | Python Code:
!pip install dm-acme
!pip install dm-acme[reverb]
!pip install dm-acme[tf]
!pip install dm-sonnet
Explanation: Copyright 2020 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use
this file except in compliance with the License. You may obtain a copy of the
License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
RL Unplugged: Offline D4PG - RWRL
Guide to training an Acme D4PG agent on RWRL data.
<a href="https://colab.research.google.com/github/deepmind/deepmind-research/blob/master/rl_unplugged/rwrl_d4pg.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Installation
End of explanation
#@title Edit and run
mjkey = """
REPLACE THIS LINE WITH YOUR MUJOCO LICENSE KEY
""".strip()
mujoco_dir = "$HOME/.mujoco"
# Install OpenGL deps
!apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-glx libosmesa6 libglew2.0
# Fetch MuJoCo binaries from Roboti
!wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip
!unzip -o -q mujoco.zip -d "$mujoco_dir"
# Copy over MuJoCo license
!echo "$mjkey" > "$mujoco_dir/mjkey.txt"
# Configure dm_control to use the OSMesa rendering backend
%env MUJOCO_GL=osmesa
# Install dm_control
!pip install dm_control
Explanation: MuJoCo
More detailed instructions in this tutorial.
Institutional MuJoCo license.
End of explanation
#@title Add your MuJoCo License and run
mjkey = """
""".strip()
mujoco_dir = "$HOME/.mujoco"
# Install OpenGL dependencies
!apt-get update && apt-get install -y --no-install-recommends \
libgl1-mesa-glx libosmesa6 libglew2.0
# Get MuJoCo binaries
!wget -q https://www.roboti.us/download/mujoco200_linux.zip -O mujoco.zip
!unzip -o -q mujoco.zip -d "$mujoco_dir"
# Copy over MuJoCo license
!echo "$mjkey" > "$mujoco_dir/mjkey.txt"
# Configure dm_control to use the OSMesa rendering backend
%env MUJOCO_GL=osmesa
# Install dm_control, including extra dependencies needed for the locomotion
# mazes.
!pip install dm_control[locomotion_mazes]
Explanation: Machine-locked MuJoCo license.
End of explanation
!git clone https://github.com/google-research/realworldrl_suite.git
!pip install realworldrl_suite/
Explanation: RWRL
End of explanation
!git clone https://github.com/deepmind/deepmind-research.git
%cd deepmind-research
Explanation: RL Unplugged
End of explanation
import collections
import copy
from typing import Mapping, Sequence
import acme
from acme import specs
from acme.agents.tf import actors
from acme.agents.tf import d4pg
from acme.tf import networks
from acme.tf import utils as tf2_utils
from acme.utils import loggers
from acme.wrappers import single_precision
from acme.tf import utils as tf2_utils
import numpy as np
import realworldrl_suite.environments as rwrl_envs
from reverb import replay_sample
import six
from rl_unplugged import rwrl
import sonnet as snt
import tensorflow as tf
Explanation: Imports
End of explanation
domain_name = 'cartpole' #@param
task_name = 'swingup' #@param
difficulty = 'easy' #@param
combined_challenge = 'easy' #@param
combined_challenge_str = str(combined_challenge).lower()
tmp_path = '/tmp/rwrl'
gs_path = f'gs://rl_unplugged/rwrl'
data_path = (f'combined_challenge_{combined_challenge_str}/{domain_name}/'
f'{task_name}/offline_rl_challenge_{difficulty}')
!mkdir -p {tmp_path}/{data_path}
!gsutil cp -r {gs_path}/{data_path}/* {tmp_path}/{data_path}
num_shards_str, = !ls {tmp_path}/{data_path}/* | wc -l
num_shards = int(num_shards_str)
Explanation: Data
End of explanation
#@title Auxiliary functions
def flatten_observation(observation):
"""Flattens multiple observation arrays into a single tensor.
Args:
observation: A mutable mapping from observation names to tensors.
Returns:
A flattened and concatenated observation array.
Raises:
ValueError: If `observation` is not a `collections.abc.MutableMapping`.
"""
if not isinstance(observation, collections.abc.MutableMapping):
raise ValueError('Can only flatten dict-like observations.')
if isinstance(observation, collections.OrderedDict):
keys = six.iterkeys(observation)
else:
# Keep a consistent ordering for other mappings.
keys = sorted(six.iterkeys(observation))
observation_arrays = [tf.reshape(observation[key], [-1]) for key in keys]
return tf.concat(observation_arrays, 0)
def preprocess_fn(sample):
o_tm1, a_tm1, r_t, d_t, o_t = sample.data[:5]
o_tm1 = flatten_observation(o_tm1)
o_t = flatten_observation(o_t)
return replay_sample.ReplaySample(
info=sample.info, data=(o_tm1, a_tm1, r_t, d_t, o_t))
batch_size = 10 #@param
environment = rwrl_envs.load(
domain_name=domain_name,
task_name=f'realworld_{task_name}',
environment_kwargs=dict(log_safety_vars=False, flat_observation=True),
combined_challenge=combined_challenge)
environment = single_precision.SinglePrecisionWrapper(environment)
environment_spec = specs.make_environment_spec(environment)
act_spec = environment_spec.actions
obs_spec = environment_spec.observations
dataset = rwrl.dataset(
tmp_path,
combined_challenge=combined_challenge_str,
domain=domain_name,
task=task_name,
difficulty=difficulty,
num_shards=num_shards,
shuffle_buffer_size=10)
dataset = dataset.map(preprocess_fn).batch(batch_size)
Explanation: Dataset and environment
End of explanation
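Before wiring up the learner, a quick sanity check on one batch can confirm the tuple layout produced by preprocess_fn; this is only a sketch and assumes eager execution.
sample = next(iter(dataset))
o_tm1, a_tm1, r_t, d_t, o_t = sample.data
print('observation', o_tm1.shape, 'action', a_tm1.shape, 'reward', r_t.shape)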
#@title Auxiliary functions
def make_networks(
action_spec: specs.BoundedArray,
hidden_size: int = 1024,
num_blocks: int = 4,
num_mixtures: int = 5,
vmin: float = -150.,
vmax: float = 150.,
num_atoms: int = 51,
):
"""Creates networks used by the agent."""
num_dimensions = np.prod(action_spec.shape, dtype=int)
policy_network = snt.Sequential([
networks.LayerNormAndResidualMLP(
hidden_size=hidden_size, num_blocks=num_blocks),
# Converts the policy output into the same shape as the action spec.
snt.Linear(num_dimensions),
# Note that TanhToSpec applies tanh to the input.
networks.TanhToSpec(action_spec)
])
# The multiplexer concatenates the (maybe transformed) observations/actions.
critic_network = snt.Sequential([
networks.CriticMultiplexer(
critic_network=networks.LayerNormAndResidualMLP(
hidden_size=hidden_size, num_blocks=num_blocks),
observation_network=tf2_utils.batch_concat),
networks.DiscreteValuedHead(vmin, vmax, num_atoms)
])
return {
'policy': policy_network,
'critic': critic_network,
}
# Create the networks to optimize.
online_networks = make_networks(act_spec)
target_networks = copy.deepcopy(online_networks)
# Create variables.
tf2_utils.create_variables(online_networks['policy'], [obs_spec])
tf2_utils.create_variables(online_networks['critic'], [obs_spec, act_spec])
tf2_utils.create_variables(target_networks['policy'], [obs_spec])
tf2_utils.create_variables(target_networks['critic'], [obs_spec, act_spec])
# The learner updates the parameters (and initializes them).
learner = d4pg.D4PGLearner(
policy_network=online_networks['policy'],
critic_network=online_networks['critic'],
target_policy_network=target_networks['policy'],
target_critic_network=target_networks['critic'],
dataset=dataset,
discount=0.99,
target_update_period=100)
Explanation: D4PG learner
End of explanation
for _ in range(100):
learner.step()
Explanation: Training loop
End of explanation
# Create a logger.
logger = loggers.TerminalLogger(label='evaluation', time_delta=1.)
# Create an environment loop.
loop = acme.EnvironmentLoop(
environment=environment,
actor=actors.DeprecatedFeedForwardActor(online_networks['policy']),
logger=logger)
loop.run(5)
Explanation: Evaluation
End of explanation |
9,002 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatial Joins
A spatial join uses binary predicates
such as intersects and crosses to combine two GeoDataFrames based on the spatial relationship
between their geometries.
A common use case might be a spatial join between a point layer and a polygon layer where you want to retain the point geometries and grab the attributes of the intersecting polygons.
Types of spatial joins
We currently support the following methods of spatial joins. We refer to left_df and right_df, which correspond to the two dataframes passed in as args.
Left outer join
In a LEFT OUTER JOIN (how='left'), we keep all rows from the left and duplicate them if necessary to represent multiple hits between the two dataframes. We retain attributes of the right if they intersect and lose right rows that don't intersect. A left outer join implies that we are interested in retaining the geometries of the left.
This is equivalent to the PostGIS query
Step1: Joins
Step2: We're not limited to using the intersection binary predicate. Any of the Shapely geometry methods that return a Boolean can be used by specifying the predicate kwarg.
Step3: We can also conduct a nearest neighbour join with sjoin_nearest. | Python Code:
%matplotlib inline
from shapely.geometry import Point
from geopandas import datasets, GeoDataFrame, read_file
# NYC Boros
zippath = datasets.get_path('nybb')
polydf = read_file(zippath)
# Generate some points
b = [int(x) for x in polydf.total_bounds]
N = 8
pointdf = GeoDataFrame([
{'geometry': Point(x, y), 'value1': x + y, 'value2': x - y}
for x, y in zip(range(b[0], b[2], int((b[2] - b[0]) / N)),
range(b[1], b[3], int((b[3] - b[1]) / N)))])
# Make sure they're using the same projection reference
pointdf.crs = polydf.crs
pointdf
polydf
pointdf.plot()
polydf.plot()
Explanation: Spatial Joins
A spatial join uses binary predicates
such as intersects and crosses to combine two GeoDataFrames based on the spatial relationship
between their geometries.
A common use case might be a spatial join between a point layer and a polygon layer where you want to retain the point geometries and grab the attributes of the intersecting polygons.
Types of spatial joins
We currently support the following methods of spatial joins. We refer to left_df and right_df, which correspond to the two dataframes passed in as args.
Left outer join
In a LEFT OUTER JOIN (how='left'), we keep all rows from the left and duplicate them if necessary to represent multiple hits between the two dataframes. We retain attributes of the right if they intersect and lose right rows that don't intersect. A left outer join implies that we are interested in retaining the geometries of the left.
This is equivalent to the PostGIS query:
```
SELECT pts.geom, pts.id as ptid, polys.id as polyid
FROM pts
LEFT OUTER JOIN polys
ON ST_Intersects(pts.geom, polys.geom);
geom | ptid | polyid
--------------------------------------------+------+--------
010100000040A9FBF2D88AD03F349CD47D796CE9BF | 4 | 10
010100000048EABE3CB622D8BFA8FBF2D88AA0E9BF | 3 | 10
010100000048EABE3CB622D8BFA8FBF2D88AA0E9BF | 3 | 20
0101000000F0D88AA0E1A4EEBF7052F7E5B115E9BF | 2 | 20
0101000000818693BA2F8FF7BF4ADD97C75604E9BF | 1 |
(5 rows)
```
Right outer join
In a RIGHT OUTER JOIN (how='right'), we keep all rows from the right and duplicate them if necessary to represent multiple hits between the two dataframes. We retain attributes of the left if they intersect and lose left rows that don't intersect. A right outer join implies that we are interested in retaining the geometries of the right.
This is equivalent to the PostGIS query:
```
SELECT polys.geom, pts.id as ptid, polys.id as polyid
FROM pts
RIGHT OUTER JOIN polys
ON ST_Intersects(pts.geom, polys.geom);
geom | ptid | polyid
----------+------+--------
01...9BF | 4 | 10
01...9BF | 3 | 10
02...7BF | 3 | 20
02...7BF | 2 | 20
00...5BF | | 30
(5 rows)
```
Inner join
In an INNER JOIN (how='inner'), we keep rows from the right and left only where their binary predicate is True. We duplicate them if necessary to represent multiple hits between the two dataframes. We retain attributes of the right and left only if they intersect and lose all rows that do not. An inner join implies that we are interested in retaining the geometries of the left.
This is equivalent to the PostGIS query:
```
SELECT pts.geom, pts.id as ptid, polys.id as polyid
FROM pts
INNER JOIN polys
ON ST_Intersects(pts.geom, polys.geom);
geom | ptid | polyid
--------------------------------------------+------+--------
010100000040A9FBF2D88AD03F349CD47D796CE9BF | 4 | 10
010100000048EABE3CB622D8BFA8FBF2D88AA0E9BF | 3 | 10
010100000048EABE3CB622D8BFA8FBF2D88AA0E9BF | 3 | 20
0101000000F0D88AA0E1A4EEBF7052F7E5B115E9BF | 2 | 20
(4 rows)
```
Spatial Joins between two GeoDataFrames
Let's take a look at how we'd implement these using GeoPandas. First, load up the NYC test data into GeoDataFrames:
End of explanation
join_left_df = pointdf.sjoin(polydf, how="left")
join_left_df
# Note the NaNs where the point did not intersect a boro
join_right_df = pointdf.sjoin(polydf, how="right")
join_right_df
# Note Staten Island is repeated
join_inner_df = pointdf.sjoin(polydf, how="inner")
join_inner_df
# Note the lack of NaNs; dropped anything that didn't intersect
Explanation: Joins
End of explanation
pointdf.sjoin(polydf, how="left", predicate="within")
Explanation: We're not limited to using the intersection binary predicate. Any of the Shapely geometry methods that return a Boolean can be used by specifying the predicate kwarg.
End of explanation
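The exact set of supported predicates depends on the installed GeoPandas/Shapely versions; one way to list them (this attribute is an assumption and may not exist in older releases) is via the spatial index:
print(polydf.sindex.valid_query_predicates)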
pointdf.sjoin_nearest(polydf, how="left", distance_col="Distances")
# Note the optional Distances column with computed distances between each point
# and the nearest polydf geometry.
Explanation: We can also conduct a nearest neighbour join with sjoin_nearest.
End of explanation |
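Building on the nearest join above, sjoin_nearest can also be bounded by a search radius with max_distance, expressed in the units of the layers' CRS; the cutoff below is purely illustrative.
pointdf.sjoin_nearest(polydf, how="left", max_distance=100000, distance_col="Distances")
# points farther than the cutoff from every polygon keep NaN in the right-side columns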
9,003 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EXP 1-Random
In this experiment we generate 1000 sequences each comprising 10 SDRs generated at random. We present these sequences to the TM with learning "on". Each training epoch starts by shuffling the 1000 sequences and presenting each of them to the TM. During the simulation we keep track of spike trains from all cells. We use this data to estimate pairwise correlations among cells.
Step1: Feed sequences to the TM
Step2: ISI analysis (with Poisson model too)
Step3: Raster Plots
Step4: Quick Accuracy Test
Step5: Elad Plot
Step6: Save TM
Step7: Analysis of input | Python Code:
import numpy as np
import random
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
from nupic.bindings.algorithms import TemporalMemory as TM
from htmresearch.support.neural_correlations_utils import *
uintType = "uint32"
random.seed(1)
symbolsPerSequence = 10
numSequences = 1000
epochs = 10
totalTS = epochs * numSequences * symbolsPerSequence
tm = TM(columnDimensions = (2048,),
cellsPerColumn=8,
initialPermanence=0.21,
connectedPermanence=0.3,
minThreshold=15,
maxNewSynapseCount=40,
permanenceIncrement=0.1,
permanenceDecrement=0.1,
activationThreshold=15,
predictedSegmentDecrement=0.01,
)
sparsity = 0.02
sparseCols = int(tm.numberOfColumns() * sparsity)
Explanation: EXP 1-Random
In this experiment we generate 1000 sequences each comprising 10 SDRs generated at random. We present these sequences to the TM with learning "on". Each training epoch starts by shuffling the 1000 sequences and presenting each of them to the TM. During the simulation we keep track of spike trains from all cells. We use this data to estimate pairwise correlations among cells.
End of explanation
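For reference, the "pairwise correlations" tracked during the simulation are Pearson correlation coefficients (PCC) between the binary spike trains of pairs of cells; a toy sketch with two independent random trains at roughly the same sparsity used above:
toy_a = np.random.binomial(1, sparsity, size=1000)
toy_b = np.random.binomial(1, sparsity, size=1000)
print('example PCC for independent trains:', np.corrcoef(toy_a, toy_b)[0, 1])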
# Create sequences
allSequences = []
for s in range(numSequences):
sequence = generateRandomSequence(symbolsPerSequence, tm.numberOfColumns(), sparsity)
allSequences.append(sequence)
spikeTrains = np.zeros((tm.numberOfCells(), totalTS), dtype = "uint32")
columnUsage = np.zeros(tm.numberOfColumns(), dtype="uint32")
spikeCount = np.zeros(totalTS, dtype="uint32")
ts = 0
entropyX = []
entropyY = []
negPCCX_cells = []
negPCCY_cells = []
numSpikesX = []
numSpikesY = []
numSpikes = 0
negPCCX_cols = []
negPCCY_cols = []
traceX = []
traceY = []
# Randomly generate the indices of the columns to keep track during simulation time
colIndicesLarge = np.random.permutation(tm.numberOfColumns())[0:125] # keep track of 125 columns = 1000 cells
for epoch in range(epochs):
# shuffle sequences
print ""
print "Epoch: " + str(epoch)
seqIndices = np.random.permutation(np.arange(numSequences))
for s in range(numSequences):
if s > 0 and s % 100 == 0:
print str(s) + " sequences processed"
for symbol in range(symbolsPerSequence):
tm.compute(allSequences[seqIndices[s]][symbol], learn=True)
for cell in tm.getActiveCells():
spikeTrains[cell, ts] = 1
numSpikes += 1
spikeCount[ts] += 1
# Obtain active columns:
activeColumnsIndices = [tm.columnForCell(i) for i in tm.getActiveCells()]
currentColumns = [1 if i in activeColumnsIndices else 0 for i in range(tm.numberOfColumns())]
for col in np.nonzero(currentColumns)[0]:
columnUsage[col] += 1
if ts > 0 and ts % int(totalTS * 0.1) == 0:
numSpikesX.append(ts)
numSpikesY.append(numSpikes)
numSpikes = 0
#print "++ Analyzing correlations (cells at random) ++"
subSpikeTrains = subSample(spikeTrains, 1000, tm.numberOfCells(), ts, 1000)
(corrMatrix, numNegPCC) = computePWCorrelations(subSpikeTrains, removeAutoCorr=True)
negPCCX_cells.append(ts)
negPCCY_cells.append(numNegPCC)
traceX.append(ts)
#traceY.append(sum(1 for i in corrMatrix.ravel() if i > 0.5))
#traceY.append(np.std(corrMatrix))
#traceY.append(sum(1 for i in corrMatrix.ravel() if i > -0.05 and i < 0.1))
traceY.append(sum(1 for i in corrMatrix.ravel() if i > 0.0))
bins = 300
plt.hist(corrMatrix.ravel(), bins, alpha=0.5)
plt.xlim(-0.05,0.1)
plt.xlabel("PCC")
plt.ylabel("Frequency")
plt.savefig("cellsHist_" + str(ts))
plt.close()
entropyX.append(ts)
entropyY.append(computeEntropy(subSpikeTrains))
#print "++ Analyzing correlations (whole columns) ++"
### First the LARGE subsample of columns:
subSpikeTrains = subSampleWholeColumn(spikeTrains, colIndicesLarge, tm.getCellsPerColumn(), ts, 1000)
(corrMatrix, numNegPCC) = computePWCorrelationsWithinCol(subSpikeTrains, True, tm.getCellsPerColumn())
negPCCX_cols.append(ts)
negPCCY_cols.append(numNegPCC)
#print "++ Generating histogram ++"
plt.hist(corrMatrix.ravel(), alpha=0.5)
plt.xlabel("PCC")
plt.ylabel("Frequency")
plt.savefig("colsHist_" + str(ts))
plt.close()
ts += 1
print "*** DONE ***"
plt.plot(traceX, traceY)
plt.xlabel("Time")
plt.ylabel("Positive PCC Count")
plt.savefig("positivePCCTrace")
plt.close()
sparsityTraceX = []
sparsityTraceY = []
for i in range(totalTS - 1000):
sparsityTraceX.append(i)
sparsityTraceY.append(np.mean(spikeCount[i:1000 + i]) / tm.numberOfCells())
plt.plot(sparsityTraceX, sparsityTraceY)
plt.xlabel("Time")
plt.ylabel("Sparsity")
plt.savefig("sparsityTrace")
plt.close()
# plot trace of negative PCCs
plt.plot(negPCCX_cells, negPCCY_cells)
plt.xlabel("Time")
plt.ylabel("Negative PCC Count")
plt.savefig("negPCCTrace_cells")
plt.close()
plt.plot(negPCCX_cols, negPCCY_cols)
plt.xlabel("Time")
plt.ylabel("Negative PCC Count")
plt.savefig("negPCCTrace_cols")
plt.close()
plt.plot(numSpikesX, numSpikesY)
plt.xlabel("Time")
plt.ylabel("Num Spikes")
plt.savefig("numSpikesTrace")
plt.close()
# plot entropy
plt.plot(entropyX, entropyY)
plt.xlabel("Time")
plt.ylabel("Entropy")
plt.savefig("entropyTM")
plt.close()
plt.hist(columnUsage)
plt.xlabel("Number of times active")
plt.ylabel("Number of columns")
plt.savefig("columnUsage")
plt.close()
Explanation: Feed sequences to the TM
End of explanation
subSpikeTrains = subSample(spikeTrains, 1000, tm.numberOfCells(), 0, 0)
isi = computeISI(subSpikeTrains)
# Print ISI distribution of TM
bins = 100
plt.hist(isi, bins)
plt.xlim(0,1000)
# plt.xlim(89500,92000)
plt.xlabel("ISI")
plt.ylabel("Frequency")
plt.savefig("isiTM")
plt.close()
print np.mean(isi)
print np.std(isi)
print np.std(isi)/np.mean(isi)
# Generate spike distribution
spikeCount = []
for cell in range(np.shape(subSpikeTrains)[0]):
spikeCount.append(np.count_nonzero(subSpikeTrains[cell,:]))
bins = 25
plt.hist(spikeCount, bins)
plt.xlabel("Spike Count")
plt.ylabel("Number of cells")
plt.savefig("spikesHist_TM")
plt.close()
#firingRate = 18
firingRate = np.mean(subSpikeTrains) * 1000
print "firing rate: " + str(firingRate)
pSpikeTrain = poissonSpikeGenerator(int(firingRate),np.shape(subSpikeTrains)[1],np.shape(subSpikeTrains)[0])
isi = computeISI(pSpikeTrain)
# Print ISI distribution of Poisson model
#bins = np.linspace(np.min(isi), np.max(isi), 50)
bins = 100
plt.hist(isi, bins)
plt.xlim(0,600)
# plt.xlim(89500,92000)
plt.xlabel("ISI")
plt.ylabel("Frequency")
plt.savefig("isiPOI")
plt.close()
print np.mean(isi)
print np.std(isi)
print np.std(isi)/np.mean(isi)
# Generate spike distribution
spikeCount = []
for cell in range(np.shape(pSpikeTrain)[0]):
spikeCount.append(np.count_nonzero(pSpikeTrain[cell,:]))
bins = 25
plt.hist(spikeCount, bins)
plt.xlabel("Spike Count")
plt.ylabel("Number of cells")
plt.savefig("spikesHist_POI")
plt.close()
Explanation: ISI analysis (with Poisson model too)
End of explanation
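A useful reference point when reading the ISI histograms above: for a homogeneous Poisson process the ISIs are exponentially distributed, so std(ISI)/mean(ISI) should be close to 1 and the mean ISI (in timesteps) is roughly 1000 divided by the firing rate. A quick check, as a sketch using the firingRate computed above:
expected_mean_isi = 1000.0 / firingRate   # firingRate is in spikes per 1000 timesteps here
print('expected Poisson mean ISI (timesteps):', expected_mean_isi)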
subSpikeTrains = subSample(spikeTrains, 100, tm.numberOfCells(), -1, 1000)
rasterPlot(subSpikeTrains, "TM")
pSpikeTrain = poissonSpikeGenerator(firingRate,np.shape(subSpikeTrains)[1],np.shape(subSpikeTrains)[0])
rasterPlot(pSpikeTrain, "Poisson")
Explanation: Raster Plots
End of explanation
simpleAccuracyTest("random", tm, allSequences)
Explanation: Quick Accuracy Test
End of explanation
# Sample from both TM_SpikeTrains and Poisson_SpikeTrains. 10 cells for 1000 (?) timesteps
wordLength = 10
firingRate = np.mean(subSpikeTrains) * 1000
# generate all 2^N strings:
binaryStrings = list(itertools.product([0, 1], repeat=wordLength))
trials = 10
x = [] #observed
y = [] #predicted by random model
for t in range(trials):
print "Trial: " + str(t)
# sample from spike trains
subSpikeTrains = subSample(spikeTrains, wordLength, tm.numberOfCells(), 0, 0)
pSpikeTrain = poissonSpikeGenerator(firingRate,np.shape(subSpikeTrains)[1],np.shape(subSpikeTrains)[0])
for i in range(2**wordLength):
if i == 0:
continue
# if i % 100 == 0:
# print str(i) + " words processed"
binaryWord = np.array(binaryStrings[i], dtype="uint32")
x.append(countInSample(binaryWord, subSpikeTrains))
y.append(countInSample(binaryWord, pSpikeTrain))
# print "**All words processed**"
# print ""
print "*** DONE ***"
plt.loglog(x, y, 'bo',basex=10)
plt.xlabel("Observed")
plt.ylabel("Predicted")
plt.plot(x,x,'k-')
plt.xlim(0,np.max(x))
plt.savefig("EladPlot")
plt.close()
Explanation: Elad Plot
End of explanation
saveTM(tm)
# to load the TM back from the file do:
with open('tm.nta', 'rb') as f:
proto2 = TemporalMemoryProto_capnp.TemporalMemoryProto.read(f, traversal_limit_in_words=2**61)
tm = TM.read(proto2)
Explanation: Save TM
End of explanation
overlapMatrix = inputAnalysis(allSequences, "random", tm.numberOfColumns())
# show heatmap of overlap matrix
plt.imshow(overlapMatrix, cmap='spectral', interpolation='nearest')
cb = plt.colorbar()
cb.set_label('Overlap Score')
plt.savefig("overlapScore_heatmap")
plt.close()
# plt.show()
# generate histogram
bins = 60
(n, bins, patches) = plt.hist(overlapMatrix.ravel(), bins, alpha=0.5)
plt.xlabel("Overlap Score")
plt.ylabel("Frequency")
plt.savefig("overlapScore_hist")
plt.xlim(0,0.15)
plt.xlabel("Overlap Score")
plt.ylabel("Frequency")
plt.savefig("overlapScore_hist_ZOOM")
plt.close()
x = []
trials = 1000
for t in range(trials):
pSpikeTrain = poissonSpikeGenerator(18,1000,1)
x.append(np.count_nonzero(pSpikeTrain))
bins = 25
plt.hist(x, bins)
plt.savefig("test_spikePOI")
plt.close()
Explanation: Analysis of input
End of explanation |
9,004 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading the data
Step1: Build clustering model
Here we build a k-means model and select the "optimal" number of clusters.
Here we see that the optimal number of clusters is 2.
Step2: Build the optimal model and apply it
Step3: Cluster Profiles
Here, the optimal model has two clusters: cluster 0 with 399 cases and cluster 1 with 537 cases.
This model is based on binary inputs. Given this, the best description of the clusters is the distribution of zeros and ones for each input (question).
The figure below gives the cluster profiles of this model, with cluster 0 on the left and cluster 1 on the right. The questions involved (highest bars) differ between the clusters. | Python Code:
def loadContributions(file, withsexe=False):
contributions = pd.read_json(path_or_buf=file, orient="columns")
rows = [];
rindex = [];
for i in range(0, contributions.shape[0]):
row = {};
row['id'] = contributions['id'][i]
rindex.append(contributions['id'][i])
if (withsexe):
if (contributions['sexe'][i] == 'Homme'):
row['sexe'] = 0
else:
row['sexe'] = 1
for question in contributions['questions'][i]:
if (question.get('Reponse')) and (question['texte'][0:5] != 'Savez') and (question['titreQuestion'][-2:] != '10'):
row[question['titreQuestion']+' : '+question['texte']] = 1
for criteres in question.get('Reponse'):
# print(criteres['critere'].keys())
row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1
rows.append(row)
df = pd.DataFrame(data=rows)
df.fillna(0, inplace=True)
return df
df = loadContributions('../data/EGALITE2.brut.json', True)
df.fillna(0, inplace=True)
df.index = df['id']
#df.to_csv('consultation_an.csv', format='%d')
#df.columns = ['Q_' + str(col+1) for col in range(len(df.columns) - 2)] + ['id' , 'sexe']
df.head()
Explanation: Reading the data
End of explanation
from sklearn.cluster import KMeans
from sklearn import metrics
import numpy as np
X = df.drop('id', axis=1).values
def train_kmeans(nb_clusters, X):
kmeans = KMeans(n_clusters=nb_clusters, random_state=0).fit(X)
return kmeans
#print(kmeans.predict(X))
#kmeans.cluster_centers_
def select_nb_clusters():
perfs = {};
for nbclust in range(2,10):
kmeans_model = train_kmeans(nbclust, X);
labels = kmeans_model.labels_
# from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index
# we are in an unsupervised model. cannot get better!
# perfs[nbclust] = metrics.calinski_harabaz_score(X, labels);
perfs[nbclust] = metrics.silhouette_score(X, labels);
print(perfs);
return perfs;
df['clusterindex'] = train_kmeans(4, X).predict(X)
#df
perfs = select_nb_clusters();
# result :
# {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438}
optimal_nb_clusters = max(perfs, key=perfs.get);
print("optimal_nb_clusters" , optimal_nb_clusters);
Explanation: Build clustering model
Here we build a k-means model and select the "optimal" number of clusters.
Here we see that the optimal number of clusters is 2.
End of explanation
km_model = train_kmeans(optimal_nb_clusters, X);
df['clusterindex'] = km_model.predict(X)
lGroupBy = df.groupby(['clusterindex']).mean();
cluster_profile_counts = df.groupby(['clusterindex']).count();
cluster_profile_means = df.groupby(['clusterindex']).mean();
global_counts = df.count()
global_means = df.mean()
cluster_profile_counts.head(10)
df_profiles = pd.DataFrame();
nbclusters = cluster_profile_means.shape[0]
df_profiles['clusterindex'] = range(nbclusters)
for col in cluster_profile_means.columns:
if(col != "clusterindex"):
df_profiles[col] = np.zeros(nbclusters)
for cluster in range(nbclusters):
df_profiles[col][cluster] = cluster_profile_means[col][cluster]
# row.append(df[col].mean());
df_profiles.head()
#print(df_profiles.columns)
intereseting_columns = {};
for col in df_profiles.columns:
if(col != "clusterindex"):
global_mean = df[col].mean()
diff_means_global = abs(df_profiles[col] - global_mean). max();
# print(col , diff_means_global)
if(diff_means_global > 0.05):
intereseting_columns[col] = True
#print(intereseting_columns)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
Explanation: Build the optimal model and apply it
End of explanation
interesting = list(intereseting_columns.keys())
df_profiles_sorted = df_profiles[interesting].sort_index(axis=1)
df_profiles_sorted.plot.bar(figsize =(1, 1))
df_profiles_sorted.plot.bar(figsize =(16, 8), legend=False)
df_profiles_sorted.T
#df_profiles.sort_index(axis=1).T
Explanation: Cluster Profiles
Here, the optimal model has two clusters: cluster 0 with 399 cases and cluster 1 with 537 cases.
This model is based on binary inputs. Given this, the best description of the clusters is the distribution of zeros and ones for each input (question).
The figure below gives the cluster profiles of this model, with cluster 0 on the left and cluster 1 on the right. The questions involved (highest bars) differ between the clusters.
End of explanation |
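As a complement to the profile bars, each cluster can be summarized by the questions whose cluster mean deviates most from the global mean, in the same spirit as the intereseting_columns filter above; the helper below is a sketch and the cutoff of 5 questions is arbitrary.
def top_differentiating_questions(profiles, data, n=5):
    # for each cluster, rank questions by |cluster mean - global mean|
    summary = {}
    skip = {'clusterindex', 'id', 'sexe'}
    for cluster in profiles['clusterindex']:
        gaps = {}
        for col in profiles.columns:
            if col in skip:
                continue
            gaps[col] = abs(profiles.loc[cluster, col] - data[col].mean())
        summary[cluster] = sorted(gaps, key=gaps.get, reverse=True)[:n]
    return summary

top_differentiating_questions(df_profiles, df)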
9,005 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Cell Magic Tutorial
Interactions with MLDB occurs via a REST API. Interacting with a REST API over HTTP from a Notebook interface can be a little bit laborious if you're using a general-purpose Python library like requests directly, so MLDB comes with a Python library called pymldb to ease the pain.
pymldb does this in three ways
Step1: And then we'll ask it for some help
Step2: The most basic way in which the %mldb magic can help us with MLDB's REST API is by allowing us to type natural-feeling REST commands, like this one, which will list all of the available dataset types
Step3: You can use similar syntax to run PUT, POST and DELETE queries as well.
Advanced Magic
The %mldb magic system also includes syntax to do more advanced operations like loading and querying data. Let's load the dataset from the Predicting Titanic Survival demo with a single command (after deleting it first if it's already loaded)
Step4: And now let's run an SQL query on it
Step5: We can get the results out as a Pandas DataFrame just as easily
Step6: Server-Side Python Magic
Python code which is executed in a normal Notebook cell runs within the Notebook Python interpreter. MLDB supports the sending of Python scripts via HTTP for execution within its own in-process Python interpreter. Server-side python code gets access to a high-performance version of the REST API which bypasses HTTP, via an mldb.perform() function.
There's an %mldb magic command for running server-side Python code, from the comfort of your Notebook | Python Code:
%reload_ext pymldb
Explanation: Cell Magic Tutorial
Interactions with MLDB occur via a REST API. Interacting with a REST API over HTTP from a Notebook interface can be a little bit laborious if you're using a general-purpose Python library like requests directly, so MLDB comes with a Python library called pymldb to ease the pain.
pymldb does this in three ways:
the %mldb magics: these are Jupyter line- and cell-magic commands which allow you to make raw HTTP calls to MLDB, and also provides some higher-level functions. This tutorial shows you how to use them.
the Python Resource class: this is simple class which wraps the requests library so as to make HTTP calls to the MLDB API more friendly in a Notebook environment. Check out the Resource Wrapper Tutorial for more info on the Resource class.
the Python BatFrame class: this is a class that behaves like the Pandas DataFrame but offloads computation to the server via HTTP calls. Check out the BatFrame Tutorial for more info on the BatFrame.
The %mldb Magic System
Basic Magic
We'll start by initializing the %mldb magic system
End of explanation
%mldb help
Explanation: And then we'll ask it for some help
End of explanation
%mldb GET /v1/types/datasets
Explanation: The most basic way in which the %mldb magic can help us with MLDB's REST API is by allowing us to type natural-feeling REST commands, like this one, which will list all of the available dataset types:
End of explanation
%mldb DELETE /v1/datasets/titanic
%mldb loadcsv titanic https://raw.githubusercontent.com/datacratic/mldb-pytanic-plugin/master/titanic_train.csv
Explanation: You can use similar syntax to run PUT, POST and DELETE queries as well.
Advanced Magic
The %mldb magic system also includes syntax to do more advanced operations like loading and querying data. Let's load the dataset from the Predicting Titanic Survival demo with a single command (after deleting it first if it's already loaded):
End of explanation
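To illustrate the PUT syntax mentioned above, creating an empty mutable dataset might look like this; the /v1/datasets endpoint is standard MLDB, but how the magic accepts the JSON body is an assumption here, so confirm the exact form with %mldb help.
%mldb PUT /v1/datasets/example {"type": "sparse.mutable"}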
%mldb query select * from titanic limit 5
Explanation: And now let's run an SQL query on it:
End of explanation
df = %mldb query select * from titanic
type(df)
Explanation: We can get the results out as a Pandas DataFrame just as easily:
End of explanation
%%mldb py
# this code will run on the server!
print mldb.perform("GET", "/v1/types/datasets", [], {})["response"]
Explanation: Server-Side Python Magic
Python code which is executed in a normal Notebook cell runs within the Notebook Python interpreter. MLDB supports the sending of Python scripts via HTTP for execution within its own in-process Python interpreter. Server-side python code gets access to a high-performance version of the REST API which bypasses HTTP, via an mldb.perform() function.
There's an %mldb magic command for running server-side Python code, from the comfort of your Notebook:
End of explanation |
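The same perform(method, path, params, payload) signature shown above can issue writes as well; a sketch (the "sparse.mutable" dataset type is an assumption, check GET /v1/types/datasets for what is actually installed):
%%mldb py
# runs on the server, mirroring the GET example above
print mldb.perform("PUT", "/v1/datasets/example2", [], {"type": "sparse.mutable"})["response"]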
9,006 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression Week 3
Step1: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x
Step2: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
Step3: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree
Step4: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call
Step5: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
Step6: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
Step7: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
Step8: NOTE
Step9: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial? | Python Code:
import pandas as pd
import numpy as np
Explanation: Regression Week 3: Assessing Fit (polynomial regression)
In this notebook you will compare different regression models in order to assess which model fits best. We will be using polynomial regression as a means to examine this topic. In particular you will:
* Write a function to take an SArray and a degree and return an SFrame where each column is the SArray to a polynomial value up to the total degree e.g. degree = 3 then column 1 is the SArray column 2 is the SArray squared and column 3 is the SArray cubed
* Use matplotlib to visualize polynomial regressions
* Use matplotlib to visualize the same polynomial degree on different subsets of the data
* Use a validation set to select a polynomial degree
* Assess the final fit using test data
We will continue to use the House data from previous notebooks.
Fire up graphlab create
End of explanation
tmp = np.array([1., 2., 3.])
tmp_cubed = tmp**3
print(tmp)
print(tmp_cubed)
Explanation: Next we're going to write a polynomial function that takes an SArray and a maximal degree and returns an SFrame with columns containing the SArray to all the powers up to the maximal degree.
The easiest way to apply a power to an SArray is to use the .apply() and lambda x: functions.
For example to take the example array and compute the third power we can do as follows: (note running this cell the first time may take longer than expected since it loads graphlab)
End of explanation
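For reference, the .apply()-with-a-lambda idiom mentioned above looks like this when a pandas Series stands in for the SArray (a sketch):
tmp_series = pd.Series(tmp)
print(tmp_series.apply(lambda x: x**3))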
ex_dataframe = pd.DataFrame()
ex_dataframe['power_1'] = tmp
print(ex_dataframe)
Explanation: We can create an empty SFrame using graphlab.SFrame() and then add any columns to it with ex_sframe['column_name'] = value. For example we create an empty SFrame and make the column 'power_1' to be the first power of tmp (i.e. tmp itself).
End of explanation
def polynomial_sframe(feature, degree):
    # assume that degree >= 1
    # initialize the frame (a pandas DataFrame stands in for the SFrame here):
    poly_sframe = pd.DataFrame()
    # and set poly_sframe['power_1'] equal to the passed feature
    poly_sframe['power_1'] = feature
    # first check if degree > 1
    if degree > 1:
        # then loop over the remaining degrees:
        # range usually starts at 0 and stops at the endpoint-1. We want it to start at 2 and stop at degree
        for power in range(2, degree + 1):
            # first we'll give the column a name:
            name = 'power_' + str(power)
            # then assign poly_sframe[name] to the appropriate power of feature
            poly_sframe[name] = feature ** power
    return poly_sframe
Explanation: Polynomial_sframe function
Using the hints above complete the following function to create an SFrame consisting of the powers of an SArray up to a specific degree:
End of explanation
print(polynomial_sframe(tmp, 3))
Explanation: To test your function consider the smaller tmp variable and what you would expect the outcome of the following call:
End of explanation
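For the test call above, the expected result is a frame holding the first three powers of [1., 2., 3.], roughly as follows (formatting aside):
#    power_1  power_2  power_3
# 0      1.0      1.0      1.0
# 1      2.0      4.0      8.0
# 2      3.0      9.0     27.0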
sales = graphlab.SFrame('kc_house_data.gl/')
Explanation: Visualizing polynomial regression
Let's use matplotlib to visualize what a polynomial regression looks like on some real data.
End of explanation
sales = sales.sort(['sqft_living', 'price'])
Explanation: As in Week 3, we will use the sqft_living variable. For plotting purposes (connecting the dots), you'll need to sort by the values of sqft_living. For houses with identical square footage, we break the tie by their prices.
End of explanation
poly1_data = polynomial_sframe(sales['sqft_living'], 1)
poly1_data['price'] = sales['price'] # add price to the data since it's the target
Explanation: Let's start with a degree 1 polynomial using 'sqft_living' (i.e. a line) to predict 'price' and plot what it looks like.
End of explanation
model1 = graphlab.linear_regression.create(poly1_data, target = 'price', features = ['power_1'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(poly1_data['power_1'],poly1_data['price'],'.',
poly1_data['power_1'], model1.predict(poly1_data),'-')
Explanation: NOTE: for all the models in this notebook use validation_set = None to ensure that all results are consistent across users.
End of explanation
poly2_data = polynomial_sframe(sales['sqft_living'], 2)
my_features = poly2_data.column_names() # get the name of the features
poly2_data['price'] = sales['price'] # add price to the data since it's the target
model2 = graphlab.linear_regression.create(poly2_data, target = 'price', features = my_features, validation_set = None)
model2.get("coefficients")
plt.plot(poly2_data['power_1'],poly2_data['price'],'.',
poly2_data['power_1'], model2.predict(poly2_data),'-')
Explanation: Let's unpack that plt.plot() command. The first pair of SArrays we passed are the 1st power of sqft and the actual price we then ask it to print these as dots '.'. The next pair we pass is the 1st power of sqft and the predicted values from the linear model. We ask these to be plotted as a line '-'.
We can see, not surprisingly, that the predicted values all fall on a line, specifically the one with slope 280 and intercept -43579. What if we wanted to plot a second degree polynomial?
End of explanation |
9,007 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced Topics
Step1: Using scalar aggregates in filters
Step2: We could always compute some aggregate value from the table and use that in another expression, or we can use a data-derived aggregate in the filter. Take the average of a column for example
Step3: You can use this expression as a substitute for a scalar value in a filter, and the execution engine will combine everything into a single query rather than having to access Impala multiple times
Step4: Conditional aggregates
Suppose that we wish to filter using an aggregate computed conditional on some other expressions holding true. Using the TPC-H datasets, suppose that we want to filter customers based on the following criteria
Step5: In this particular case, filtering based on the conditional average o_totalprice by region requires creating a table view (similar to the self-join examples from earlier) that can be treated as a distinct table entity in the expression. This would not be required if we were computing a conditional statistic from some other table. So this is a little more complicated than some other cases would be
Step6: Once you've done this, you can use the conditional average in a filter expression
Step7: By looking at the table sizes before and after applying the filter you can see the relative size of the subset taken.
Step8: Or even group by year and compare before and after
Step9: "Existence" filters
Some filtering involves checking for the existence of a particular value in a column of another table, or amount the results of some value expression. This is common in many-to-many relationships, and can be performed in numerous different ways, but it's nice to be able to express it with a single concise statement and let Ibis compute it optimally.
Here's some examples
Step10: We introduce the any reduction
Step11: This is now a valid filter
Step12: For the second example, in which we want to find customers not having any open urgent orders, we write down the condition that they do have some first
Step13: Now, we can negate this condition and use it as a filter | Python Code:
import ibis
import os
hdfs_port = os.environ.get('IBIS_WEBHDFS_PORT', 50070)
hdfs = ibis.hdfs_connect(host='quickstart.cloudera', port=hdfs_port)
con = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',
hdfs_client=hdfs)
ibis.options.interactive = True
Explanation: Advanced Topics: Additional Filtering
The filtering examples we've shown to this point have been pretty simple, either comparisons between columns or fixed values, or set filter functions like isin and notin.
Ibis supports a number of richer analytical filters that can involve one or more of:
Aggregates computed from the same or other tables
Conditional aggregates (in SQL-speak these are similar to "correlated subqueries")
"Existence" set filters (equivalent to the SQL EXISTS and NOT EXISTS keywords)
Setup
End of explanation
table = con.table('functional_alltypes')
table.limit(5)
Explanation: Using scalar aggregates in filters
End of explanation
table.double_col.mean()
Explanation: We could always compute some aggregate value from the table and use that in another expression, or we can use a data-derived aggregate in the filter. Take the average of a column for example:
End of explanation
cond = table.bigint_col > table.double_col.mean()
expr = table[cond & table.bool_col].limit(5)
expr
Explanation: You can use this expression as a substitute for a scalar value in a filter, and the execution engine will combine everything into a single query rather than having to access Impala multiple times:
End of explanation
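To confirm that this really does collapse into one query, you can ask Ibis for the generated SQL; the exact API has varied between Ibis versions, but on an expression bound to the Impala connection something like the following should work.
print(expr.compile())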
region = con.table('tpch_region')
nation = con.table('tpch_nation')
customer = con.table('tpch_customer')
orders = con.table('tpch_orders')
fields_of_interest = [customer,
region.r_name.name('region'),
orders.o_totalprice,
orders.o_orderdate.cast('timestamp').name('odate')]
tpch = (region.join(nation, region.r_regionkey == nation.n_regionkey)
.join(customer, customer.c_nationkey == nation.n_nationkey)
.join(orders, orders.o_custkey == customer.c_custkey)
[fields_of_interest])
tpch.limit(5)
Explanation: Conditional aggregates
Suppose that we wish to filter using an aggregate computed conditional on some other expressions holding true. Using the TPC-H datasets, suppose that we want to filter customers based on the following criteria: Orders such that their amount exceeds the average amount for their sales region over the whole dataset. This can be computed any number of ways (such as joining auxiliary tables and filtering post-join)
Again, from prior examples, here are the joined up tables with all the customer data:
End of explanation
t2 = tpch.view()
conditional_avg = t2[(t2.region == tpch.region)].o_totalprice.mean()
Explanation: In this particular case, filtering based on the conditional average o_totalprice by region requires creating a table view (similar to the self-join examples from earlier) that can be treated as a distinct table entity in the expression. This would not be required if we were computing a conditional statistic from some other table. So this is a little more complicated than some other cases would be:
End of explanation
amount_filter = tpch.o_totalprice > conditional_avg
tpch[amount_filter].limit(10)
Explanation: Once you've done this, you can use the conditional average in a filter expression
End of explanation
tpch.count()
tpch[amount_filter].count()
Explanation: By looking at the table sizes before and after applying the filter you can see the relative size of the subset taken.
End of explanation
tpch.schema()
year = tpch.odate.year().name('year')
pre_sizes = tpch.group_by(year).size()
post_sizes = tpch[amount_filter].group_by(year).size().view()
percent = ((post_sizes['count'] / pre_sizes['count'].cast('double'))
.name('fraction'))
expr = (pre_sizes.join(post_sizes, pre_sizes.year == post_sizes.year)
[pre_sizes.year,
pre_sizes['count'].name('pre_count'),
post_sizes['count'].name('post_count'),
percent])
expr
Explanation: Or even group by year and compare before and after:
End of explanation
customer = con.table('tpch_customer')
orders = con.table('tpch_orders')
orders.limit(5)
Explanation: "Existence" filters
Some filtering involves checking for the existence of a particular value in a column of another table, or among the results of some value expression. This is common in many-to-many relationships, and can be performed in numerous different ways, but it's nice to be able to express it with a single concise statement and let Ibis compute it optimally.
Here's some examples:
Filter down to customers having at least one open order
Find customers having no open orders with 1-URGENT status
Find stores (in the stores table) having the same name as a vendor (in the vendors table).
We'll go ahead and solve the first couple of these problems using the TPC-H tables to illustrate the API:
End of explanation
has_open_orders = ((orders.o_orderstatus == 'O') &
(customer.c_custkey == orders.o_custkey)).any()
Explanation: We introduce the any reduction:
End of explanation
customer[has_open_orders].limit(10)
Explanation: This is now a valid filter:
End of explanation
has_open_urgent_orders = ((orders.o_orderstatus == 'O') &
(orders.o_orderpriority == '1-URGENT') &
(customer.c_custkey == orders.o_custkey)).any()
Explanation: For the second example, in which we want to find customers not having any open urgent orders, we write down the condition that they do have some first:
End of explanation
customer[-has_open_urgent_orders].count()
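# The third example from the list above (stores whose name matches some vendor) follows the
# same .any() pattern. The `stores`/`vendors` tables and their column names are illustrative
# assumptions -- they are not part of the TPC-H test data -- so the sketch is left commented out:
# stores = con.table('stores')
# vendors = con.table('vendors')
# name_matches = (stores.store_name == vendors.vendor_name).any()
# stores[name_matches].count()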
Explanation: Now, we can negate this condition and use it as a filter:
End of explanation |
9,008 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Filter of the 10 crimes with the most occurrences in April
Step1: All criminal occurrences in April
Step2: The 5 regions with the most occurrences
Step3: Above we can see that region 1 had the highest number of criminal occurrences
We can now look at these occurrences in more detail
Step4: An analysis of the 5 most common occurrences
Step5: Filter of the 10 times of day with the most occurrences in April
Step6: Filter of the 5 times of day with the most occurrences in region 1 (the region with the most occurrences in April)
Step7: Filter of the 10 neighborhoods with the most occurrences in April
Step8: The neighborhood with the highest number of occurrences in April was Jangurussú
We will now look in more detail at what these crimes were
Step9: The 5 most common neighborhoods in region 1
Step10: Analysis of the Barra do Ceará neighborhood | Python Code:
all_crime_tipos.head(10)
all_crime_tipos_top10 = all_crime_tipos.head(10)
all_crime_tipos_top10.plot(kind='barh', figsize=(12,6), color='#3f3fff')
plt.title('Top 10 crimes por tipo (Abr 2017)')
plt.xlabel('Número de crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Filter of the 10 crimes with the most occurrences in April
End of explanation
all_crime_tipos
group_df_abril = df_abril.groupby('CLUSTER')
crimes = group_df_abril['NATUREZA DA OCORRÊNCIA'].count()
crimes.plot(kind='barh', figsize=(10,7), color='#3f3fff')
plt.title('Número de crimes por região (Abr 2017)')
plt.xlabel('Número')
plt.ylabel('Região')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: All criminal occurrences in April
End of explanation
regioes = df_abril.groupby('CLUSTER').count()
grupo_de_regioes = regioes.sort_values('NATUREZA DA OCORRÊNCIA', ascending=False)
grupo_de_regioes['TOTAL'] = grupo_de_regioes.ID
top_5_regioes_qtd = grupo_de_regioes.TOTAL.head(6)
top_5_regioes_qtd.plot(kind='barh', figsize=(10,4), color='#3f3fff')
plt.title('Top 5 regiões com mais crimes')
plt.xlabel('Número de crimes')
plt.ylabel('Região')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: The 5 regions with the most occurrences
End of explanation
regiao_1_detalhe = df_abril[df_abril['CLUSTER'] == 1]
regiao_1_detalhe
Explanation: Above we can see that region 1 had the highest number of criminal occurrences
We can now look at these occurrences in more detail
End of explanation
crime_types = regiao_1_detalhe[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = regiao_1_detalhe[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
crimes_top_5 = all_crime_types.head(5)
crimes_top_5.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 crimes na região 1')
plt.xlabel('Número de crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: An analysis of the 5 most common occurrences
End of explanation
horas_mes = df_abril.HORA.value_counts()
horas_mes_top10 = horas_mes.head(10)
horas_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff')
plt.title('Crimes por hora (Abr 2017)')
plt.xlabel('Número de ocorrências')
plt.ylabel('Hora do dia')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Filter of the 10 times of day with the most occurrences in April
End of explanation
crime_hours = regiao_1_detalhe[['HORA']]
crime_hours_total = crime_hours.groupby('HORA').size()
crime_hours_counts = regiao_1_detalhe[['HORA']].groupby('HORA').sum()
crime_hours_counts['TOTAL'] = crime_hours_total
all_hours_types = crime_hours_counts.sort_values(by='TOTAL', ascending=False)
all_hours_types.head(5)
all_hours_types_top5 = all_hours_types.head(5)
all_hours_types_top5.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 crimes por hora na região 1')
plt.xlabel('Número de ocorrências')
plt.ylabel('Hora do dia')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Filter of the 5 times of day with the most occurrences in region 1 (the region with the most occurrences in April)
End of explanation
crimes_mes = df_abril.BAIRRO.value_counts()
crimes_mes_top10 = crimes_mes.head(10)
crimes_mes_top10.plot(kind='barh', figsize=(11,4), color='#3f3fff')
plt.title('Top 10 Bairros com mais crimes (Abr 2017)')
plt.xlabel('Número de ocorrências')
plt.ylabel('Bairro')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Filter of the 10 neighborhoods with the most occurrences in April
End of explanation
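# note: the variable name below is reused for a later neighborhood; this block actually filters the Jangurussú ('JANGURUSSU') neighborhood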
barra_do_ceara = df_abril[df_abril['BAIRRO'] == 'JANGURUSSU']
crime_types = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
all_crime_tipos_5 = all_crime_types.head(5)
all_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff')
plt.title('Top 5 crimes no Jangurussú')
plt.xlabel('Número de Crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: The neighborhood with the highest number of occurrences in April was Jangurussú
We will now look in more detail at what these crimes were
End of explanation
crime_types_bairro = regiao_1_detalhe[['BAIRRO']]
crime_type_total_bairro = crime_types_bairro.groupby('BAIRRO').size()
crime_type_counts_bairro = regiao_1_detalhe[['BAIRRO']].groupby('BAIRRO').sum()
crime_type_counts_bairro['TOTAL'] = crime_type_total_bairro
all_crime_types_bairro = crime_type_counts_bairro.sort_values(by='TOTAL', ascending=False)
crimes_top_5_bairro = all_crime_types_bairro.head(5)
crimes_top_5_bairro.plot(kind='barh', figsize=(11,3), color='#3f3fff')
plt.title('Top 5 bairros na região 1')
plt.xlabel('Quantidade')
plt.ylabel('Bairro')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: The 5 most common neighborhoods in region 1
End of explanation
barra_do_ceara = df_abril[df_abril['BAIRRO'] == 'BARRA DO CEARA']
crime_types = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']]
crime_type_total = crime_types.groupby('NATUREZA DA OCORRÊNCIA').size()
crime_type_counts = barra_do_ceara[['NATUREZA DA OCORRÊNCIA']].groupby('NATUREZA DA OCORRÊNCIA').sum()
crime_type_counts['TOTAL'] = crime_type_total
all_crime_types = crime_type_counts.sort_values(by='TOTAL', ascending=False)
all_crime_tipos_5 = all_crime_types.head(5)
all_crime_tipos_5.plot(kind='barh', figsize=(15,4), color='#3f3fff')
plt.title('Top 5 crimes na Barra do Ceará')
plt.xlabel('Número de Crimes')
plt.ylabel('Crime')
plt.tight_layout()
ax = plt.gca()
ax.xaxis.set_major_formatter(ticker.StrMethodFormatter('{x:,.0f}'))
plt.show()
Explanation: Analysis of the Barra do Ceará neighborhood
End of explanation |
9,009 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step2: Example 1b
Step3: Simulation 1
Step4: Simulation 2
Step5: Simulation 3
Step6: Simulation 4
Step7: Simulation 5
Step8: Create Plot | Python Code:
import contextlib
import time
import numpy as np
from scipy.optimize import curve_fit
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
from qutip import *
from qutip.ipynbtools import HTMLProgressBar
from qutip.nonmarkov.heom import HEOMSolver, BosonicBath, DrudeLorentzBath, DrudeLorentzPadeBath, BathExponent
def cot(x):
"""Vectorized cotangent of x."""
return 1. / np.tan(x)
@contextlib.contextmanager
def timer(label):
"""Simple utility for timing functions:
with timer("name"):
    ... code to time ...
"""
start = time.time()
yield
end = time.time()
print(f"{label}: {end - start}")
# Defining the system Hamiltonian
eps = .0 # Energy of the 2-level system.
Del = .2 # Tunnelling term
Hsys = 0.5 * eps * sigmaz() + 0.5 * Del* sigmax()
# Initial state of the system.
rho0 = basis(2,0) * basis(2,0).dag()
# System-bath coupling (Drude-Lorentz spectral density)
Q = sigmaz() # coupling operator
# Bath properties (see Shi et al., J. Chem. Phys. 130, 084105 (2009)):
gamma = 1. # cut off frequency
lam = 2.5 # coupling strength
T = 1. # in units where Boltzmann factor is 1
beta = 1./T
#HEOM parameters
NC = 13 # cut off parameter for the bath
Nk = 1 # number of exponents to retain in the Matsubara expansion of the correlation function
# Times to solve for
tlist = np.linspace(0, np.pi/Del, 600)
# Plot the spectral density
w = np.linspace(0, 5, 1000)
J = w * 2 * lam * gamma / ((gamma**2 + w**2))
# Plot the results
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
axes.plot(w, J, 'r', linewidth=2)
axes.set_xlabel(r'$\omega$', fontsize=28)
axes.set_ylabel(r'J', fontsize=28)
pass
# Define some operators with which we will measure the system
# 1,1 element of density matrix - corresonding to groundstate
P11p = basis(2, 0) * basis(2, 0).dag()
P22p = basis(2, 1) * basis(2, 1).dag()
# 1,2 element of density matrix - corresonding to coherence
P12p = basis(2, 0) * basis(2, 1).dag()
Explanation: Example 1b: Spin-Bath model (very strong coupling)
Introduction
The HEOM method solves the dynamics and steady state of a system and its environment, the latter of which is encoded in a set of auxiliary density matrices.
In this example we show the evolution of a single two-level system in contact with a single Bosonic environment. The properties of the system are encoded in Hamiltonian, and a coupling operator which describes how it is coupled to the environment.
The Bosonic environment is implicitly assumed to obey a particular Hamiltonian (see paper), the parameters of which are encoded in the spectral density, and subsequently the free-bath correlation functions.
In the example below we show how to model the overdamped Drude-Lorentz Spectral Density, commonly used with the HEOM. We show how to do this using the Matsubara, Pade and fitting decompositions, and compare their convergence.
This notebook shows a similar example to notebook 1a, but with much stronger coupling as discussed in Shi et al., J. Chem. Phys 130, 084105 (2009). Please refer to notebook 1a for a more detailed explanation.
Drude-Lorentz (overdamped) spectral density
The Drude-Lorentz spectral density is:
$$J_D(\omega)= \frac{2\omega\lambda\gamma}{{\gamma}^2 + \omega^2}$$
where $\lambda$ scales the coupling strength, and $\gamma$ is the cut-off frequency. We use the convention,
\begin{equation}
C(t) = \int_0^{\infty} d\omega \frac{J_D(\omega)}{\pi}[\coth(\beta\omega/2) \cos(\omega \tau) - i \sin(\omega \tau)]
\end{equation}
With the HEOM we must use an exponential decomposition:
\begin{equation}
C(t)=\sum_{k=0}^{k=\infty} c_k e^{-\nu_k t}
\end{equation}
As an example, the Matsubara decomposition of the Drude-Lorentz spectral density is given by:
\begin{equation}
\nu_k = \begin{cases}
\gamma & k = 0\\
{2 \pi k} / {\beta } & k \geq 1\\
\end{cases}
\end{equation}
\begin{equation}
c_k = \begin{cases}
\lambda \gamma (\cot(\beta \gamma / 2) - i) & k = 0\\
4 \lambda \gamma \nu_k / {(\nu_k^2 - \gamma^2)\beta } & k \geq 1\\
\end{cases}
\end{equation}
Note that in the above, and the following, we set $\hbar = k_\mathrm{B} = 1$.
End of explanation
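# A small illustrative check (not part of the original notebook): build the Matsubara
# coefficients c_k and frequencies nu_k directly from the equations above, reusing the
# lam, gamma, beta, Nk and cot() already defined in this notebook.
vk_check = [gamma] + [2 * np.pi * k / beta for k in range(1, Nk + 1)]
ck_check = [lam * gamma * (cot(gamma * beta / 2.0) - 1.0j)] + [4 * lam * gamma * vk / ((vk**2 - gamma**2) * beta) for vk in vk_check[1:]]
print(ck_check)
print(vk_check)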
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)
with timer("RHS construction time"):
bath = DrudeLorentzBath(Q, lam=lam, gamma=gamma, T=T, Nk=Nk)
HEOMMats = HEOMSolver(Hsys, bath, NC, options=options)
with timer("ODE solver time"):
resultMats = HEOMMats.run(rho0, tlist)
Explanation: Simulation 1: Matsubara decomposition, not using Ishizaki-Tanimura terminator
End of explanation
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)
with timer("RHS construction time"):
bath = DrudeLorentzBath(Q, lam=lam, gamma=gamma, T=T, Nk=Nk)
_, terminator = bath.terminator()
Ltot = liouvillian(Hsys) + terminator
HEOMMatsT = HEOMSolver(Ltot, bath, NC, options=options)
with timer("ODE solver time"):
resultMatsT = HEOMMatsT.run(rho0, tlist)
# Plot the results
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(8,8))
P11_mats = np.real(expect(resultMats.states, P11p))
axes.plot(tlist, np.real(P11_mats), 'b', linewidth=2, label="P11 (Matsubara)")
P11_matsT = np.real(expect(resultMatsT.states, P11p))
axes.plot(tlist, np.real(P11_matsT), 'b--', linewidth=2, label="P11 (Matsubara + Terminator)")
axes.set_xlabel(r't', fontsize=28)
axes.legend(loc=0, fontsize=12)
pass
Explanation: Simulation 2: Matsubara decomposition (including terminator)
End of explanation
# First, compare Matsubara and Pade decompositions
matsBath = DrudeLorentzBath(Q, lam=lam, gamma=gamma, T=T, Nk=Nk)
padeBath = DrudeLorentzPadeBath(Q, lam=lam, gamma=gamma, T=T, Nk=Nk)
# We will compare against a summation of {lmaxmats} Matsubara terms
lmaxmats = 15000
exactBath = DrudeLorentzBath(Q, lam=lam, gamma=gamma, T=T, Nk=lmaxmats, combine=False)
# Real and imag. parts of the correlation functions:
def CR(bath, t):
result = 0
for exp in bath.exponents:
if exp.type == BathExponent.types['R'] or exp.type == BathExponent.types['RI']:
result += exp.ck * np.exp(-exp.vk * t)
return result
def CI(bath, t):
result = 0
for exp in bath.exponents:
if exp.type == BathExponent.types['I']:
result += exp.ck * np.exp(exp.vk * t)
if exp.type == BathExponent.types['RI']:
result += exp.ck2 * np.exp(exp.vk * t)
return result
fig, (ax1, ax2) = plt.subplots(ncols=2, sharey=True, figsize=(16, 8))
ax1.plot(tlist, CR(exactBath, tlist), "r", linewidth=2, label=f"Mats (Nk={lmaxmats})")
ax1.plot(tlist, CR(matsBath, tlist), "g--", linewidth=2, label=f"Mats (Nk={Nk})")
ax1.plot(tlist, CR(padeBath, tlist), "b--", linewidth=2, label=f"Pade (Nk={Nk})")
ax1.set_xlabel(r't', fontsize=28)
ax1.set_ylabel(r"$C_R(t)$", fontsize=28)
ax1.legend(loc=0, fontsize=12)
tlist2=tlist[0:50]
ax2.plot(tlist2, np.abs(CR(matsBath, tlist2) - CR(exactBath, tlist2)), "g", linewidth=2, label=f"Mats Error")
ax2.plot(tlist2, np.abs(CR(padeBath, tlist2) - CR(exactBath, tlist2)), "b--", linewidth=2, label=f"Pade Error")
ax2.set_xlabel(r't', fontsize=28)
ax2.legend(loc=0, fontsize=12)
pass
options = Options(nsteps=15000, store_states=True, rtol=1e-14, atol=1e-14)
with timer("RHS construction time"):
bath = DrudeLorentzPadeBath(Q, lam=lam, gamma=gamma, T=T, Nk=Nk)
HEOMPade = HEOMSolver(Hsys, bath, NC, options=options)
with timer("ODE solver time"):
resultPade = HEOMPade.run(rho0, tlist)
# Plot the results
fig, axes = plt.subplots(figsize=(8,8))
axes.plot(tlist, np.real(P11_mats), 'b', linewidth=2, label="P11 (Matsubara)")
axes.plot(tlist, np.real(P11_matsT), 'b--', linewidth=2, label="P11 (Matsubara + Terminator)")
P11_pade = np.real(expect(resultPade.states, P11p))
axes.plot(tlist, np.real(P11_pade), 'r', linewidth=2, label="P11 (Pade)")
axes.set_xlabel(r't', fontsize=28)
axes.legend(loc=0, fontsize=12)
pass
Explanation: Simulation 3: Pade decomposition
End of explanation
# Fitting data
tlist_fit = np.linspace(0, 6, 10000)
corrRana = CR(exactBath, tlist_fit)
# Fitting procedure
def wrapper_fit_func(x, N, *args):
a, b = list(args[0][:N]), list(args[0][N:2*N])
return fit_func(x, a, b, N)
# actual fitting function
def fit_func(x, a, b, N):
tot = 0
for i in range(N):
tot += a[i]*np.exp(b[i]*x)
return tot
def fitter(ans, tlist, k):
# the actual computing of fit
popt = []
pcov = []
# tries to fit for k exponents
for i in range(k):
params_0 = [0]*(2*(i+1))
upper_a = abs(max(ans, key = abs))*10
#sets initial guess
guess = []
aguess = [ans[0]]*(i+1)#[max(ans)]*(i+1)
bguess = [0]*(i+1)
guess.extend(aguess)
guess.extend(bguess)
# sets bounds
# a's = anything , b's negative
# sets lower bound
b_lower = []
alower = [-upper_a]*(i+1)
blower = [-np.inf]*(i+1)
b_lower.extend(alower)
b_lower.extend(blower)
# sets higher bound
b_higher = []
ahigher = [upper_a]*(i+1)
bhigher = [0]*(i+1)
b_higher.extend(ahigher)
b_higher.extend(bhigher)
param_bounds = (b_lower, b_higher)
p1, p2 = curve_fit(lambda x, *params_0: wrapper_fit_func(x, i+1, params_0),
tlist, ans, p0=guess, sigma=([0.01] * len(tlist)),
bounds = param_bounds, maxfev = 1e8)
popt.append(p1)
pcov.append(p2)
return popt
# function that evaluates values with fitted params at given inputs
def checker(tlist, vals):
y = []
for i in tlist:
y.append(wrapper_fit_func(i, int(len(vals)/2), vals))
return y
# Fits of the real part with up to 3 exponents
k = 3
popt1 = fitter(corrRana, tlist_fit, k)
for i in range(k):
y = checker(tlist_fit, popt1[i])
plt.plot(tlist_fit, corrRana, tlist_fit, y)
plt.show()
# Set the exponential coefficients from the fit parameters
ckAR1 = list(popt1[k-1])[:len(list(popt1[k-1]))//2]
ckAR = [complex(x) for x in ckAR1]
vkAR1 = list(popt1[k-1])[len(list(popt1[k-1]))//2:]
vkAR = [complex(-x) for x in vkAR1]
# Imaginary part: use analytical value
ckAI = [complex(lam * gamma * (-1.0))]
vkAI = [complex(gamma)]
options = Options(nsteps=1500, store_states=True, rtol=1e-12, atol=1e-12)
with timer("RHS construction time"):
bath = BosonicBath(Q, ckAR, vkAR, ckAI, vkAI)
HEOMFit = HEOMSolver(Hsys, bath, NC, options=options)
with timer("ODE solver time"):
resultFit = HEOMFit.run(rho0, tlist)
Explanation: Simulation 4: Fitting approach
End of explanation
DL = ("2 * pi * 2.0 * {lam} / (pi * {gamma} * {beta}) if (w==0) "
"else 2 * pi * (2.0 * {lam} * {gamma} * w / (pi * (w**2 + {gamma}**2))) "
"* ((1 / (exp(w * {beta}) - 1)) + 1)").format(gamma=gamma, beta=beta, lam=lam)
options = Options(nsteps=15000, store_states=True, rtol=1e-12, atol=1e-12)
resultBR = brmesolve(Hsys, rho0, tlist, a_ops=[[sigmaz(), DL]], options=options)
qsave(resultMats, 'data/resultMatsOD')
qsave(resultMatsT, 'data/resultMatsTOD')
qsave(resultPade, 'data/resultPadeOD')
qsave(resultFit, 'data/resultFitOD')
qsave(resultBR, 'data/resultBROD')
Explanation: Simulation 5: Bloch-Redfield
End of explanation
with contextlib.redirect_stdout(None):
resultMats = qload('data/resultMatsOD')
resultMatsT = qload('data/resultMatsTOD')
resultPade = qload('data/resultPadeOD')
resultFit = qload('data/resultFitOD')
resultBR = qload('data/resultBROD')
matplotlib.rcParams['figure.figsize'] = (7, 5)
matplotlib.rcParams['axes.titlesize'] = 25
matplotlib.rcParams['axes.labelsize'] = 30
matplotlib.rcParams['xtick.labelsize'] = 28
matplotlib.rcParams['ytick.labelsize'] = 28
matplotlib.rcParams['legend.fontsize'] = 28
matplotlib.rcParams['axes.grid'] = False
matplotlib.rcParams['savefig.bbox'] = 'tight'
matplotlib.rcParams['lines.markersize'] = 5
matplotlib.rcParams['font.family'] = 'STIXgeneral'
matplotlib.rcParams['mathtext.fontset'] = 'stix'
matplotlib.rcParams["font.serif"] = "STIX"
matplotlib.rcParams['text.usetex'] = False
# Calculate expectation values in the bases
P11_mats = np.real(expect(resultMats.states, P11p))
P11_matsT = np.real(expect(resultMatsT.states, P11p))
P11_pade = np.real(expect(resultPade.states, P11p))
P11_fit = np.real(expect(resultFit.states, P11p))
P11_br = np.real(expect(resultBR.states, P11p))
# Plot the results
fig, axes = plt.subplots(1, 1, sharex=True, figsize=(12,7))
plt.yticks([0.99,1.0],[0.99,1])
axes.plot(tlist, np.real(P11_mats), 'b', linewidth=2, label=r"Matsubara $N_k=%d$" % Nk)
axes.plot(tlist, np.real(P11_matsT), 'g--', linewidth=3, label=r"Matsubara $N_k=%d$ & terminator" % Nk)
axes.plot(tlist, np.real(P11_pade), 'y-.', linewidth=2, label=r"Padé $N_k=%d$" % Nk)
# axes.plot(tlist, np.real(P11_br), 'y-.', linewidth=10, label="Bloch Redfield")
axes.plot(tlist, np.real(P11_fit), 'r',dashes=[3,2], linewidth=2, label=r"Fit $N_f = 3$, $N_k=15 \times 10^3$")
axes.locator_params(axis='y', nbins=6)
axes.locator_params(axis='x', nbins=6)
axes.set_ylabel(r'$\rho_{11}$',fontsize=30)
axes.set_xlabel(r'$t\;\gamma$',fontsize=30)
axes.set_xlim(tlist[0],tlist[-1])
axes.set_ylim(0.98405,1.0005)
axes.legend(loc=0)
#fig.savefig("figures/fig2.pdf")
pass
from qutip.ipynbtools import version_table
version_table()
Explanation: Create Plot
End of explanation |
9,010 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Variational Autoencoder to Generate Digital Numbers
Variational Autoencoders (VAEs) are very popular approaches to unsupervised learning of complicated distributions. In this example, we are going to use VAE to generate digital numbers.
In a standard autoencoder, we have an encoder network that takes in the original image and encodes it into a vector of latent variables, and a decoder network that takes in the latent vector and outputs a generated image that we hope will look similar to the original image.
In a VAE, we constrain the latent variables to be unit Gaussian, so that we can sample latent variables from a unit Gaussian distribution and then use the decoder network to generate images.
So, we get the architecture above. Instead of generating the latent variables directly, the encoder network outputs a mean vector and a variance (or log-variance) vector, and the decoder takes the sampled latent vector to generate the output image. We also add a penalty on the KL divergence between the latent distribution and a unit Gaussian distribution.
Define the Model
Step1: We are going to use a simple cnn network as our encoder and decoder. In decoder, we use SpatialFullConvolution (aka deconvolution or convolution transpose) layer to upsample the image to the original resolution.
Step2: Get the MNIST Dataset
Step3: Define our Training Objective
The size_average parameter in BCECriterion should be False, because when size_average is True, the negative log-likelihood computed in BCECriterion is averaged over each observation as well as over dimensions, while in the KLDCriterion the KL divergence is summed over each observation, so the loss would be wrong.
Step4: Compile the Model
Step5: Start Training
This step may take a while depending on your system.
Step6: Let's show the learning curve.
Step7: You can also open tensorboard to see this curve.
Sample Some Images from the Decoder
Step8: Explore the Latent Space | Python Code:
# a bit of setup
import numpy as np
from bigdl.nn.criterion import *
from bigdl.dataset import mnist
from zoo.pipeline.api.keras.layers import *
from zoo.pipeline.api.keras.models import Model
from zoo.pipeline.api.keras.utils import *
import datetime as dt
IMAGE_SIZE = 784
IMAGE_ROWS = 28
IMAGE_COLS = 28
IMAGE_CHANNELS = 1
latent_size = 2
from zoo.common.nncontext import *
sc = init_nncontext("Variational Autoencoder Example")
Explanation: Using Variational Autoencoder to Generate Digital Numbers
Variational Autoencoders (VAEs) are very popular approaches to unsupervised learning of complicated distributions. In this example, we are going to use VAE to generate digital numbers.
In a standard autoencoder, we have an encoder network that takes in the original image and encodes it into a vector of latent variables, and a decoder network that takes in the latent vector and outputs a generated image that we hope will look similar to the original image.
In a VAE, we constrain the latent variables to be unit Gaussian, so that we can sample latent variables from a unit Gaussian distribution and then use the decoder network to generate images.
So, we get the architecture above. Instead of generating the latent variables directly, the encoder network outputs a mean vector and a variance (or log-variance) vector, and the decoder takes the sampled latent vector to generate the output image. We also add a penalty on the KL divergence between the latent distribution and a unit Gaussian distribution.
Define the Model
End of explanation
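# Illustrative numpy-only sketch (not part of the model below) of the reparameterization
# trick that the Gaussian sampling step relies on: z = mean + exp(0.5 * log_var) * eps,
# with eps drawn from a standard normal distribution.
z_mean_demo = np.zeros(latent_size)
z_log_var_demo = np.zeros(latent_size)
eps_demo = np.random.randn(latent_size)
z_demo = z_mean_demo + np.exp(0.5 * z_log_var_demo) * eps_demo
print(z_demo)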
def get_encoder(latent_size):
input0 = Input(shape=(IMAGE_CHANNELS, IMAGE_COLS, IMAGE_ROWS))
#CONV
conv1 = Convolution2D(16, 5, 5, input_shape=(IMAGE_CHANNELS, IMAGE_ROWS, IMAGE_COLS), border_mode='same',
subsample=(2, 2))(input0)
relu1 = LeakyReLU()(conv1)
conv2 = Convolution2D(32, 5, 5, input_shape=(16, 14, 14), border_mode='same', subsample=(2, 2))(relu1)
relu2 = LeakyReLU()(conv2) # 32,7,7
reshape = Flatten()(relu2)
#fully connected to output mean vector and log-variance vector
reshape = Reshape([7*7*32])(relu2)
z_mean = Dense(latent_size)(reshape)
z_log_var = Dense(latent_size)(reshape)
model = Model([input0],[z_mean,z_log_var])
return model
def get_decoder(latent_size):
input0 = Input(shape=(latent_size,))
reshape0 = Dense(1568)(input0)
reshape1 = Reshape((32, 7, 7))(reshape0)
relu0 = Activation('relu')(reshape1)
# use resize and conv layer instead of deconv layer
resize1 = ResizeBilinear(14,14)(relu0)
deconv1 = Convolution2D(16, 5, 5, subsample=(1, 1), activation='relu', border_mode = 'same', input_shape=(32, 14, 14))(resize1)
resize2 = ResizeBilinear(28,28)(deconv1)
deconv2 = Convolution2D(1, 5, 5, subsample=(1, 1), input_shape=(16, 28, 28), border_mode = 'same')(resize2)
outputs = Activation('sigmoid')(deconv2)
model = Model([input0],[outputs])
return model
def get_autoencoder(latent_size):
input0 = Input(shape=(IMAGE_CHANNELS, IMAGE_COLS, IMAGE_ROWS))
encoder = get_encoder(latent_size)(input0)
sample = GaussianSampler()(encoder)
decoder_model = get_decoder(latent_size)
decoder = decoder_model(sample)
model = Model([input0],[encoder,decoder])
return model,decoder_model
autoencoder,decoder_model = get_autoencoder(2)
Explanation: We are going to use a simple cnn network as our encoder and decoder. In decoder, we use SpatialFullConvolution (aka deconvolution or convolution transpose) layer to upsample the image to the original resolution.
End of explanation
def get_mnist(sc, mnist_path):
(train_images, train_labels) = mnist.read_data_sets(mnist_path, "train")
train_images = np.reshape(train_images, (60000, 1, 28, 28))
rdd_train_images = sc.parallelize(train_images)
rdd_train_sample = rdd_train_images.map(lambda img:
Sample.from_ndarray(
(img > 128) * 1.0,
[(img > 128) * 1.0, (img > 128) * 1.0]))
return rdd_train_sample
mnist_path = "datasets/mnist" # please replace this
train_data = get_mnist(sc, mnist_path)
# (train_images, train_labels) = mnist.read_data_sets(mnist_path, "train")
Explanation: Get the MNIST Dataset
End of explanation
batch_size = 100
criterion = ParallelCriterion()
criterion.add(KLDCriterion(), 1.0)
criterion.add(BCECriterion(size_average=False), 1.0/batch_size)
Explanation: Define our Training Objective
The size_average parameter in BCECriterion should be False, because when size_average is True, the negative log-likelihood computed in BCECriterion is averaged over each observation as well as over dimensions, while in the KLDCriterion the KL divergence is summed over each observation, so the loss would be wrong.
End of explanation
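# Toy numpy illustration (not part of the training code) of the scaling issue described above:
# averaging the reconstruction term over pixels as well as observations would shrink it
# relative to the per-observation KL term.
p_demo = np.clip(np.random.rand(batch_size, IMAGE_SIZE), 1e-7, 1 - 1e-7)
x_demo = (np.random.rand(batch_size, IMAGE_SIZE) > 0.5) * 1.0
bce_demo = -(x_demo * np.log(p_demo) + (1 - x_demo) * np.log(1 - p_demo))
print(bce_demo.mean())                # averaged over observations *and* pixels (size_average=True)
print(bce_demo.sum() / batch_size)    # summed per observation, then averaged over the batch (as used above)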
autoencoder.compile(optimizer=Adam(0.001), loss=criterion)
import os
if not os.path.exists("./log"):
os.makedirs("./log")
app_name='vae-digits-'+dt.datetime.now().strftime("%Y%m%d-%H%M%S")
autoencoder.set_tensorboard(log_dir='./log/',app_name=app_name)
print("Saving logs to ", app_name)
Explanation: Compile the Model
End of explanation
autoencoder.fit(x=train_data,
batch_size=batch_size,
nb_epoch = 6)
Explanation: Start Training
This step may take a while depending on your system.
End of explanation
import matplotlib
matplotlib.use('Agg')
%pylab inline
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import numpy as np
import datetime as dt
train_summary = TrainSummary('./log/', app_name)
loss = np.array(train_summary.read_scalar("Loss"))
plt.figure(figsize = (12,12))
plt.plot(loss[:,0],loss[:,1],label='loss')
plt.xlim(0,loss.shape[0]+10)
plt.grid(True)
plt.title("loss")
Explanation: Let's show the learning curve.
End of explanation
from matplotlib.pyplot import imshow
img = np.column_stack([decoder_model.forward(np.random.randn(1,2)).reshape(28,28) for s in range(8)])
imshow(img, cmap='gray')
Explanation: You can also open tensorboard to see this curve.
Sample Some Images from the Decoder
End of explanation
# This code snippet references this keras example (https://github.com/keras-team/keras/blob/master/examples/variational_autoencoder.py)
from scipy.stats import norm
# display a 2D manifold of the digits
n = 15 # figure with 15x15 digits
digit_size = 28
figure = np.zeros((digit_size * n, digit_size * n))
# linearly spaced coordinates on the unit square were transformed through the inverse CDF (ppf) of the Gaussian
# to produce values of the latent variables z, since the prior of the latent space is Gaussian
grid_x = norm.ppf(np.linspace(0.05, 0.95, n))
grid_y = norm.ppf(np.linspace(0.05, 0.95, n))
for i, yi in enumerate(grid_x):
for j, xi in enumerate(grid_y):
z_sample = np.array([[xi, yi]])
x_decoded = decoder_model.forward(z_sample)
digit = x_decoded.reshape(digit_size, digit_size)
figure[i * digit_size: (i + 1) * digit_size,
j * digit_size: (j + 1) * digit_size] = digit
plt.figure(figsize=(10, 10))
plt.imshow(figure, cmap='Greys_r')
plt.show()
Explanation: Explore the Latent Space
End of explanation |
9,011 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute all-to-all connectivity in sensor space
Computes the Phase Lag Index (PLI) between all gradiometers and shows the
connectivity in 3D using the helmet geometry. The left visual stimulation data
are used, which produce strong connectivity in the right occipital sensors.
Step1: Set parameters | Python Code:
# Author: Martin Luessi <[email protected]>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.connectivity import spectral_connectivity
from mne.datasets import sample
from mne.viz import plot_sensors_connectivity
print(__doc__)
Explanation: Compute all-to-all connectivity in sensor space
Computes the Phase Lag Index (PLI) between all gradiometers and shows the
connectivity in 3D using the helmet geometry. The left visual stimulation data
are used, which produce strong connectivity in the right occipital sensors.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
# Create epochs for the visual condition
event_id, tmin, tmax = 3, -0.2, 1.5 # need a long enough epoch for 5 cycles
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))
# Compute connectivity for band containing the evoked response.
# We exclude the baseline period
fmin, fmax = 3., 9.
sfreq = raw.info['sfreq'] # the sampling frequency
tmin = 0.0 # exclude the baseline period
epochs.load_data().pick_types(meg='grad') # just keep MEG and no EOG now
con, freqs, times, n_epochs, n_tapers = spectral_connectivity(
epochs, method='pli', mode='multitaper', sfreq=sfreq, fmin=fmin, fmax=fmax,
faverage=True, tmin=tmin, mt_adaptive=False, n_jobs=1)
# Now, visualize the connectivity in 3D
plot_sensors_connectivity(epochs.info, con[:, :, 0])
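# Optional, illustrative follow-up (not in the original example): report the single strongest
# PLI connection found between any two gradiometers.
import numpy as np
con_matrix = con[:, :, 0]
i_max, j_max = np.unravel_index(np.argmax(con_matrix), con_matrix.shape)
print('Strongest connection: %s - %s (PLI = %.3f)' % (epochs.ch_names[i_max], epochs.ch_names[j_max], con_matrix[i_max, j_max]))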
Explanation: Set parameters
End of explanation |
9,012 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
HyperParameter Tuning
keras.wrappers.scikit_learn
Example adapted from
Step1: Data Preparation
Step2: Build Model
Step3: GridSearch HyperParameters | Python Code:
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras.wrappers.scikit_learn import KerasClassifier
from keras import backend as K
from sklearn.model_selection import GridSearchCV
Explanation: HyperParameter Tuning
keras.wrappers.scikit_learn
Example adapted from: https://github.com/fchollet/keras/blob/master/examples/mnist_sklearn_wrapper.py
Problem:
Builds simple CNN models on MNIST and uses sklearn's GridSearchCV to find the best model
End of explanation
nb_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
# load training data and do basic data normalization
(X_train, y_train), (X_test, y_test) = mnist.load_data()
if K.image_dim_ordering() == 'th':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
# convert class vectors to binary class matrices
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)
Explanation: Data Preparation
End of explanation
def make_model(dense_layer_sizes, filters, kernel_size, pool_size):
'''Creates model comprised of 2 convolutional layers followed by dense layers
dense_layer_sizes: List of layer sizes. This list has one number for each layer
filters: Number of convolutional filters in each convolutional layer
kernel_size: Convolutional kernel size
pool_size: Size of pooling area for max pooling
'''
model = Sequential()
model.add(Conv2D(filters, (kernel_size, kernel_size),
padding='valid', input_shape=input_shape))
model.add(Activation('relu'))
model.add(Conv2D(filters, (kernel_size, kernel_size)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(pool_size, pool_size)))
model.add(Dropout(0.25))
model.add(Flatten())
for layer_size in dense_layer_sizes:
model.add(Dense(layer_size))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
return model
dense_size_candidates = [[32], [64], [32, 32], [64, 64]]
my_classifier = KerasClassifier(make_model, batch_size=32)
Explanation: Build Model
End of explanation
validator = GridSearchCV(my_classifier,
param_grid={'dense_layer_sizes': dense_size_candidates,
# 'epochs' is available for tuning even when not
# an argument to the model-building function
'epochs': [3, 6],
'filters': [8],
'kernel_size': [3],
'pool_size': [2]},
scoring='neg_log_loss',
n_jobs=1)
validator.fit(X_train, y_train)
print('The parameters of the best model are: ')
print(validator.best_params_)
# validator.best_estimator_ returns sklearn-wrapped version of best model.
# validator.best_estimator_.model returns the (unwrapped) keras model
best_model = validator.best_estimator_.model
metric_names = best_model.metrics_names
metric_values = best_model.evaluate(X_test, y_test)
for metric, value in zip(metric_names, metric_values):
print(metric, ': ', value)
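# Optional, illustrative extra step: GridSearchCV keeps the scores of every parameter
# combination tried in its standard cv_results_ attribute, not just the best one.
import pandas as pd
cv_results = pd.DataFrame(validator.cv_results_)
print(cv_results[['params', 'mean_test_score', 'rank_test_score']])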
Explanation: GridSearch HyperParameters
End of explanation |
9,013 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Concatenated all 4 databases into one called "MyLA311_All_Requests.csv"
Step1: Parsed the merged database for all empty coordinates and removed them
Step2: Splitting the CreatedDate column into two
Step3: Converting Date to date format and creating a new column for week numbers based on that
Step4: New CSV for dataframe with week number column added on called "311_week_number.csv"
Step5: Reading "311_week_number.csv" to check if task was accomplished | Python Code:
fifteen = pd.read_csv("MyLA311_Service_Request_Data_2015.csv", low_memory = False)
sixteen = pd.read_csv("MyLA311_Service_Request_Data_2016.csv", low_memory = False)
seventeen = pd.read_csv("MyLA311_Service_Request_Data_2017.csv", low_memory = False)
eighteen = pd.read_csv("MyLA311_Service_Request_Data_2018.csv", low_memory = False)
all_311_requests = pd.concat([fifteen, sixteen, seventeen, eighteen], axis=0).sort_index()
all_311_requests.to_csv("MyLA311_All_Requests.csv", encoding = 'utf-8', index = False)
Explanation: Concatenated all 4 databases into one called "MyLA311_All_Requests.csv"
End of explanation
my_311 = pd.read_csv("MyLA311_All_Requests.csv",sep=',', low_memory = False)
my_311['Longitude'].replace('', np.nan, inplace=True)
my_311.dropna(subset=['Longitude'], inplace=True)
my_311.to_csv('311_parsed_coordinates.csv', encoding = 'utf-8', index = False)
Explanation: Parsed the merged database for all empty coordinates and removed them
End of explanation
import pandas as pd
import numpy as np
import datetime
df = pd.read_csv('311_parsed_coordinates.csv')
df[['Created Date','Created Time']] = df.CreatedDate.str.split(expand=True)
df
Explanation: Splitting the CreatedDate column into two: one for the date, one for the time
End of explanation
df['Created Date'] = pd.to_datetime(df['Created Date'], errors='coerce')
df.sort_values('Created Date', inplace = True)
df['Week Number'] = df['Created Date'].dt.week
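# note: on newer pandas releases Series.dt.week is deprecated; df['Created Date'].dt.isocalendar().week is the replacement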
df
Explanation: Converting Date to date format and creating a new column for week numbers based on that
End of explanation
df.to_csv('311_week_number.csv', encoding = 'utf-8', index = False)
Explanation: New CSV for dataframe with week number column added on called "311_week_number.csv"
End of explanation
cols = ['RequestType', 'RequestSource', 'Address', 'Latitude' , 'Longitude', 'CD', 'Created Date', 'Created Time', 'Week Number']
data = pd.read_csv("311_week_number.csv", usecols = cols, low_memory = False)
data
df = data.dropna()
df
data['Created Date'] = pd.to_datetime(data['Created Date'], errors='coerce')
data['Created Year'] = data['Created Date'].dt.year
data.to_csv('311_parsed.csv', encoding = 'utf-8', index = False)
Explanation: Reading "311_week_number.csv" to check if task was accomplished
End of explanation |
9,014 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis
When you visualize your network, you also want to analyze it. In this section, you can learn basic analysis methods for networks. The methods used in this section are provided by Python libraries such as igraph and networkx, so we will also use them to analyze the network; if you want more detail about these examples or further information, please look at their documentation.
Further Information about igraph
igraph
Step1: Global Network Analysis
In this example, we will use "igraph" to analyze global network propety.
"igraph" are the very popular and useful package and you can analyze your own network by this.
And py2cytoscape can convert their own network to igraph object. So, first, we have to convert cytoscape network object to igraph object.
Step2: Density
Calculates the density of the graph.
Further information
Step3: Transitivity
There are three methods to calculate the transitivity in igraph. In the following methods, we will use "transitivity_undirected" method.
Method
Step4: community detection
Finds the community structure of the graph according to the algorithm of Clauset et al based on the greedy optimization of modularity.
http
Step5: Node Analysis
Closeness
Calculates the closeness centralities of given vertices in a graph.
http
Step6: Degree
indegree
Returns the in-degrees in a list. source code
http
Step7: PageRank
Calculates the Google PageRank values of a graph.
http
Step8: community detection
Finds the community structure of the graph according to the algorithm of Clauset et al based on the greedy optimization of modularity.
http
Step9: Edge Analysis
EdgeBetweenness
Calculates or estimates the edge betweennesses in a graph.
http
Step10: community detection
Community structure detection based on the betweenness of the edges in the network.
http | Python Code:
# import data from url
from py2cytoscape.data.cyrest_client import CyRestClient
from IPython.display import Image
# Create REST client for Cytoscape
cy = CyRestClient()
# Reset current session for fresh start
cy.session.delete()
# Load a sample network
network = cy.network.create_from('http://chianti.ucsd.edu/~kono/data/galFiltered.sif')
# Apply layout to the cytoscape network object
cy.layout.apply(network = network)
# Show it!!
network_png = network.get_png()
Image(network_png)
Explanation: Analysis
When you visualize your network, you also want to analyze it. In this section, you can learn basic analysis methods for networks. The methods used in this section are provided by Python libraries such as igraph and networkx, so we will also use them to analyze the network; if you want more detail about these examples or further information, please look at their documentation.
Further Information about igraph
igraph : http://igraph.org
Python-igraph : http://igraph.org/python/
networkx : https://networkx.github.io
Table of contents
Global Network Analysis
Density
Transivity
community detection
Node Analysis
Closeness
Degree
PageRank
community detection
Edge Analysis
EdgeBetweenness
community detection
Network Data Preparation
To prepare network analysis, let's load network data.
End of explanation
# convert cytoscape network object to igraph object
import igraph as ig
import py2cytoscape.util.util_igraph as util_ig
# convert cytoscape object to igraph object
g = util_ig.to_igraph(network.to_json())
Explanation: Global Network Analysis
In this example, we will use "igraph" to analyze global network properties.
"igraph" is a very popular and useful package, and you can analyze your own network with it.
py2cytoscape can convert a Cytoscape network to an igraph object, so first we have to convert the Cytoscape network object to an igraph object.
End of explanation
density = g.density()
print(density)
Explanation: Density
Calculates the density of the graph.
Further information : http://igraph.org/python/doc/igraph.GraphBase-class.html#density
End of explanation
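# Quick sanity check of the value above (an illustrative addition, not in the original notebook):
# density = |E| / (|V| * (|V| - 1)) for a directed graph without loops, and twice that if undirected.
n_vertices, n_edges = g.vcount(), g.ecount()
manual_density = float(n_edges) / (n_vertices * (n_vertices - 1)) if g.is_directed() else 2.0 * n_edges / (n_vertices * (n_vertices - 1))
print(manual_density)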
transitivity = g.transitivity_undirected()
print(transitivity)
Explanation: Transitivity
There are three methods to calculate the transitivity in igraph. In the following, we will use the "transitivity_undirected" method.
Method : transitivity_avglocal_undirected
Calculates the average of the vertex transitivities of the graph.
http://igraph.org/python/doc/igraph.GraphBase-class.html#transitivity_avglocal_undirected
Method : transitivity_local_undirected
Calculates the local transitivity (clustering coefficient) of the given vertices in the graph.
http://igraph.org/python/doc/igraph.GraphBase-class.html#transitivity_local_undirected
Method : transitivity_undirected
Calculates the global transitivity (clustering coefficient) of the graph.
http://igraph.org/python/doc/igraph.GraphBase-class.html#transitivity_undirected
End of explanation
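# Illustrative comparison of the three variants listed above (not in the original notebook);
# the local and average-local variants are computed on the undirected view of the graph.
g_und = g.as_undirected()
print(g_und.transitivity_undirected())
print(g_und.transitivity_avglocal_undirected())
print(g_und.transitivity_local_undirected()[:5])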
# If you want to use this method, you have to use the non-multiple-edges graph object.
#community_fastgreedy = g.community_fastgreedy()
#print(community_fastgreedy)
Explanation: community detection
Finds the community structure of the graph according to the algorithm of Clauset et al based on the greedy optimization of modularity.
http://igraph.org/python/doc/igraph.GraphBase-class.html#community_fastgreedy
End of explanation
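# Illustrative workaround (not in the original notebook) for the multiple-edge limitation noted
# above: collapse parallel edges and drop directions first, then run the greedy algorithm.
g_simple = g.as_undirected().simplify()
dendrogram = g_simple.community_fastgreedy()
clusters = dendrogram.as_clustering()
print(len(clusters))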
closeness = g.closeness()
# Show 10 results of node closeness
print(closeness[0:9])
Explanation: Node Analysis
Closeness
Calculates the closeness centralities of given vertices in a graph.
http://igraph.org/python/doc/igraph.GraphBase-class.html#closeness
End of explanation
indegree = g.indegree()
outdegree = g.outdegree()
# Show 10 results of node degree
print(indegree[0:9])
print(outdegree[0:9])
Explanation: Degree
indegree
Returns the in-degrees in a list. source code
http://igraph.org/python/doc/igraph.Graph-class.html#indegree
outdegree
Returns the out-degrees in a list.
http://igraph.org/python/doc/igraph.Graph-class.html#outdegree
End of explanation
pagerank = g.pagerank()
# Show 10 results of node degree
print(pagerank[0:9])
Explanation: PageRank
Calculates the Google PageRank values of a graph.
http://igraph.org/python/doc/igraph.Graph-class.html#pagerank
End of explanation
# If you want to use this method, you have to use the non-multiple-edges graph object.
#community_fastgreedy = g.community_fastgreedy()
#print(community_fastgreedy)
Explanation: community detection
Finds the community structure of the graph according to the algorithm of Clauset et al based on the greedy optimization of modularity.
http://igraph.org/python/doc/igraph.GraphBase-class.html#community_fastgreedy
End of explanation
edge_betweenness = g.edge_betweenness()
print(edge_betweenness[0:9])
Explanation: Edge Analysis
EdgeBetweenness
Calculates or estimates the edge betweennesses in a graph.
http://igraph.org/python/doc/igraph.GraphBase-class.html#edge_betweenness
End of explanation
community_edge_betweenness_detection = g.community_edge_betweenness()
print(community_edge_betweenness_detection)
Explanation: community detection
Community structure detection based on the betweenness of the edges in the network.
http://igraph.org/python/doc/igraph.GraphBase-class.html#community_edge_betweenness
End of explanation |
9,015 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Function histogram
Synopse
Image histogram.
h = histogram(f)
f
Step1: Function Code for brute force implementation
Step2: Function code for bidimensional matrix implementation
Step3: Examples
Step4: Numerical examples
Step5: Example 1
Step6: Speed performance
Step7: Equation
$$ h(i) = \text{card}\{p \mid f(p)=i\} \\
\text{or} \\
h(i) = \sum_p \left\{
\begin{array}{l l}
1 & \quad \text{if} \ f(p) = i\\
0 & \quad \text{otherwise}\\
\end{array} \right.
$$ | Python Code:
import numpy as np
def histogram(f):
return np.bincount(f.ravel())
Explanation: Function histogram
Synopse
Image histogram.
h = histogram(f)
f: Input image. Pixel data type must be integer.
h: Output, integer vector.
Description
This function computes the number of occurrence of each pixel value.
The function histogram_eq is implemented to show an implementation based
on the equation of the histogram.
Function Code
End of explanation
def histogram_eq(f):
from numpy import amax, zeros, arange, sum
n = amax(f) + 1
h = zeros((n,),int)
for i in arange(n):
h[i] = sum(i == f)
return h
Explanation: Function Code for brute force implementation
End of explanation
def histogram_eq1(f):
import numpy as np
n = f.size
m = f.max() + 1
haux = np.zeros((m,n),int)
fi = f.ravel()
i = np.arange(n)
haux[fi,i] = 1
h = np.add.reduce(haux,axis=1)
return h
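# Tiny illustration (not in the original) of the indicator-matrix trick used by histogram_eq1:
# each pixel marks one entry of an (m x n) matrix, and summing along axis=1 counts the pixels per value.
f_demo = np.array([0, 2, 2, 1])
haux_demo = np.zeros((f_demo.max() + 1, f_demo.size), int)
haux_demo[f_demo, np.arange(f_demo.size)] = 1
print(haux_demo)
print(haux_demo.sum(axis=1))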
Explanation: Function code for bidimensional matrix implementation
End of explanation
testing = (__name__ == "__main__")
if testing:
! jupyter nbconvert --to python histogram.ipynb
import numpy as np
import sys,os
import matplotlib.image as mpimg
ia898path = os.path.abspath('../../')
if ia898path not in sys.path:
sys.path.append(ia898path)
import ia898.src as ia
Explanation: Examples
End of explanation
if testing:
f = np.array([[0,1,2,3,4],
[4,3,2,1,1]], 'uint8')
h = ia.histogram(f)
print(h.dtype)
print(h)
if testing:
h1 = ia.histogram_eq(f)
print(h1.dtype)
print(h1)
if testing:
h1 = ia.histogram_eq1(f)
print(h1.dtype)
print(h1)
Explanation: Numerical examples
End of explanation
if testing:
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
f = mpimg.imread('../data/woodlog.tif')
plt.imshow(f,cmap='gray')
if testing:
h = ia.histogram(f)
plt.plot(h)
Explanation: Example 1
End of explanation
if testing:
import numpy as np
from numpy.random import rand, seed
seed([10])
f = (255 * rand(1000,1000)).astype('uint8')
%timeit h = ia.histogram(f)
%timeit h1 = ia.histogram_eq(f)
%timeit h2 = ia.histogram_eq1(f)
Explanation: Speed performance
End of explanation
if testing:
print(ia.histogram(np.array([3,7,0,0,3,0,10,7,0,7])) == \
np.array([4, 0, 0, 2, 0, 0, 0, 3, 0, 0, 1]))
Explanation: Equation
$$ h(i) = \text{card}\{p \mid f(p)=i\} \\
\text{or} \\
h(i) = \sum_p \left\{
\begin{array}{l l}
1 & \quad \text{if} \ f(p) = i\\
0 & \quad \text{otherwise}\\
\end{array} \right.
$$
End of explanation |
9,016 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
An Introduction to pandas
Pandas! They are adorable animals. You might think they are the worst animal ever but that is not true. You might sometimes think pandas is the worst library ever, and that is only kind of true.
The important thing is to use the right tool for the job. pandas is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along.
Now let's start coding. Hopefully you did pip install pandas before you started up this notebook.
Step1: When you import pandas, you use import pandas as pd. That means instead of typing pandas in your code you'll type pd.
You don't have to, but every other person on the planet will be doing it, so you might as well.
Now we're going to read in a file. Our file is called NBA-Census-10.14.2013.csv because we're sports moguls. pandas can read_ different types of files, so try to figure it out by typing pd.read_ and hitting tab for autocomplete.
Step2: A dataframe is basically a spreadsheet, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day.
Selecting rows
Now let's look at our data, since that's what data is for
Step3: If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling?
Step4: ...but maybe we want to see more than a measly five results?
Step5: But maybe we want to make a basketball joke and see the final four?
Step6: So yes, head and tail work kind of like the terminal commands. That's nice, I guess.
But maybe we're incredibly demanding (which we are) and we want, say, the 6th through the 8th row (which we do). Don't worry (which I know you were), we can do that, too.
Step7: It's kind of like an array, right? Except where in an array we'd say df[0] this time we need to give it two numbers, the start and the end.
Selecting columns
But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age.
Step8: NOTE
Step9: I want to know how many people are in each position. Luckily, pandas can tell me!
Step10: Now that was a little weird, yes - we used df['POS'] instead of df[['POS']] when viewing the data's details.
But now I'm curious about numbers
Step11: Unfortunately because that has dollar signs and commas it's thought of as a string. We'll fix it in a second, but let's try describing one more thing.
Step12: That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some way to manipulate our data.
Manipulating data
Oh wait there is, HA HA HA.
Step13: Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do blahblah['feet'] = blahblah['Ht (In.)'] / 12, but since this is pandas, we can't. Right? Right?
Step14: That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer?
Step15: The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall.
But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys!
Sorting and sub-selecting
Step16: Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending.
Step17: But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them~ describe them! And we don't want to dunk on everyone, only the players above 7 feet tall.
First, we need to check out boolean things.
Step18: Drawing pictures
Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. Okay????? Okay.
Step19: matplotlib is a graphing library. It's the Python way to make graphs!
Step20: But that's ugly. There's a thing called ggplot for R that looks nice. We want to look nice. We want to look like ggplot.
Step21: That might look better with a little more customization. So let's customize it.
Step22: I want more graphics! Do tall people make more money?!?! | Python Code:
# import pandas, but call it pd. Why? Because that's What People Do.
import pandas as pd
Explanation: An Introduction to pandas
Pandas! They are adorable animals. You might think they are the worst animal ever but that is not true. You might sometimes think pandas is the worst library ever, and that is only kind of true.
The important thing is to use the right tool for the job. pandas is good for some stuff, SQL is good for some stuff, writing raw Python is good for some stuff. You'll figure it out as you go along.
Now let's start coding. Hopefully you did pip install pandas before you started up this notebook.
End of explanation
# We're going to call this df, which means "data frame"
# It isn't in UTF-8 (I saved it from my mac!) so we need to set the encoding
df = pd.read_csv("NBA-Census-10.14.2013.csv", encoding ="mac_roman")
#this is a data frame (df)
Explanation: When you import pandas, you use import pandas as pd. That means instead of typing pandas in your code you'll type pd.
You don't have to, but every other person on the planet will be doing it, so you might as well.
Now we're going to read in a file. Our file is called NBA-Census-10.14.2013.csv because we're sports moguls. pandas can read_ different types of files, so try to figure it out by typing pd.read_ and hitting tab for autocomplete.
End of explanation
# Let's look at all of it
df
Explanation: A dataframe is basically a spreadsheet, except it lives in the world of Python or the statistical programming language R. They can't call it a spreadsheet because then people would think those programmers used Excel, which would make them boring and normal and they'd have to wear a tie every day.
Selecting rows
Now let's look at our data, since that's what data is for
End of explanation
# Look at the first few rows
df.head() #shows first 5 rows
Explanation: If we scroll we can see all of it. But maybe we don't want to see all of it. Maybe we hate scrolling?
End of explanation
# Let's look at MORE of the first few rows
df.head(10)
Explanation: ...but maybe we want to see more than a measly five results?
End of explanation
# Let's look at the final few rows
df.tail(4)
Explanation: But maybe we want to make a basketball joke and see the final four?
End of explanation
# Show the 6th through the 8th rows
df[5:8]
Explanation: So yes, head and tail work kind of like the terminal commands. That's nice, I guess.
But maybe we're incredibly demanding (which we are) and we want, say, the 6th through the 8th row (which we do). Don't worry (which I know you were), we can do that, too.
End of explanation
# Get the names of the columns, just because
#columns_we_want = ['Name', 'Age']
#df[columns_we_want]
# If we want to be "correct" we add .values on the end of it
df.columns
# Select only name and age
# Combining that with .head() to see not-so-many rows
columns_we_want = ['Name', 'Age']
df[columns_we_want].head()
# We can also do this all in one line, even though it starts looking ugly
# (unlike the cute bears pandas looks ugly pretty often)
df[['Name', 'Age',]].head()
Explanation: It's kind of like an array, right? Except where in an array we'd say df[0] this time we need to give it two numbers, the start and the end.
Selecting columns
But jeez, my eyes don't want to go that far over the data. I only want to see, uh, name and age.
End of explanation
df.head()
Explanation: NOTE: That was not df['Name', 'Age'], it was df[['Name', 'Age']]. You'll definitely type it wrong all of the time. When things break with pandas it's probably because you forgot to put in a million brackets.
Describing your data
A powerful tool of pandas is being able to select a portion of your data, because who ordered all that data anyway.
End of explanation
# Grab the POS column, and count the different values in it.
df['POS'].value_counts()
Explanation: I want to know how many people are in each position. Luckily, pandas can tell me!
End of explanation
#race
race_counts = df['Race'].value_counts()
race_counts
# Summary statistics for Age
df['Age'].describe()
df.describe()
# That's pretty good. Does it work for everything? How about the money?
df['2013 $'].describe()
# The result isn't very useful, because the money column is stored as a string.
Explanation: Now that was a little weird, yes - we used df['POS'] instead of df[['POS']] when viewing the data's details.
But now I'm curious about numbers: how old is everyone? Maybe we could, I don't know, get some statistics about age? Some statistics to describe age?
End of explanation
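# A quick way to see that difference: single brackets give back a Series,
# double brackets give back a one-column DataFrame.
type(df['POS']), type(df[['POS']])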
# Doing more describing
df['Ht (In.)'].describe()
Explanation: Unfortunately because that has dollar signs and commas it's thought of as a string. We'll fix it in a second, but let's try describing one more thing.
End of explanation
# Take another look at our inches, but only the first few
df['Ht (In.)'].head()
# Divide those inches by 12
#number_of_inches = 300
#number_of_inches / 12
df['Ht (In.)'].head() / 12
# Let's divide ALL of them by 12
df['Ht (In.)'] / 12
# Can we get statistics on those?
height_in_feet = df['Ht (In.)'] / 12
height_in_feet.describe()
# Let's look at our original data again
df.head(3)
Explanation: That's stupid, though, what's an inch even look like? What's 80 inches? I don't have a clue. If only there were some way to manipulate our data.
Manipulating data
Oh wait there is, HA HA HA.
End of explanation
# Store a new column
df['feet'] = df['Ht (In.)'] / 12
df.head()
Explanation: Okay that was nice but unfortunately we can't do anything with it. It's just sitting there, separate from our data. If this were normal code we could do blahblah['feet'] = blahblah['Ht (In.)'] / 12, but since this is pandas, we can't. Right? Right?
End of explanation
# Can't just use .replace
# Need to use this weird .str thing
# Can't just immediately replace the , either
# Need to use the .str thing before EVERY string method
# Describe still doesn't work.
# Let's convert it to an integer using .astype(int) before we describe it
# Maybe we can just make them millions?
# Unfortunately one is "n/a" which is going to break our code, so we can make n/a be 0
# Remove the .head() piece and save it back into the dataframe
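# A rough sketch of the steps described above, assuming the salary column is
# named '2013 $' and the missing value really is the string "n/a"
# (the exact .str.replace arguments can vary between pandas versions):
salary = df['2013 $'].str.replace('$', '', regex=False).str.replace(',', '', regex=False)
df['salary'] = salary.replace('n/a', '0').astype(int)
df['millions'] = df['salary'] / 1000000
df['millions'].describe()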
Explanation: That's cool, maybe we could do the same thing with their salary? Take out the $ and the , and convert it to an integer?
End of explanation
# This is just the first few guys in the dataset. Can we order it?
# Let's try to sort them, ascending value
df.sort_values('feet')
Explanation: The average basketball player makes 3.8 million dollars and is a little over six and a half feet tall.
But who cares about those guys? I don't care about those guys. They're boring. I want the real rich guys!
Sorting and sub-selecting
End of explanation
# It isn't descending = True, unfortunately
df.sort_values('feet', ascending=False).head()
# We can use this to find the oldest guys in the league
df.sort_values('Age', ascending=False).head()
# Or the shortest, by sorting on 'feet' and taking out 'ascending=False'
df.sort_values('feet').head()
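# Assuming you built the numeric 'millions' column in the salary sketch above,
# the same trick ranks the big earners too:
df.sort_values('millions', ascending=False).head()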
Explanation: Those guys are making nothing! If only there were a way to sort from high to low, a.k.a. descending instead of ascending.
End of explanation
# Get a big long list of True and False for every single row.
df['feet'] > 6.5
# We could use value counts if we wanted
above_or_below_six_five = df['feet'] > 6.5
above_or_below_six_five.value_counts()
# But we can also apply this to every single row to say whether YES we want it or NO we don't
# Instead of putting column names inside of the brackets, we instead
# put the True/False statements. It will only return the players above
# 6.5 feet tall
df[df['feet'] > 6.5]
df['Race'] == 'Asian'
# Put that True/False test inside the brackets to pull out just those rows
df[df['Race'] == 'Asian']
# Or only the guards
df[df['POS'] == 'G'].head()
#People below 6 feet
df['feet'] < 6.5
# Every condition you want to query needs parentheses around it
# Guards that are shorter than 6.5 feet
# This is a combination of both filters
df[(df['POS'] == 'G') & (df['feet'] < 6.5)].head()
#We can save stuff
centers = df[df['POS'] == 'C']
guards = df[df['POS'] == 'G']
centers['feet'].describe()
guards['feet'].describe()
# It might be easier to break down the booleans into separate variables
# We can save this stuff
# Maybe we can compare them to taller players?
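# A sketch of combining tests with | ("or") instead of & - each test still
# needs its own parentheses:
df[(df['POS'] == 'C') | (df['feet'] > 6.9)].head()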
Explanation: But sometimes instead of just looking at them, I want to do stuff with them. Play some games with them! Dunk on them, describe them! And we don't want to dunk on everyone, only the players above 6.5 feet tall.
First, we need to check out boolean things.
End of explanation
# If we don't have matplotlib yet, .hist() will scream at us, so install it first.
!pip install matplotlib
df['feet'].hist()
Explanation: Drawing pictures
Okay okay enough code and enough stupid numbers. I'm visual. I want graphics. Okay????? Okay.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
df['feet'].hist()
# Without the %matplotlib inline magic above, plotting can open a weird window
# that won't do anything, so run it first and keep the charts in the notebook.
plt.style.use('fivethirtyeight')
df['feet'].hist()
Explanation: matplotlib is a graphing library. It's the Python way to make graphs!
End of explanation
# Import matplotlib
# What's available?
# Use ggplot
# Make a histogram
# Try some other styles
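# One way to fill in those steps (a sketch, not the only approach):
import matplotlib.pyplot as plt
# See what styles are available
plt.style.available
# Use ggplot and redraw the histogram
plt.style.use('ggplot')
df['feet'].hist()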
Explanation: But that's ugly. There's a thing called ggplot for R that looks nice. We want to look nice. We want to look like ggplot.
End of explanation
# Pass in all sorts of stuff!
# Most from http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.hist.html
# .range() is a matplotlib thing
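# For example (a sketch - tweak the bins and range to taste):
df['feet'].hist(bins=20, range=(5.5, 7.5))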
Explanation: That might look better with a little more customization. So let's customize it.
End of explanation
# How does experience relate with the amount of money they're making?
# At least we can assume height and weight are related
# http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
# We can also use plt separately
# It's SIMILAR but TOTALLY DIFFERENT
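# A sketch of both approaches, assuming the 'millions' column from the salary sketch exists:
df.plot(kind='scatter', x='feet', y='millions')
# The plt version of the same idea:
plt.scatter(df['feet'], df['millions'])
plt.show()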
Explanation: I want more graphics! Do tall people make more money?!?!
End of explanation |
9,017 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Part 15
Step1: To begin, let's import all the libraries we'll need and load the dataset (which comes bundled with Tensorflow).
Step2: Let's view some of the images to get an idea of what they look like.
Step3: Now we can create our GAN. Like in the last tutorial, it consists of two parts
Step4: Now to train it. As in the last tutorial, we write a generator to produce data. This time the data is coming from a dataset, which we loop over 100 times.
One other difference is worth noting. When training a conventional GAN, it is important to keep the generator and discriminator in balance thoughout training. If either one gets too far ahead, it becomes very difficult for the other one to learn.
WGANs do not have this problem. In fact, the better the discriminator gets, the cleaner a signal it provides and the easier it becomes for the generator to learn. We therefore specify generator_steps=0.2 so that it will only take one step of training the generator for every five steps of training the discriminator. This tends to produce faster training and better results.
Step5: Let's generate some data and see how the results look. | Python Code:
!curl -Lo conda_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import conda_installer
conda_installer.install()
!/root/miniconda/bin/conda info -e
!pip install --pre deepchem
import deepchem
deepchem.__version__
Explanation: Tutorial Part 15: Training a Generative Adversarial Network on MNIST
In this tutorial, we will train a Generative Adversarial Network (GAN) on the MNIST dataset. This is a large collection of 28x28 pixel images of handwritten digits. We will try to train a network to produce new images of handwritten digits.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
To run DeepChem within Colab, you'll need to run the following cell of installation commands. This will take about 5 minutes to run to completion and install your environment.
End of explanation
import deepchem as dc
import tensorflow as tf
from deepchem.models.optimizers import ExponentialDecay
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Reshape
import matplotlib.pyplot as plot
import matplotlib.gridspec as gridspec
%matplotlib inline
mnist = tf.keras.datasets.mnist.load_data(path='mnist.npz')
images = mnist[0][0].reshape((-1, 28, 28, 1))/255
dataset = dc.data.NumpyDataset(images)
Explanation: To begin, let's import all the libraries we'll need and load the dataset (which comes bundled with Tensorflow).
End of explanation
def plot_digits(im):
plot.figure(figsize=(3, 3))
grid = gridspec.GridSpec(4, 4, wspace=0.05, hspace=0.05)
for i, g in enumerate(grid):
ax = plot.subplot(g)
ax.set_xticks([])
ax.set_yticks([])
ax.imshow(im[i,:,:,0], cmap='gray')
plot_digits(images)
Explanation: Let's view some of the images to get an idea of what they look like.
End of explanation
class DigitGAN(dc.models.WGAN):
def get_noise_input_shape(self):
return (10,)
def get_data_input_shapes(self):
return [(28, 28, 1)]
def create_generator(self):
return tf.keras.Sequential([
Dense(7*7*8, activation=tf.nn.relu),
Reshape((7, 7, 8)),
Conv2DTranspose(filters=16, kernel_size=5, strides=2, activation=tf.nn.relu, padding='same'),
Conv2DTranspose(filters=1, kernel_size=5, strides=2, activation=tf.sigmoid, padding='same')
])
def create_discriminator(self):
return tf.keras.Sequential([
Conv2D(filters=32, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'),
Conv2D(filters=64, kernel_size=5, strides=2, activation=tf.nn.leaky_relu, padding='same'),
Dense(1, activation=tf.math.softplus)
])
gan = DigitGAN(learning_rate=ExponentialDecay(0.001, 0.9, 5000))
Explanation: Now we can create our GAN. Like in the last tutorial, it consists of two parts:
The generator takes random noise as its input and produces output that will hopefully resemble the training data.
The discriminator takes a set of samples as input (possibly training data, possibly created by the generator), and tries to determine which are which.
This time we will use a different style of GAN called a Wasserstein GAN (or WGAN for short). In many cases, they are found to produce better results than conventional GANs. The main difference between the two is in the discriminator (often called a "critic" in this context). Instead of outputting the probability of a sample being real training data, it tries to learn how to measure the distance between the training distribution and generated distribution. That measure can then be directly used as a loss function for training the generator.
We use a very simple model. The generator uses a dense layer to transform the input noise into a 7x7 image with eight channels. That is followed by two convolutional layers that upsample it first to 14x14, and finally to 28x28.
The discriminator does roughly the same thing in reverse. Two convolutional layers downsample the image first to 14x14, then to 7x7. A final dense layer produces a single number as output. In the last tutorial we used a sigmoid activation to produce a number between 0 and 1 that could be interpreted as a probability. Since this is a WGAN, we instead use a softplus activation. It produces an unbounded positive number that can be interpreted as a distance.
End of explanation
def iterbatches(epochs):
for i in range(epochs):
for batch in dataset.iterbatches(batch_size=gan.batch_size):
yield {gan.data_inputs[0]: batch[0]}
gan.fit_gan(iterbatches(100), generator_steps=0.2, checkpoint_interval=5000)
Explanation: Now to train it. As in the last tutorial, we write a generator to produce data. This time the data is coming from a dataset, which we loop over 100 times.
One other difference is worth noting. When training a conventional GAN, it is important to keep the generator and discriminator in balance throughout training. If either one gets too far ahead, it becomes very difficult for the other one to learn.
WGANs do not have this problem. In fact, the better the discriminator gets, the cleaner a signal it provides and the easier it becomes for the generator to learn. We therefore specify generator_steps=0.2 so that it will only take one step of training the generator for every five steps of training the discriminator. This tends to produce faster training and better results.
End of explanation
plot_digits(gan.predict_gan_generator(batch_size=16))
Explanation: Let's generate some data and see how the results look.
End of explanation |
9,018 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First Python Notebook
Step1: There. You've just written your first Python code. You've entered two integers (the 2's) and added them together using the plus sign operator. Not so bad, right?
Next, let's introduce one of the basics of computer programming, a variable.
Variables are like containers that hold different types of data so you can go back and refer to them later. They’re fundamental to programming in any language, and you’ll use them all the time when you're writing Python.
Move down to the next box. Now let's put that number two into our first variable.
Step2: In this case, we’ve created a variable called san and assigned it the integer value 2.
In Python, variable assignment is done with the = sign. On the left is the name of the variable you want to create (it can be anything) and on the right is the value that you want to assign to that variable.
If we use the print command on the variable, Python will output its contents to the terminal because that value is stored in the variable. Let's try it.
Step3: We can do the same thing again with a different variable name
Step4: Then add those two together the same way we added the numbers at the top.
Step5: Variables can contain many different kinds of data types. There are integers, strings, floating point numbers (decimals), lists and dictionaries.
Step6: Playing with data we invent can be fun, but it's a long way from investigative journalism.
Now's the time for us to get our hands on some real data and get some real work done.
Your assignment
Step7: Print that variable and you see that open has created a file "object" that offers a number of different ways to interact with the contents of the file.
Step8: One thing a file object can do is read in all of the data from the file. Let's do that next and store the contents in a new variable.
Step9: That's all good, but the data is printing out as one big long string. If we're going to do some real analysis, we need Python to recognize and respect the structure of our data, in the way an Excel spreadsheet would.
To do that, we're going to need something smarter than open. We're going to need something like pandas.
Act 3
Step10: Opening our CSV isn't any harder than with open, you just need to know the right trick to make it work.
Step11: Great. Now let's do it again and assign it to a variable this time
Step12: Now let's see what that returns when we print it.
Step13: Here's how you can see the first few rows
Step14: How many rows are there? Here's how to find out.
Step15: Even with that simple question and answer, we've begun the process of interviewing our data.
In some ways, your database is no different from a human source. Getting a good story requires careful, thorough questioning.
In the next section we will move ahead by conducting an interview with pandas to pursue our quest of finding out the biggest donors to Proposition 64.
Act 4
Step16: We've got it sorted the wrong way. Let's reverse it.
Step17: Now let's limit it to the top 10.
Step18: What is the total sum of contributions that have been reported?
First, let's get our hands on the column with our numbers in it. In pandas you can do that like so.
Step19: Now adding it up is this easy.
Step20: There's our big total. Why is it lower than the ones I quoted above? That's because campaigns are only required to report the names of donors over $200, so our data is missing all of the donors who gave smaller amounts of money.
The overall totals are reported elsewhere in lump sums and cannot be replicated by adding up the individual contributions. Understanding this is crucial to understanding not just this data, but all campaign finance data, which typically has this limitation.
Filtering
Adding up a big total is all well and good. But we're aiming for something more nuanced. We want to separate the money for the proposition from the money against it. To do that, we'll need to learn how to filter.
First let's look at the column we're going to filter by
Step21: Now let's filter using that column using pandas oddball method
Step22: Stick that in a variable
Step23: So now we can ask
Step24: Next
Step25: Now let's ask the same questions of the opposing side.
Step26: How about the sum total of contributions for each?
Step27: Grouping
One thing we noticed as we explored the data is that there are a lot of different committees. A natural question follows
Step28: Wow. That's pretty ugly. Why? Because pandas is weird.
To convert a raw dump like that into a clean table (known in pandas slang as a "dataframe") you can add the reset_index method to the end of your code. Don't ask why.
Step29: Now let's sort it by size
Step30: Okay. Committees are good. But what about something a little more interesting. Who has given the most money?
To do that, we'll group by the two name columns at the same time.
Step31: But which side where they are? Add in the position column to see that too.
Step32: Pretty cool, right? Now now that we've got this interesting list of people, let's see if we can make a chart out of it.
Act 5
Step33: Before we'll get started, let's run one more trick to configure matplotlib to show its charts in our notebook.
Step34: Now let's save the data we want to chart into a variable
Step35: Making a quick bar chart is as easy as this.
Step36: It's really those first five that are the most interesting, so let's trim our chart.
Step37: What are those y axis labels? Those are the row number (pandas calls them indexes) of each row. We don't want that. We want the names.
Step38: Okay, but what if I want to combine the first and last name?
First, make a new column. First let's look at what we have now.
Step39: In plain old Python, we created a string at the start of our less. Remember this?
Step40: Combining strings can be as easy as addition.
Step41: And if we want to get a space in there yet we can do something like
Step42: And guess what we can do the same thing with two columns in our table, and use a pandas trick that will apply it to every row.
Step43: Now let's see the results
Step44: Now let's chart that.
Step45: That's all well and good, but this chart is pretty ugly. If you wanted to hand this data off to your graphics department, or try your hand at a simple chart yourself using something like Chartbuilder, you'd need to export this data into a spreadsheet.
It's this easy. | Python Code:
2+2
Explanation: First Python Notebook: Scripting your way to the story
By Ben Welsh
A step-by-step guide to analyzing data with Python and the Jupyter Notebook.
This tutorial will teach you how to use computer programming tools to analyze data by exploring contributors to campaigns for and against Proposition 64, a ballot measure asking California voters to decide if recreational marijuana should be legalized.
This guide was developed by Ben Welsh for an Oct. 2, 2016, "watchdog workshop" organized by Investigative Reporters and Editors at San Diego State University's school of journalism. The class is designed for beginners who have zero Python experience.
Prelude: Prerequisites
Before you can begin, your computer needs the following tools installed and working to participate.
A command-line interface to interact with your computer
Version 2.7 of the Python programming language
The pip package manager and virtualenv environment manager for Python
Command-line interface
Unless something is wrong with your computer, there should be a way to open a window that lets you type in commands. Different operating systems give this tool slightly different names, but they all have some form of it, and there are alternative programs you can install as well.
On Windows you can find the command-line interface by opening the "command prompt." Here are instructions for Windows 10 and for Windows 8 and earlier versions. On Apple computers, you open the "Terminal" application. Ubuntu Linux comes with a program of the same name.
Python
If you are using Mac OSX or a common flavor of Linux, Python version 2.7 is probably already installed and you can test to see what version, if any, is already available by typing the following into your terminal.
```bash
python -V
```
Even if you find it already on your machine, Mac users should install it separately by following these instructions offered by The Hitchhikers Guide to Python.
Windows people can find a similar guide here which will have them try downloading and installing Python from here.
pip and virtualenv
The pip package manager makes it easy to install open-source libraries that expand what you're able to do with Python. Later, we will use it to install everything needed to create a working web application.
If you don't have it already, you can get pip by following these instructions. In Windows, it's necessary to make sure that the Python Scripts directory is available on your system's PATH so it can be called from anywhere on the command line. This screencast can help.
Verify pip is installed with the following.
```bash
pip -V
```
The virtualenv environment manager makes it possible to create an isolated corner of your computer where all the different tools you use to build an application are sealed off.
It might not be obvious why you need this, but it quickly becomes important when you need to juggle different tools
for different projects on one computer. By developing your applications inside separate virtualenv environments, you can use different versions of the same third-party Python libraries without a conflict. You can also more easily recreate your project on another machine, handy when you want to copy your code to a server that publishes pages on the Internet.
You can check if virtualenv is installed with the following.
```bash
virtualenv --version
```
If you don't have it, install it with pip.
```bash
pip install virtualenv
If you're on a Mac or Linux and get an error saying you lack permissions, try again as a superuser.
sudo pip install virtualenv
```
If that doesn't work, try following this advice.
Act 1: Hello Jupyter Notebook
Start by creating a new development environment with virtualenv in your terminal. Name it after our application.
```bash
virtualenv first-python-notebook
```
Jump into the directory it created.
```bash
cd first-python-notebook
```
Turn on the new virtualenv, which will instruct your terminal to only use those libraries installed
inside its sealed space. You only need to create the virtualenv once, but you'll need to repeat these
"activation" steps each time you return to working on this project.
```bash
In Linux or Mac OSX try this...
. bin/activate
In Windows it might take something more like...
cd Scripts
activate
cd ..
```
Use pip on the command line to install Jupyter Notebook, an open-source tool for writing and sharing Python scripts.
```bash
pip install jupyter
```
Start up the notebook from your terminal.
```bash
jupyter notebook
```
That will open up a new tab in your default web browser that looks something like this:
Click the "New" button in the upper right and create a new Python 2 notebook. Now you're all setup and ready to start writing code.
Act 2: Hello Python
You are now ready to roll within the Jupyter Notebook's framework for writing Python. Don't stress. There's nothing too fancy about it. You can start by just doing a little simple math. Type the following into the first box, then hit the play button in the toolbox (or hit SHIFT+ENTER on your keyboard).
End of explanation
san = 2
Explanation: There. You've just written your first Python code. You've entered two integers (the 2's) and added them together using the plus sign operator. Not so bad, right?
Next, let's introduce one of the basics of computer programming, a variable.
Variables are like containers that hold different types of data so you can go back and refer to them later. They’re fundamental to programming in any language, and you’ll use them all the time when you're writing Python.
Move down to the next box. Now let's put that number two into our first variable.
End of explanation
print san
Explanation: In this case, we’ve created a variable called san and assigned it the integer value 2.
In Python, variable assignment is done with the = sign. On the left is the name of the variable you want to create (it can be anything) and on the right is the value that you want to assign to that variable.
If we use the print command on the variable, Python will output its contents to the terminal because that value is stored in the variable. Let's try it.
End of explanation
diego = 2
Explanation: We can do the same thing again with a different variable name
End of explanation
san + diego
Explanation: Then add those two together the same way we added the numbers at the top.
End of explanation
string = "Hello"
decimal = 1.2
list_of_strings = ["a", "b", "c", "d"]
list_of_integers = [1, 2, 3, 4]
list_of_whatever = ["a", 2, "c", 4]
my_phonebook = {'Mom': '713-555-5555', 'Chinese Takeout': '573-555-5555'}
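# You can pull items back out by position (lists) or by key (dictionaries):
list_of_strings[0]
my_phonebook['Chinese Takeout']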
Explanation: Variables can contain many different kinds of data types. There are integers, strings, floating point numbers (decimals), lists and dictionaries.
End of explanation
data_file = open("./first-python-notebook.csv", "r")
Explanation: Playing with data we invent can be fun, but it's a long way from investigative journalism.
Now's the time for us to get our hands on some real data and get some real work done.
Your assignment: Proposition 64.
The use and sale of marijuana for recreational purposes is illegal in California. Proposition 64, scheduled to appear on the November 2016 ballot, asked voters if it ought to be legalized. A "yes" vote would support legalization. A "no" vote would oppose it. A similar measure, Proposition 19, was defeated in 2010.
According to California's Secretary of State, more than $16 million was been raised to campaign in support of Prop. 64 as of September 20. Just over 2 million was been raised to oppose it.
Your mission, should you choose to accept it, is to download a list of campaign contributors and figure out the biggest donors both for and against the measure.
Click here to download the file as a list of comma-separated values. This is known as a CSV file. It is the most common way you will find data published online. Save the file with the name first-python-notebook.csv in the same directory where you made this notebook.
Python can read files using the built-in open function. You feed two things into it: 1) The path to the file; 2) What type of operation you'd like it to execute on the file. "r" stands for read.
End of explanation
print data_file
Explanation: Print that variable and you see that open has created a file "object" that offers a number of different ways to interact with the contents of the file.
End of explanation
data = data_file.read()
print data
Explanation: One thing a file object can do is read in all of the data from the file. Let's do that next and store the contents in a new variable.
End of explanation
import pandas
Explanation: That's all good, but the data is printing out as one big long string. If we're going to do some real analysis, we need Python to recognize and respect the structure of our data, in the way an Excel spreadsheet would.
To do that, we're going to need something smarter than open. We're going to need something like pandas.
Act 3: Hello pandas
Lucky for us, Python already has tools filled with functions to do pretty much anything you’d ever want to do with a programming language: navigate the web, parse data, interact with a database, run fancy statistics, build a pretty website and so much more.
Some of those tools are included a toolbox that comes with the language, known as the standard library. Others have been built by members of Python's developer community and need to be downloaded and installed from the web.
For this exercise, we're going to install and use pandas, a tool developed by a financial investment firm that has become the leading open-source tool for accessing and analyzing data.
There are several others we could use instead (like agate) but we're picking pandas here because it's the most popular and powerful.
We'll install pandas the same way we installed the Jupyter Notebook earlier: Our friend pip. Save your notebook, switch to your window/command prompt and hit CTRL-C. That will kill your notebook and return you to the command line. There we'll install pandas.
bash
pip install pandas
Now let's restart our notebook and get back to work.
bash
jupyter notebook
Use the next open box to import pandas into our script, so we can use all its fancy methods here in our script.
End of explanation
pandas.read_csv("./first-python-notebook.csv")
Explanation: Opening our CSV isn't any harder than with open, you just need to know the right trick to make it work.
End of explanation
table = pandas.read_csv("./first-python-notebook.csv")
Explanation: Great. Now let's do it again and assign it to a variable this time
End of explanation
table.info()
Explanation: Now let's see what that returns when we print it.
End of explanation
table.head()
Explanation: Here's how you can see the first few rows
End of explanation
print len(table)
Explanation: How many rows are there? Here's how to find out.
End of explanation
table.sort_values("AMOUNT")
Explanation: Even with that simple question and answer, we've begun the process of interviewing our data.
In some ways, your database is no different from a human source. Getting a good story requires careful, thorough questioning.
In the next section we will move ahead by conducting an interview with pandas to pursue our quest of finding out the biggest donors to Proposition 64.
Act 4: Hello analysis
Let's start with something easy. What are the ten biggest contributions?
That will require a sort using the column with the money in it.
End of explanation
table.sort_values("AMOUNT", ascending=False)
Explanation: We've got it sorted the wrong way. Let's reverse it.
End of explanation
table.sort_values("AMOUNT", ascending=False).head(10)
Explanation: Now let's limit it to the top 10.
End of explanation
table['AMOUNT']
Explanation: What is the total sum of contributions that have been reported?
First, let's get our hands on the column with our numbers in it. In pandas you can do that like so.
End of explanation
table['AMOUNT'].sum()
Explanation: Now adding it up is this easy.
End of explanation
table['COMMITTEE_POSITION']
Explanation: There's our big total. Why is it lower than the ones I quoted above? That's because campaigns are only required to report the names of donors over $200, so our data is missing all of the donors who gave smaller amounts of money.
The overall totals are reported elsewhere in lump sums and cannot be replicated by adding up the individual contributions. Understanding this is crucial to understanding not just this data, but all campaign finance data, which typically has this limitation.
Filtering
Adding up a big total is all well and good. But we're aiming for something more nuanced. We want to separate the money for the proposition from the money against it. To do that, we'll need to learn how to filter.
First let's look at the column we're going to filter by
End of explanation
table[table['COMMITTEE_POSITION'] == 'SUPPORT']
Explanation: Now let's filter using that column using pandas oddball method
End of explanation
support_table = table[table['COMMITTEE_POSITION'] == 'SUPPORT']
Explanation: Stick that in a variable
End of explanation
print len(support_table)
Explanation: So now we can ask: How many contributions does the supporting side have?
End of explanation
support_table.sort_values("AMOUNT", ascending=False).head(10)
Explanation: Next: What are the 10 biggest supporting contributions?
End of explanation
oppose_table = table[table['COMMITTEE_POSITION'] == 'OPPOSE']
print len(oppose_table)
oppose_table.sort_values("AMOUNT", ascending=False).head(10)
Explanation: Now let's ask the same questions of the opposing side.
End of explanation
support_table['AMOUNT'].sum()
oppose_table['AMOUNT'].sum()
Explanation: How about the sum total of contributions for each?
End of explanation
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum()
Explanation: Grouping
One thing we noticed as we explored the data is that there are a lot of different committees. A natural question follows: Which ones have raised the most money?
To figure that out, we'll need to group the data by that column and sum up the amount column for each. Here's how pandas does that.
End of explanation
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum().reset_index()
Explanation: Wow. That's pretty ugly. Why? Because pandas is weird.
To convert a raw dump like that into a clean table (known in pandas slang as a "dataframe") you can add the reset_index method to the end of your code. Don't ask why.
End of explanation
table.groupby("COMMITTEE_NAME")['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
Explanation: Now let's sort it by size
End of explanation
table.groupby(["FIRST_NAME", "LAST_NAME"])['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
Explanation: Okay. Committees are good. But what about something a little more interesting. Who has given the most money?
To do that, we'll group by the two name columns at the same time.
End of explanation
table.groupby([
"FIRST_NAME",
"LAST_NAME",
"COMMITTEE_POSITION"
])['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False)
Explanation: But which side where they are? Add in the position column to see that too.
End of explanation
import matplotlib.pyplot as plt
Explanation: Pretty cool, right? Now now that we've got this interesting list of people, let's see if we can make a chart out of it.
Act 5: Hello viz
Python has a number of charting tools that can work hand in hand with pandas. The most popular is matplotlib. It isn't the prettiest thing in the world, but it offers some reasonably straightfoward tools for making quick charts. And, best of all, it can display right here in our Jupyter Notebook.
Before we start, we'll need to make sure matplotlib is installed. Return to your terminal and try installing it with our buddy pip, as we installed other things before.
bash
pip install matplotlib
Once you've got it in here, you can import it just as we would anything else. Though by adding the optional as option at the end we can create a shorter alias for accessing its tools.
End of explanation
%matplotlib inline
Explanation: Before we'll get started, let's run one more trick to configure matplotlib to show its charts in our notebook.
End of explanation
top_supporters = support_table.groupby(
["FIRST_NAME", "LAST_NAME"]
)['AMOUNT'].sum().reset_index().sort_values("AMOUNT", ascending=False).head(10)
top_supporters
Explanation: Now let's save the data we want to chart into a variable
End of explanation
top_supporters['AMOUNT'].plot.bar()
top_supporters['AMOUNT'].plot.barh()
Explanation: Making a quick bar chart is as easy as this.
End of explanation
top_supporters.head(5)['AMOUNT'].plot.barh()
Explanation: It's really those first five that are the most interesting, so let's trim our chart.
End of explanation
chart = top_supporters.head(5)['AMOUNT'].plot.barh()
chart.set_yticklabels(top_supporters['LAST_NAME'])
Explanation: What are those y axis labels? Those are the row number (pandas calls them indexes) of each row. We don't want that. We want the names.
End of explanation
top_supporters.head(5)
Explanation: Okay, but what if I want to combine the first and last name?
First, make a new column. First let's look at what we have now.
End of explanation
print string
Explanation: In plain old Python, we created a string at the start of our less. Remember this?
End of explanation
print string + "World"
Explanation: Combining strings can be as easy as addition.
End of explanation
print string + " " + "World"
Explanation: And if we want to get a space in there yet we can do something like:
End of explanation
top_supporters['FULL_NAME'] = top_supporters['FIRST_NAME'] + " " + top_supporters['LAST_NAME']
Explanation: And guess what we can do the same thing with two columns in our table, and use a pandas trick that will apply it to every row.
End of explanation
top_supporters.head()
Explanation: Now let's see the results
End of explanation
chart = top_supporters.head(5)['AMOUNT'].plot.barh()
chart.set_yticklabels(top_supporters['FULL_NAME'])
Explanation: Now let's chart that.
End of explanation
top_supporters.head(5).to_csv("top_supporters.csv")
Explanation: That's all well and good, but this chart is pretty ugly. If you wanted to hand this data off to your graphics department, or try your hand at a simple chart yourself using something like Chartbuilder, you'd need to export this data into a spreadsheet.
It's this easy.
End of explanation |
9,019 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This tutorial takes you through the basics of analysing Mitty data with some help from cytoolz and pandas
Step1: Ex1
Step2: Ex2
Step3: Ex3
Step4: Ex2
More involved example showing, in sequence
Step5: The new concept here is the use of a dictionary of filters to supply to the categorization function. The result is stored in the all_counts and nr_counts dictionaries which need to be preallocated and passed to the counting function which modifes them.
Step6: Ex3
Alignment metrics plotting example
Read BAMs
Compute PairedAlignmentHistogram
Plot slices of the alignment
Step7: The plot suggests that the error in aligning mate1 is largely unrelated to the error in aligning mate2 except for some cases where both mates are off by about the same amount in the same direction (top, right and bottom left corners) or in opposite directions.
Step9: Ex4 | Python Code:
%load_ext autoreload
%autoreload 2
import time
import matplotlib.pyplot as plt
import cytoolz.curried as cyt
from bokeh.plotting import figure, show, output_file
import mitty.analysis.bamtoolz as bamtoolz
import mitty.analysis.bamfilters as mab
import mitty.analysis.plots as mapl
# import logging
# FORMAT = "[%(filename)s:%(lineno)s - %(funcName)20s() ] %(message)s"
# logging.basicConfig(format=FORMAT, level=logging.DEBUG)
# fname = '../../../mitty-demo-data/filter-demo-data/HG00119-bwa.bam'
# scar_fname = '../../../mitty-demo-data/generating-reads/HG00119-truncated-reads-corrupt-lq.txt'
# 180255 read pairs
fname = '../../../mitty-demo-data/alignment-accuracy/HG00119-bwa.bam'
scar_fname = '../../../mitty-demo-data/generating-reads/HG00119-reads-corrupt-lq.txt'
Explanation: This tutorial takes you through the basics of analysing Mitty data with some help from cytoolz and pandas
End of explanation
max_d = 200
n1 = mab.parse_read_qnames(scar_fname) # This gives us a curried function
n2 = mab.compute_derr(max_d=max_d)
f_dict = {
'd = 0': lambda mate: mate['d_err'] == 0,
'0 < d <= 50': lambda mate: 0 < mate['d_err'] <= 50,
'50 < d': lambda mate: 50 < mate['d_err'] <= max_d,
'WC': lambda mate: mate['d_err'] == max_d + 1,
'UM': lambda mate: mate['d_err'] == max_d + 2
}
n3 = mab.categorize_reads(f_dict)
n4 = mab.count_reads
pipeline = [n1, n2, n3, n4]
%%time
counts = cyt.pipe(
bamtoolz.read_bam_st(bam_fname=fname), *pipeline)
print(counts, sum(counts.values(), 0))
Explanation: Ex1: Simple processing pipeline
Simple example showing how to create a processing pipeline and run it using cytoolz.pipe
The components of the pipeline are
Parse read qname
Compute d_err
Categorize reads into some categories
Count 'em
End of explanation
%%time
counts = sum(
(c for c in bamtoolz.scatter(
pipeline, bam_fname=fname, ncpus=4)
), mab.Counter())
print(counts, sum(counts.values(), 0))
max_d = 200
n1 = mab.parse_read_qnames(scar_fname) # This gives us a curried function
n2 = mab.compute_derr(max_d=max_d)
f_dict = {
'd = 0': lambda mate: mate['d_err'] == 0,
'0 < d <= 50': lambda mate: 0 < mate['d_err'] <= 50,
'50 < d': lambda mate: 50 < mate['d_err'] <= max_d,
'WC': lambda mate: mate['d_err'] == max_d + 1,
'UM': lambda mate: mate['d_err'] == max_d + 2
}
n3 = mab.categorize_reads(f_dict)
n4 = mab.count_reads
pipeline = [n1, n2, n3, n4]
%%time
counts = sum((c for c in bamtoolz.scatter(pipeline, bam_fname=fname, paired=True, ncpus=2)), mab.Counter())
print(counts, sum(counts.values(), 0))
Explanation: Ex2: Single reads and scatter
We now take the original pipeline and pass it to scatter to perform the same operation, but in parallel
End of explanation
max_d = 200
n1 = mab.parse_read_qnames(scar_fname) # This gives us a curried function
n2 = mab.compute_derr(max_d=max_d)
p1 = mab.default_histogram_parameters() # The full 8D histogram
p1
p2 = mab.default_histogram_parameters() # We'll do a detailed dive into MQ and d_err here
p2['mq_bin_edges'] = list(range(62))
p2['v_bin_edges'] = None
p2['t_bin_edges'] = None
p2['xt_bin_edges'] = None
p2['v_bin_edges'] = None
n3 = mab.histograminator(histogram_def=[p1, p2])
pipeline = [n1, n2, n3]
%%time
h8 = cyt.pipe(
bamtoolz.read_bam_paired_st(bam_fname=fname),
*pipeline)
h8[0].attrs
h8[1].sum()
p2 = mab.collapse(h8[0], xd1=None, xd2=None)
mapl.plot_hist2D(p2)
plt.show()
p2 = mab.collapse(h8[1], mq1=None)
mapl.plot_hist1D(p2)
plt.show()
p2 = mab.collapse(h8[1], mq2=None)
mapl.plot_hist1D(p2)
plt.show()
p2 = mab.collapse(h8[1], mq1=None, mq2=None)
mapl.plot_hist2D(p2)
plt.show()
max_d = 200
n1 = mab.parse_read_qnames(scar_fname) # This gives us a curried function
n2 = mab.compute_derr(max_d=max_d)
p1 = mab.initialize_pah(name='FullHist')
n3 = mab.histogramize(pah=p1)
pipeline = [n1, n2, n3]
%%time
h8 = None
for h in bamtoolz.scatter(
pipeline,
bam_fname=fname, paired=True, ncpus=2):
print(h.sum())
if h8 is None:
h8 = h
else:
h8 += h
print(h8.sum())
p2 = mab.collapse(h8, xd1=None, xd2=None)
mapl.plot_hist(p2)
plt.show()
h8.sum()
h10.sum()
Explanation: Ex3: Alignment histogram
End of explanation
r1 = mab.read_bam(bam_fname=fname)
r2 = mab.make_pairs
r3 = mab.parse_read_qnames(scar_fname)
r4 = mab.compute_derr(max_d=200)
f_dict = {
'd = 0': lambda mate: mate['d_err'] == 0,
'0 < d <= 50': lambda mate: 0 < mate['d_err'] <= 50,
'50 < d': lambda mate: 50 < mate['d_err'] < 200,
'WC': lambda mate: 200 < mate['d_err'],
'UM': lambda mate: mate['read'].is_unmapped
}
r5 = mab.categorize_reads(f_dict)
all_counts = {}
r6 = mab.count_reads(all_counts)
r7 = mab.filter_reads(mab.non_ref(), all)
r8 = mab.categorize_reads(f_dict)
nr_counts = {}
r9 = mab.count_reads(nr_counts)
for r in cyt.pipe(r1, r2, r3, r4, r5, r6, r7, r8, r9):
pass
Explanation: Ex2
More involved example showing, in sequence:
Reading from a BAM
Pairing up the reads
Parse qnames (this is a simulated data set)
Compte d_err for the reads
Categorize reads based on d_err
Count reads in each category
Filter to keep non-reference reads only. Keep a pair only if both reads are non-ref
Re-Categorize reads based on d_err
Re-Count reads in each category
At the end the category counts are comparatively plotted.
End of explanation
mapl.plot_read_counts(ax=plt.subplot(1, 1, 1),
counts_l=[all_counts, nr_counts],
labels=['All', 'non-ref'],
keys=['d = 0', '0 < d <= 50', '50 < d', 'WC', 'UM'],
colors=None)
plt.show()
Explanation: The new concept here is the use of a dictionary of filters to supply to the categorization function. The result is stored in the all_counts and nr_counts dictionaries which need to be preallocated and passed to the counting function which modifes them.
End of explanation
max_d = 200
r1 = mab.read_bam(bam_fname=fname)
r2 = mab.make_pairs
r3 = mab.parse_read_qnames(scar_fname)
r4 = mab.compute_derr(max_d=200)
p1 = mab.initialize_pah(name='FullHist')
mab.histogramize(pah=p1)(cyt.pipe(r1, r2, cyt.take(100000), r3, r4))
#mab.save_pah(p1, 'test_hist.pklz')
p2 = mab.collapse(p1, xd1=None, xd2=None)
mapl.plot_hist(p2)
plt.show()
Explanation: Ex3
Alignment metrics plotting example
Read BAMs
Compute PairedAlignmentHistogram
Plot slices of the alignment
End of explanation
p2 = mab.collapse(p1, xd1=None, xd2=None)
mapl.plot_hist(p2)
plt.show()
p2 = mab.collapse(p1, mq1=None, mq2=None)
mapl.plot_hist(p2)
plt.show()
p2.attrs
p2 = mab.collapse(p1, xd1=None, mq1=None)
mapl.plot_hist(p2)
plt.show()
p1.coords['v1']
p2_r = mab.collapse(p1, xt=None, v1=(5, 6), v2=(5, 6))
p2_snp = mab.collapse(p1, xt=None, v1=(2, 3), v2=(5, 6))
plt.semilogy(p2_r/p2_r.max(), label='Ref')
plt.semilogy(p2_snp/p2_snp.max(), label='SNP(1)')
plt.legend(loc='best')
plt.xticks(range(p2_r.shape[0]), p2_r.coords['xt'].data, rotation='vertical')
plt.show()
p2_1 = mab.collapse(p1, xt=None, mq1=None)
p2_2 = mab.collapse(p1, xt=None, mq2=None)
fig, ax = plt.subplots(1, 2, figsize=(13, 5))
plt.subplots_adjust(wspace=0.5)
mapl.plot_hist(p2_1.T, ax[0])
mapl.plot_hist(p2_2.T, ax[1])
plt.show()
p2 = mab.collapse(p1, t=None, mq1=None)
mapl.plot_hist(p2)
plt.show()
p2 = mab.collapse(p1, mq1=None)
plt.semilogy(p2)
plt.xticks(range(p2.coords['mq1'].values.size), p2.coords['mq1'].values, rotation='vertical')
plt.show()
p2 = mab.collapse(p1, xd1=None, xd2=None, v1=(1, 2), v2=(5, 6))
mab.plot_hist(p2)
plt.show()
r1 = mab.read_bam(bam_fname=fname)
for r in cyt.pipe(r1, cyt.take(20)):
tostring(r[0]['read'])
read = r[0]['read']
read.tostring(mab.pysam.AlignmentFile(fname))
mab.pysam.AlignmentFile?
mab.plot_hist(p2)
plt.show()
p2.coords[p2.dims[0]].values
ax = plt.subplot(1,1,1)
mapl.plot_mean_MQ_vs_derr(ax=ax, dmv_mat=dmv_mat, fmt='yo', ms=5)
plt.show()
p1 = mab.initialize_pah()
len(p1.coords)
ax1 = plt.subplot(1,1,1)
mapl.plot_perr_vs_MQ(ax=ax1, dmv_mat=dmv_mat, yscale='log')
ax2 = ax1.twinx()
mapl.plot_read_count_vs_MQ(ax=ax2, dmv_mat=dmv_mat)
ax2.set_ylabel('Read count', color='r')
ax2.tick_params('y', colors='r')
ax1.legend(loc='lower right', fontsize=9)
plt.show()
for n, v_bin_label in enumerate(
['Ref', 'SNP', 'DEL <= 10']):
ax = plt.subplot(1, 3, n + 1)
mapl.hengli_plot(ax=ax, dmv_mat=dmv_mat, v_bin_label=v_bin_label)
plt.show()
Explanation: The plot suggests that the error in aligning mate1 is largely unrelated to the error in aligning mate2 except for some cases where both mates are off by about the same amount in the same direction (top, right and bottom left corners) or in opposite directions.
End of explanation
r1 = mab.read_bam(bam_fname=fname, sidecar_fname=scar_fname)
r2 = mab.compute_derr(max_d=200)
r3 = mab.to_df(tags=['NM'])
df = cyt.pipe(r1, r2, cyt.take(20), r3)
df
r1 = mab.read_bam(bam_fname=fname, sidecar_fname=scar_fname)
r2 = mab.compute_derr(max_d=200)
r3 = mab.make_pairs
r4 = mab.to_df(tags=['NM'])
df = cyt.pipe(r1, r2, r3, cyt.take(20), r4)
df
import io
import pysam
fp = pysam.AlignmentFile(fname)
fout_str = io.StringIO()
pysam.AlignmentFile(fout_str, 'r')
read.tostring(mab.pysam.AlignmentFile(fname)).split('\t')
def fromstring(s, ref_dict):
Inverse of pysam.AlignedSegment.tostring(): given a string, create an aligned segment
:param s:
:param ref_dict: ref_dict = {r: n for n, r in enumerate(fp.references)}
:return:
def _split(_s):
qname, flag, rname, pos, \
mapping_quality, cigarstring, \
rnext, pnext, template_length, seq, qual, *_tg = _s.split('\t')
flag = int(flag)
rname = ref_dict[rname]
pos = int(pos)
mapping_quality = int(mapping_quality)
rnext = ref_dict[rnext]
pnext = int(pnext)
template_length = int(template_length)
return qname, flag, rname, pos, \
mapping_quality, cigarstring, \
rnext, pnext, template_length, seq, qual, _tg
# So close, pysam.tostring, so close
def _tags(_t):
_tl = _t.split(':')
if _tl[1] == 'i':
_tl[2] = int(_tl[2])
elif _tl[1] == 'f':
_tl[2] = float(_tl[2])
return _tl[0], _tl[2], _tl[1]
r = pysam.AlignedSegment()
r.qname, r.flag, r.rname, r.pos, \
r.mapping_quality, r.cigarstring, \
r.rnext, r.pnext, r.template_length, r.seq, r.qual, tags = _split(s)
r.set_tags([_tags(t) for t in tags])
return r
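# Build the ref_dict the docstring describes, reusing the BAM file opened above as fp:
ref_dict = {r: n for n, r in enumerate(fp.references)}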
read2 = fromstring(read.tostring(mab.pysam.AlignmentFile(fname)), ref_dict)
read2.tostring(mab.pysam.AlignmentFile(fname)).split()
read.tostring(mab.pysam.AlignmentFile(fname)).split()
fp = bamtoolz.pysam.AlignmentFile(fname)
fp.unmapped
import logging
logging.debug("hello world")
mab.save_histogram(p2, 'test_me.pklz')
p3 = mab.load_histogram('test_me.pklz')
p3.dims
p3.dims
p3
Explanation: Ex4: Dataframes
End of explanation |
9,020 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Toy weather data
Here is an example of how to easily manipulate a toy weather dataset using
xarray and other recommended Python libraries
Step1: Examine a dataset with pandas and seaborn
Convert to a pandas DataFrame
Step2: Visualize using pandas
Step3: Visualize using seaborn
Step4: Probability of freeze by calendar month
Step5: Monthly averaging
Step6: Note that MS here refers to Month-Start; M labels Month-End (the last day of the month).
Calculate monthly anomalies
In climatology, "anomalies" refer to the difference between observations and
typical weather for a particular season. Unlike observations, anomalies should
not show any seasonal cycle.
Step7: Calculate standardized monthly anomalies
You can create standardized anomalies where the difference between the
observations and the climatological monthly mean is
divided by the climatological standard deviation.
Step8: Fill missing values with climatology
The fillna method on grouped objects lets you easily fill missing values by group | Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import xarray as xr
np.random.seed(123)
xr.set_options(display_style="html")
times = pd.date_range("2000-01-01", "2001-12-31", name="time")
annual_cycle = np.sin(2 * np.pi * (times.dayofyear.values / 365.25 - 0.28))
base = 10 + 15 * annual_cycle.reshape(-1, 1)
tmin_values = base + 3 * np.random.randn(annual_cycle.size, 3)
tmax_values = base + 10 + 3 * np.random.randn(annual_cycle.size, 3)
ds = xr.Dataset(
{
"tmin": (("time", "location"), tmin_values),
"tmax": (("time", "location"), tmax_values),
},
{"time": times, "location": ["IA", "IN", "IL"]},
)
ds
Explanation: Toy weather data
Here is an example of how to easily manipulate a toy weather dataset using
xarray and other recommended Python libraries:
End of explanation
df = ds.to_dataframe()
df.head()
df.describe()
Explanation: Examine a dataset with pandas and seaborn
Convert to a pandas DataFrame
End of explanation
ds.mean(dim="location").to_dataframe().plot()
Explanation: Visualize using pandas
End of explanation
sns.pairplot(df.reset_index(), vars=ds.data_vars)
Explanation: Visualize using seaborn
End of explanation
freeze = (ds["tmin"] <= 0).groupby("time.month").mean("time")
freeze
freeze.to_pandas().plot()
Explanation: Probability of freeze by calendar month
End of explanation
monthly_avg = ds.resample(time="1MS").mean()
monthly_avg.sel(location="IA").to_dataframe().plot(style="s-")
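# Note: resample(time="1M") ("ME" in newer pandas) would label each bin with the
# month's last day instead of its first - a quick comparison:
ds.resample(time="1M").mean()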
Explanation: Monthly averaging
End of explanation
climatology = ds.groupby("time.month").mean("time")
anomalies = ds.groupby("time.month") - climatology
anomalies.mean("location").to_dataframe()[["tmin", "tmax"]].plot()
Explanation: Note that MS here refers to Month-Start; M labels Month-End (the last day of the month).
Calculate monthly anomalies
In climatology, "anomalies" refer to the difference between observations and
typical weather for a particular season. Unlike observations, anomalies should
not show any seasonal cycle.
End of explanation
climatology_mean = ds.groupby("time.month").mean("time")
climatology_std = ds.groupby("time.month").std("time")
stand_anomalies = xr.apply_ufunc(
lambda x, m, s: (x - m) / s,
ds.groupby("time.month"),
climatology_mean,
climatology_std,
)
stand_anomalies.mean("location").to_dataframe()[["tmin", "tmax"]].plot()
Explanation: Calculate standardized monthly anomalies
You can create standardized anomalies where the difference between the
observations and the climatological monthly mean is
divided by the climatological standard deviation.
End of explanation
# throw away the first half of every month
some_missing = ds.tmin.sel(time=ds["time.day"] > 15).reindex_like(ds)
filled = some_missing.groupby("time.month").fillna(climatology.tmin)
both = xr.Dataset({"some_missing": some_missing, "filled": filled})
both
df = both.sel(time="2000").mean("location").reset_coords(drop=True).to_dataframe()
df.head()
df[["filled", "some_missing"]].plot()
Explanation: Fill missing values with climatology
The fillna method on grouped objects lets you easily fill missing values by group:
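As a quick sanity check (a sketch, assuming the filled object above): every month has a climatology value to fall back on, so no missing values should remain after filling:
int(filled.isnull().sum())  # expected to be 0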
End of explanation |
9,021 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started with the Gluon Interface
In this example from the Github repository we will build and train a simple two-layer artificial neural network (ANN) called a multilayer perceptron.
First, we need to import mxnet and MXNet's implementation of the gluon specification.
We will also need autograd, ndarray, and numpy.
Step1: Next, we use gluon.data.DataLoader, Gluon's data iterator, to hold the training and test data.
Iterators are a useful object class for traversing through large datasets.
We pass Gluon's DataLoader a helper, gluon.data.vision.MNIST, that will pre-process the MNIST handwriting dataset, getting into the right size and format, using parameters to tell it which is test set and which is the training set.
Step2: Now, we are ready to define the actual neural network, and we can do so in five simple lines of code.
First, we initialize the network with net = gluon.nn.Sequential().
Then, with that net, we create three layers using gluon.nn.Dense
Step3: Prior to kicking off the model training process, we need to initialize the model’s parameters and set up the loss with gluon.loss.SoftmaxCrossEntropyLoss() and model optimizer functions with gluon.Trainer.
As with creating the model, these normally complicated functions are distilled to one line of code each.
Step4: Running the training is fairly typical and all the while using Gluon's functionality to make the process simple and seamless.
There are four steps | Python Code:
import mxnet as mx
from mxnet import gluon, autograd, ndarray
import numpy as np
Explanation: Getting Started with the Gluon Interface
In this example from the Github repository we will build and train a simple two-layer artificial neural network (ANN) called a multilayer perceptron.
First, we need to import mxnet and MXNet's implementation of the gluon specification.
We will also need autograd, ndarray, and numpy.
End of explanation
train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=lambda data, label:
(data.astype(np.float32)/255, label)), batch_size=32, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=lambda data, label:
(data.astype(np.float32)/255, label)),batch_size=32, shuffle=False)
Explanation: Next, we use gluon.data.DataLoader, Gluon's data iterator, to hold the training and test data.
Iterators are a useful object class for traversing through large datasets.
We pass Gluon's DataLoader a helper, gluon.data.vision.MNIST, that will pre-process the MNIST handwriting dataset, getting into the right size and format, using parameters to tell it which is test set and which is the training set.
End of explanation
# Initialize the model:
net = gluon.nn.Sequential()
# Define the model architecture:
with net.name_scope():
# The first layer has 128 nodes:
net.add(gluon.nn.Dense(128, activation="relu"))
# The second layer has 64 nodes:
net.add(gluon.nn.Dense(64, activation="relu"))
# The output layer has 10 possible outputs:
net.add(gluon.nn.Dense(10))
Explanation: Now, we are ready to define the actual neural network, and we can do so in five simple lines of code.
First, we initialize the network with net = gluon.nn.Sequential().
Then, with that net, we create three layers using gluon.nn.Dense:
The first will have 128 nodes, and the second will have 64 nodes.
They both incorporate the relu by passing that into the activation function parameter.
The final layer for our model, gluon.nn.Dense(10), is used to set up the output layer with the number of nodes corresponding to the total number of possible outputs.
In our case with MNIST, there are only 10 possible outputs because the pictures represent numerical digits of which there are only 10 (i.e., 0 to 9).
End of explanation
# Begin with pseudorandom values for all of the model's parameters from a normal distribution
# with a standard deviation of 0.05:
net.collect_params().initialize(mx.init.Normal(sigma=0.05))
# Use the softmax cross entropy loss function to measure how well the model is able to predict
# the correct answer:
softmax_cross_entropy = gluon.loss.SoftmaxCrossEntropyLoss()
# Use stochastic gradient descent to train the model and set the learning rate hyperparameter to .1:
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': .1})
Explanation: Prior to kicking off the model training process, we need to initialize the model’s parameters and set up the loss with gluon.loss.SoftmaxCrossEntropyLoss() and model optimizer functions with gluon.Trainer.
As with creating the model, these normally complicated functions are distilled to one line of code each.
End of explanation
epochs = 10
for e in range(epochs):
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(mx.cpu()).reshape((-1, 784))
label = label.as_in_context(mx.cpu())
# Start calculating and recording the derivatives:
with autograd.record():
# Optimize parameters -- Forward iteration:
output = net(data)
loss = softmax_cross_entropy(output, label)
loss.backward()
trainer.step(data.shape[0])
# Record statistics on the model's performance over each epoch:
curr_loss = ndarray.mean(loss).asscalar()
print("Epoch {}. Current Loss: {}.".format(e, curr_loss))
Explanation: Running the training is fairly typical and all the while using Gluon's functionality to make the process simple and seamless.
There are four steps:
1) pass in a batch of data;
2) calculate the difference between the output generated by the neural network model and the actual truth (i.e., the loss);
3) use Gluon's autograd to calculate the derivatives of the model’s parameters with respect to their impact on the loss;
4) use Gluon's trainer method to optimize the parameters in a way that will decrease the loss.
We set the number of epochs at 10, meaning that we will cycle through the entire training dataset 10 times.
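A natural follow-up, not part of the original walkthrough, is to measure accuracy on the held-out test set. The sketch below assumes the net and test_data objects defined earlier:
acc = mx.metric.Accuracy()
for data, label in test_data:
    data = data.as_in_context(mx.cpu()).reshape((-1, 784))
    predictions = ndarray.argmax(net(data), axis=1)
    acc.update(preds=predictions, labels=label.as_in_context(mx.cpu()))
print("Test accuracy:", acc.get())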
End of explanation |
9,022 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create Schema
Using the # operator you can execute sql statements as a script
Note
Step1: Insert some data
Let's insert a single user into the database using the ! operator
Step2: Or insert a user and get the row id using the <! operator
Step3: Now lets add some blogs
Step4: Let's publish some blogs in bulk
Step5: Query some Data
Step7: Load and Run query from string | Python Code:
help(queries.create_schema)
queries.create_schema(conn)
Explanation: Create Schema
Using the # operator you can execute sql statements as a script
Note: Variable substitution is not possible
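For reference, aiosql picks these operators up from suffixes on the -- name: lines of the loaded SQL file. A hypothetical entry backing the call above might look like this (illustrative only, not the actual schema file):
# -- name: create_schema#
# create table if not exists users (
#     userid integer primary key autoincrement,
#     username text not null,
#     firstname text not null,
#     lastname text not null
# );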
End of explanation
queries.add_user(conn, **{"username": "badger77", "firstname": "Mike", "lastname": "Jones"})
Explanation: Insert some data
Let's insert a single user into the database using the ! operator
End of explanation
userid = queries.add_user2(conn, **{"username": "honeybadger77", "firstname": "Micheal", "lastname": "Klandor"})
print(userid)
Explanation: Or insert a user and get the row id using the <! operator
End of explanation
blogid = queries.publish_blog2(conn, userid=userid, title="Hi", content="blah blah.")
print(blogid)
Explanation: Now let's add some blogs
End of explanation
blogs = [
{"userid": 1, "title": "First Blog", "content": "...", "published": dt.datetime(2018, 1, 1)},
{"userid": 1, "title": "Next Blog", "content": "...", "published": dt.datetime(2018, 1, 2)},
{"userid": 2, "title": "Hey, Hey!", "content": "...", "published": dt.datetime(2018, 7, 28)},
{"userid": 2, "title": "adipiscing fringilla", "content": "porttitor vulputate, posuere vulputate, lacus. Cras interdum.", "published": dt.datetime(2018, 7, 28)},
{"userid": 2, "title": "adipiscing fringilla", "content": "porttitor vulputate, posuere vulputate, lacus. Cras interdum.", "published": dt.datetime(2018, 7, 28)},
{"userid": 2, "title": "porttitor vulputate", "content": "posuere vulputate, lacus. Cras interdum.", "published": dt.datetime(2018, 7, 28)}
]
queries.bulk_publish(conn, blogs)
Explanation: Let's publish some blogs in bulk
End of explanation
queries.get_user_count(conn)
users = queries.get_users(conn)
pp.pprint(list(map(dict, users)), indent=2, compact=True, width=120)
queries.get_blog_count(conn)
blogs = queries.get_blogs(conn)
pp.pprint(list(map(dict, blogs)), indent=2, width=120)
Explanation: Query some Data
End of explanation
sql_str = """
-- name: get_user_blogs
-- Get blogs with a fancy formatted published date and author field
select b.blogid,
b.title,
strftime('%Y-%m-%d %H:%M', b.published) as published,
u.username as author
from blogs b
inner join users u on b.userid = u.userid
where u.username = :username
order by b.published desc;
"""
qstr = aiosql.from_str(sql_str, "sqlite3")
user_blogs = qstr.get_user_blogs(conn, username="honeybadger77")
pp.pprint(list(map(dict, user_blogs)), indent=2, width=120)
Explanation: Load and Run query from string
End of explanation |
9,023 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Probability distributions - 1
ToC
- Axioms of probability
- Conditional probability
- Bayesian conditional probability
- Random variables
- Properties of discrete random variables
- Binomial and Poisson discrete random variables
Axioms of probability
$$
0 \leq P(A) \leq 1
\\
P(A) + P(\bar A) = 1
\\
P(A \text{ or } B) = P(A) + P(B)
$$
Probability ranges from 0 to 1. The sum of P(A) and the probability of its complement is 1. For mutually exclusive events A and B, the probability of either A or B occurring is the sum of their probabilities.
Mutually exclusive
Step1: Conditional Probability
Generally, conditional probability is more helpful in explaining a situation than unconditional probabilities.
Given two events A and B with non-zero probabilities, the probability of A occurring, given that B has occurred, is
$$
P(A|B) = \frac{P(A \cap B)}{P(B)}
$$
and
$$
P(B|A) = \frac{P(A \cap B)}{P(A)}
$$
$P(A|B)$, the probability of A occurring given that B occurs, is the ratio of the probability that both A and B occur, $P(A \cap B)$, to the probability that B occurs, $P(B)$. Thus if A and B are mutually exclusive, the conditional probability is zero.
Example
Consider the case of insurance fraud. In the table below, you are given the insurance type and how many claims of each type are fraudulent. | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
from matplotlib_venn import venn2
venn2(subsets = (0.45, 0.15, 0.05), set_labels = ('A', 'B'))
Explanation: Probability distributions - 1
ToC
- Axioms of probability
- Conditional probability
- Bayesian conditional probability
- Random variables
- Properties of discrete random variables
- Binomial and Poisson discrete random variables
Axioms of probability
$$
0 \leq P(A) \leq 1
\\
P(A) + P(\bar A) = 1
\\
P(A \text{ or } B) = P(A) + P(B)
$$
Probability ranges from 0 to 1. The sum of P(A) and the probability of its complement is 1. For mutually exclusive events A and B, the probability of either A or B occurring is the sum of their probabilities.
Mutually exclusive: two events are mutually exclusive if, in a single trial, the occurrence of one of them excludes the occurrence of the other.
For any two events, the probabilities of their union and intersection are related by
$$
P(A \cup B) = P(A) + P(B) - P(A \cap B)
$$
If A and B are mutually exclusive, then $P(A \cap B) = 0$.
The reason we subtract $P(A \cap B)$ can be seen from the Venn diagram: the probabilities of A and B are 0.5 and 0.2, and the probability of both A and B occurring is 0.05, so we subtract the intersection to avoid counting it twice.
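As a quick numeric check (using the same illustrative numbers as the Venn diagram):
p_a, p_b, p_both = 0.5, 0.2, 0.05
print(p_a + p_b - p_both)  # P(A or B) = 0.65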
End of explanation
import pandas as pd
df = pd.DataFrame([[6,1,3,'Fraudulent'],[14,29,47,'Not Fraudulent']],
                  columns=['Fire', 'Auto','Other','Status'])
df
Explanation: Conditional Probability
Generally, conditional probability is more helpful in explaining a situation than unconditional probabilities.
Given two events A and B with non-zero probabilities, the probability of A occurring, given that B has occurred, is
$$
P(A|B) = \frac{P(A \cap B)}{P(B)}
$$
and
$$
P(B|A) = \frac{P(A \cap B)}{P(A)}
$$
$P(A|B)$, the probability of A occurring given that B occurs, is the ratio of the probability that both A and B occur, $P(A \cap B)$, to the probability that B occurs, $P(B)$. Thus if A and B are mutually exclusive, the conditional probability is zero.
Example
Consider the case of insurance fraud. In the table below, you are given the insurance type and how many claims of each type are fraudulent.
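As a worked example based on the counts above (which total 100 claims), the probability that a claim is fraudulent given that it is a fire claim is:
p_fire = (6 + 14) / 100.0          # P(Fire)
p_fraud_and_fire = 6 / 100.0       # P(Fraud and Fire)
print(p_fraud_and_fire / p_fire)   # P(Fraud | Fire) = 0.3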
End of explanation |
9,024 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Construct a 5x3 matrix, uninitialized
Step1: Construct a randomly initialized matrix
Step2: Construct a matrix filled zeros and of dtype long
Step3: Construct a tensor directly from data
Step4: or create a tensor based on an existing tensor. These methods will reuse properties of the input tensor, e.g. dtype, unless new values are provided by user
Step5: Addition
Step6: NumPy Bridge
Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
The Torch Tensor and NumPy array will share their underlying memory locations,
and changing one will change the other.
Converting a Torch Tensor to a NumPy Array
Step7: Converting NumPy Array to Torch Tensor
Step8: CUDA Tensors
Tensors can be moved onto any device using the .to method.
Step9: Autograd
Step10: Gradients
Let’s backprop now. Because out contains a single scalar, out.backward() is equivalent to out.backward(torch.tensor(1))
Step11: You can also stop autograd from tracking history on Tensors with
.requires_grad=True by wrapping the code block in with torch.no_grad()
Step12: Neural Networks
Neural networks can be constructed using the torch.nn package.
Now that you had a glimpse of autograd, nn depends on autograd to define models and
differentiate them. An nn.Module contains layers, and a method forward(input) that returns the output.
Step13: Note
torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are a mini-batch of samples, and not a single sample.
For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width.
If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.
Step14: Update the weights
The simplest update rule used in practice is the Stochastic Gradient Descent (SGD)
Step15: Training a classifier
Loading and normalizing CIFAR10
Step16: Define a Convolution Neural Network
Step17: Define a Loss function and optimizer
Step18: Train the network
Step19: Test the network on the test data
We will check this by predicting the class label that the neural network outputs,
and checking it against the ground-truth. If the prediction is correct, we add the
sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
Step20: The outputs are energies for the 10 classes. Higher the energy for a class,
the more the network thinks that the image is of the particular class. So, let’s get the index of the highest energy
Step21: Let us look at how the network performs on the whole dataset.
Step22: Hmmm, what are the classes that performed well, and the classes that did not perform well
Step23: Training on GPU
Step24: Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors
Step25: Remember that you will have to send the inputs and targets at every step to the GPU too | Python Code:
x = torch.empty(5,3)
print(x)
Explanation: Construct a 5x3 matrix, uninitialized:
End of explanation
x = torch.rand(5,3)
print(x)
Explanation: Construct a randomly initialized matrix:
End of explanation
x = torch.zeros(5,3,dtype=torch.long)
print(x)
Explanation: Construct a matrix filled zeros and of dtype long:
End of explanation
x = torch.tensor([5.5,3])
print(x)
Explanation: Construct a tensor directly from data:
End of explanation
x = x.new_ones(5,3,dtype=torch.double) #new_* methods take in sizes
print(x)
x = torch.randn_like(x, dtype=torch.float) # override dtype!
print(x) # result has the same size
#Get its size:
print(x.size())
Explanation: or create a tensor based on an existing tensor. These methods will reuse properties of the input tensor, e.g. dtype, unless new values are provided by user
End of explanation
#syntax 1
y = torch.rand(5,3)
print(x + y)
#syntax 2
print(torch.add(x,y))
#providing an output tensor as argument
result = torch.empty(5,3)
torch.add(x,y,out=result)
print(result)
#in-place
#add x to y
y.add_(x)
print(y)
#Any operation that mutates a tensor in-place is post-fixed with an _.
#For example: x.copy_(y), x.t_(), will change x.
#You can use standard NumPy-like indexing with all bells and whistles!
print(x[:,1])
#Resizing: If you want to resize/reshape tensor, you can use torch.view:
x = torch.randn(4,4)
y = x.view(16)
z = x.view(-1,8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
#If you have a one element tensor, use .item() to get the value as a Python number
x = torch.randn(1)
print(x)
print(x.item())
Explanation: Addition:
End of explanation
a = torch.ones(5)
print(a)
b = a.numpy()
print(b)
#See how the numpy array changed in value.
a.add_(1)
print(a)
print(b)
Explanation: NumPy Bridge
Converting a Torch Tensor to a NumPy array and vice versa is a breeze.
The Torch Tensor and NumPy array will share their underlying memory locations,
and changing one will change the other.
Converting a Torch Tensor to a NumPy Array
End of explanation
import numpy as np
a = np.ones(5)
b = torch.from_numpy(a)
np.add(a,1,out=a)
print(a)
print(b)
Explanation: Converting NumPy Array to Torch Tensor
End of explanation
# let us run this cell only if CUDA is available
# We will use ``torch.device`` objects to move tensors in and out of GPU
if torch.cuda.is_available():
device = torch.device("cuda") # a CUDA device object
y = torch.ones_like(x, device=device) # directly create a tensor on GPU
x = x.to(device) # or just use strings ``.to("cuda")``
z = x + y
print(z)
print(z.to("cpu", torch.double)) # ``.to`` can also change dtype together!
Explanation: CUDA Tensors
Tensors can be moved onto any device using the .to method.
End of explanation
x = torch.ones(2,2,requires_grad = True)
print(x)
y = x + 2
print(y)
print(y.grad_fn) #y was created as a result of an operation, so it has a grad_fn.
z = y*y*3
out = z.mean()
print(z, out)
a = torch.randn(2,2)
a = ((a*3)/(a-1))
print(a.requires_grad)
a.requires_grad_(True)
print(a.requires_grad)
b = (a*a).sum()
print(b.grad_fn)
Explanation: Autograd: automatic differentiation
End of explanation
out.backward()
print(x.grad)
x = torch.randn(3,requires_grad = True)
y = x*2
while y.data.norm() < 1000:
y = y*2
print(y)
y
y.data.norm()
0.7772*0.7772+0.5340*0.5340+4.4317*4.4317
4.5309*4.5309
gradients = torch.tensor([0.1,1.0,0.001],dtype=torch.float)
y.backward(gradients)
print(x.grad)
Explanation: Gradients
Let’s backprop now. Because out contains a single scalar, out.backward() is equivalent to out.backward(torch.tensor(1))
End of explanation
print(x.requires_grad)
print((x**2).requires_grad)
with torch.no_grad():
print((x**2).requires_grad)
Explanation: You can also stop autograd from tracking history on Tensors with
.requires_grad=True by wrapping the code block in with torch.no_grad():
End of explanation
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# 1 input image channel, 6 output channels, 5x5 square convolution kernel
self.conv1 = nn.Conv2d(1, 6, 5)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16*5*5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = F.max_pool2d(F.relu(self.conv1(x)), (2,2))
x = F.max_pool2d(F.relu(self.conv2(x)),(2))# If the size is a square you can only specify a single number
x = x.view(-1,self.num_flat_features(x))
x = F.relu((self.fc1(x)))
x = F.relu((self.fc2(x)))
x = self.fc3(x)
return x
def num_flat_features(self,x):
size = x.size()[1:]# all dimensions except the batch dimension
num_features = 1
for s in size:
num_features *= s
return num_features
net = Net()
print(net)
params = list(net.parameters())
print(len(params))
print(params[0].size())# conv1's .weight
for i in range(len(params)):
print(i,params[i].size())
input = torch.randn(1,1,32,32)
out = net(input)
print(out)
net.zero_grad()
out.backward(torch.randn(1,10))
Explanation: Neural Networks
Neural networks can be constructed using the torch.nn package.
Now that you had a glimpse of autograd, nn depends on autograd to define models and
differentiate them. An nn.Module contains layers, and a method forward(input) that returns the output.
End of explanation
output = net(input)
target = torch.randn(10)
target = target.view(1,-1)
criterion = nn.MSELoss()
loss = criterion(output,target)
print(loss)
print(loss.grad_fn)# MSELoss
print(loss.grad_fn.next_functions[0][0])# Linear
print(loss.grad_fn.next_functions[0][0].next_functions[0][0]) # ReLU
net.zero_grad()# zeroes the gradient buffers of all parameters
print('conv1.bias.grad before backward')
print(net.conv1.bias.grad)
loss.backward()
print('conv1.bias.grad after backward')
print(net.conv1.bias.grad)
Explanation: Note
torch.nn only supports mini-batches. The entire torch.nn package only supports inputs that are a mini-batch of samples, and not a single sample.
For example, nn.Conv2d will take in a 4D Tensor of nSamples x nChannels x Height x Width.
If you have a single sample, just use input.unsqueeze(0) to add a fake batch dimension.
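For instance (a minimal sketch with made-up shapes):
single = torch.randn(1, 32, 32)   # nChannels x Height x Width
batch = single.unsqueeze(0)       # shape becomes 1 x 1 x 32 x 32
print(single.size(), batch.size())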
End of explanation
learning_rate = 0.01
for f in net.parameters():
f.data.sub_(f.grad.data * learning_rate)
import torch.optim as optim
# create your optimizer
optimizer = optim.SGD(net.parameters(),lr=0.01)
#in your training loop:
optimizer.zero_grad()#zero the gradient buffers
output = net(input)
loss = criterion(output, target)
loss.backward()
optimizer.step() #does the update
Explanation: Update the weights
The simplest update rule used in practice is the Stochastic Gradient Descent (SGD):
weight = weight - learning_rate * gradient
We can implement this using simple python code:
End of explanation
import torch
import torchvision
import torchvision.transforms as transforms
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True, download = True, transform = transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)
testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform = transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
import matplotlib.pyplot as plt
import numpy as np
def imshow(img):
img = img/2 + 0.5 #unnormalize
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1,2,0)))
#get some random traning images
dataiter = iter(trainloader)
images, labels = dataiter.next()
#show images
imshow(torchvision.utils.make_grid(images))
#print labels
print(''.join('%5s' % classes[labels[j]] for j in range(4)))
Explanation: Training a classifier
Loading and normalizing CIFAR10
End of explanation
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16*5*5,120)
self.fc2 = nn.Linear(120,84)
self.fc3 = nn.Linear(84,10)
def forward(self,x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16*5*5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
Explanation: Define a Convolution Neural Network
End of explanation
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum = 0.9)
Explanation: Define a Loss function and optimizer
End of explanation
for epoch in range(2):
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
#get inputs
inputs, labels = data
#zero the params gridents
optimizer.zero_grad()
#forward,backward,optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
#print statistic
running_loss += loss.item()
if i%2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' % (epoch+1, i+1, running_loss /2000))
running_loss = 0.0
print('Finished Training')
Explanation: Train the network
End of explanation
dataiter = iter(testloader)
images, labels = dataiter.next()
#print images
imshow(torchvision.utils.make_grid(images))
print('GroundTruth:', ' '.join('%5s'%classes[labels[j]] for j in range(4)))
#Okay, now let us see what the neural network thinks these examples above are:
outputs = net(images)
Explanation: Test the network on the test data
We will check this by predicting the class label that the neural network outputs,
and checking it against the ground-truth. If the prediction is correct, we add the
sample to the list of correct predictions.
Okay, first step. Let us display an image from the test set to get familiar.
End of explanation
_, predicted = torch.max(outputs, 1)
print('Predicted: ', ' '.join('%5s' % classes[predicted[j]] for j in range(4)))
Explanation: The outputs are energies for the 10 classes. Higher the energy for a class,
the more the network thinks that the image is of the particular class. So, let’s get the index of the highest energy:
End of explanation
correct = 0
total = 0
with torch.no_grad():
for data in testloader:
images, labels = data
outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the 10000 test images: %d %%' % (100 * correct / total))
Explanation: Let us look at how the network performs on the whole dataset.
End of explanation
class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
for data in testloader:
        images, labels = data
outputs = net(images)
_, predicted = torch.max(outputs, 1)
c = (predicted == labels).squeeze()
for i in range(4):
label = labels[i]
class_correct[label] += c[i].item()
class_total[label] += 1
for i in range(10):
print('Accuracy of %5s : %2d %%' % (
classes[i], 100 * class_correct[i] / class_total[i]))
Explanation: Hmmm, what are the classes that performed well, and the classes that did not perform well:
End of explanation
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Assume that we are on a CUDA machine, then this should print a CUDA device:
print(device)
Explanation: Training on GPU
End of explanation
net.to(device)
Explanation: Then these methods will recursively go over all modules and convert their parameters and buffers to CUDA tensors:
End of explanation
inputs, labels = inputs.to(device), labels.to(device)
Explanation: Remember that you will have to send the inputs and targets at every step to the GPU too:
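In the context of the training loop defined earlier, that line goes right after unpacking each batch (sketch):
# for i, data in enumerate(trainloader, 0):
#     inputs, labels = data
#     inputs, labels = inputs.to(device), labels.to(device)
#     ...  # rest of the loop unchanged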
End of explanation |
9,025 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
from collections import Counter
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
word_counts = Counter(text)
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {index: word for index, word in enumerate(sorted_vocab)}
vocab_to_int = {word: index for index, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
token_dict = {}
token_dict['.'] = '||Period||'
token_dict[','] = '||Comma||'
token_dict['"'] = '||Quotation_Mark||'
token_dict[';'] = '||Semicolon||'
token_dict['!'] = '||Exclamation_Mark||'
token_dict['?'] = '||Question_Mark||'
token_dict['('] = '||Left_Parentheses||'
token_dict[')'] = '||Right_Parentheses||'
token_dict['--'] = '||Dash||'
token_dict['\n'] = '||Return||'
return token_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name = "input")
targets = tf.placeholder(tf.int32, [None, None], name = "target")
learning_rate = tf.placeholder(tf.float32, name = "learning_rate")
return inputs, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm] * 1)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name = "initial_state")
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name="final_state")
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
output, final_state = build_rnn(cell, embed)
predictions = tf.contrib.layers.fully_connected(output, vocab_size, activation_fn=None)
return predictions, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
n_batches = int(len(int_text) // (batch_size * seq_length))
int_text_x = np.array(int_text[:n_batches * (batch_size * seq_length)])
int_text_y = np.append(np.array(int_text[1:n_batches * (batch_size * seq_length)]), int_text[0])
int_text_sequences = [int_text_x[i*seq_length:i*seq_length+seq_length] for i in range(0, n_batches * batch_size)]
int_text_targets = [int_text_y[i*seq_length:i*seq_length+seq_length] for i in range(0, n_batches * batch_size)]
output = []
for batch in range(n_batches):
inputs = []
targets = []
for size in range(batch_size):
inputs.append(int_text_sequences[size * n_batches + batch])
targets.append(int_text_targets[size * n_batches + batch])
output.append([inputs, targets])
return np.array(output)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For exmple, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
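A quick way to sanity-check an implementation against this example (a sketch):
batches = get_batches(list(range(1, 21)), 3, 2)
print(batches.shape)  # expected (3, 2, 3, 2)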
End of explanation
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 500
# RNN Size
rnn_size = 256
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 25
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 20
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input_tensor = loaded_graph.get_tensor_by_name(name = "input:0")
initialstate_tensor = loaded_graph.get_tensor_by_name(name = "initial_state:0")
finalstate_tensor = loaded_graph.get_tensor_by_name(name = "final_state:0")
probs_tensor = loaded_graph.get_tensor_by_name(name = "probs:0")
return input_tensor, initialstate_tensor, finalstate_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
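The implementation above always takes the most probable word; an alternative worth trying (a hypothetical variant, not required by the tests) is to sample from the distribution, which tends to produce less repetitive scripts:
def pick_word_sampled(probabilities, int_to_vocab):
    # Sample the next word id in proportion to its predicted probability.
    idx = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[idx]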
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
9,026 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing tutor-student matching with rate-based simulations
Step1: Define target motor programs
Step2: Choose target
Step12: General definitions
Here we define some classes and functions that will be used to run all the simulations we are interested in.
Step15: Create the data for the mismatch matrix
Some generic code
We first define a function that can calculate this for a variety of conditions, such as 'sum' or 'push pull' motor controllers, constraining conductor--student weights to be positive, etc. Then we use this function for all the cases of interest.
Step16: Generate mismatch matrix for 'sum' controller
Step17: Generate bigger mismatch matrix for 'sum' controller
Step18: Generate mismatch matrix for 'pushpull' controller
Step19: Generate mismatch matrix when weights are constrained positive
With 'sum' controller.
Step20: Generate mismatch matrix when $\tau_{1,2}$ are small
Step21: Generate mismatch matrix when $\tau_{1,2}$ are large
Step24: Create credit mis-assignment data
Some generic code for credit mis-assignment
We first define a function that can calculate this for a variety of conditions, such as 'sum' or 'push pull' motor controllers, constraining conductor--student weights to be positive, etc. Then we use this function for all the cases of interest.
Step25: Generate credit mismatch data for 'sum' controller
Step26: Generate credit mismatch data for 'pushpull' controller, no subdivision
Step27: Generate credit mismatch data for 'pushpull' controller, subdivide by 2
Here we allow the excitatory and inhibitory contributions within the same output channel to be mismatched.
Step29: Make figures
Step30: Tutor-student mismatch heatmap and convergence map
Mismatch 'sum' controller
Step31: Mismatch 'sum' controller, bigger matrix
Step32: Mismatch 'pushpull' controller
Step33: Mismatch positive weights
Step34: Mismatch small $\tau_{1,2}$
Step35: Mismatch large $\tau_{1,2}$
Step38: Convergence plots
Step39: Convergence 'sum' controller
Short timescale (convergence 'sum')
Step40: Long timescale (convergence 'sum')
Step41: Convergence 'pushpull' controller
Short timescale (convergence 'pushpull')
Step42: Long timescale (convergence 'pushpull')
Step43: Convergence comparison 'sum' vs. 'pushpull'
'Sum' vs 'pushpull' short timescale
Step44: 'Sum' vs 'pushpull' long timescale
Step45: Effect of firing rate constraint
Firing rate constraint short timescale
Step46: Firing rate constraint long timescale
Step47: Stepwise learning with 'pushpull' controller
Step48: Stepwise learning
Step49: Credit misassignment figures
Credit misassignment figures definitions
Step50: Credit mismatch figures for 'sum' controller
Step51: Credit mismatch figures for 'pushpull' controller with no subdivision
Step52: Credit mismatch figures for 'pushpull' controller with subdivision by 2
Step55: Non-HVC-like conductor
Here we run some simulations in which the conductor can fire arbitrary patterns, and is not restricted to HVC-like firing in which each neuron fires a single burst during the whole duration of the motor program.
Step58: Non-linear (but monotonic) student--output relation
Step61: Convergence with different kernels | Python Code:
%matplotlib inline
import matplotlib as mpl
import matplotlib.ticker as mtick
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
plt.rc('text', usetex=True)
plt.rc('font', family='serif', serif='cm')
plt.rcParams['figure.titlesize'] = 10
plt.rcParams['axes.labelsize'] = 8
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['axes.labelpad'] = 3.0
from IPython.display import display, clear_output
from ipywidgets import FloatProgress
# comment out the next line if not working on a retina-display computer
import IPython
IPython.display.set_matplotlib_formats('retina')
import numpy as np
import copy
import time
import os
import cPickle as pickle
import simulation
from basic_defs import *
from helpers import *
Explanation: Testing tutor-student matching with rate-based simulations
End of explanation
tmax = 600.0 # duration of motor program (ms)
dt = 1.0 # simulation timestep (ms)
nsteps = int(tmax/dt)
times = np.arange(0, tmax, dt)
# add some noise, but keep things reproducible
np.random.seed(0)
target_complex = 100.0*np.vstack((
np.convolve(np.sin(times/100 + 0.1*np.random.randn(len(times)))**6 +
np.cos(times/150 + 0.2*np.random.randn(len(times)) + np.random.randn())**4,
np.exp(-0.5*np.linspace(-3.0, 3.0, 200)**2)/np.sqrt(2*np.pi)/80, mode='same'),
np.convolve(np.sin(times/110 + 0.15*np.random.randn(len(times)) + np.pi/3)**6 +
np.cos(times/100 + 0.2*np.random.randn(len(times)) + np.random.randn())**4,
np.exp(-0.5*np.linspace(-3.0, 3.0, 200)**2)/np.sqrt(2*np.pi)/80, mode='same'),
))
# or start with something simple: constant target
target_const = np.vstack((70.0*np.ones(len(times)), 50.0*np.ones(len(times))))
# or something simple but not trivial: steps
target_piece = np.vstack((
np.hstack((20.0*np.ones(len(times)/2), 100.0*np.ones(len(times)/2))),
np.hstack((60.0*np.ones(len(times)/2), 30.0*np.ones(len(times)/2)))))
targets = {'complex': target_complex, 'piece': target_piece, 'constant': target_const}
Explanation: Define target motor programs
End of explanation
# choose one target
target_choice = 'complex'
#target_choice = 'constant'
target = copy.copy(targets[target_choice])
# make sure the target smoothly goes to zero at the edges
# this is to match the spiking simulation, which needs some time to ramp
# up in the beginning and time to ramp down at the end
edge_duration = 100.0 # ms
edge_len = int(edge_duration/dt)
tapering_x = np.linspace(0.0, 1.0, edge_len, endpoint=False)
tapering = (3 - 2*tapering_x)*tapering_x**2
target[:, :edge_len] *= tapering
target[:, -edge_len:] *= tapering[::-1]
Explanation: Choose target
End of explanation
class ProgressBar(object):
A callable that displays a widget progress bar and can also make a plot showing
the learning trace.
def __init__(self, simulator, show_graph=True, graph_step=50, max_error=1000):
self.t0 = None
self.float = None
self.show_graph = show_graph
self.graph_step = graph_step
self.simulator = simulator
self.max_error = max_error
self.print_last = True
def __call__(self, i, n):
t = time.time()
if self.t0 is None:
self.t0 = t
t_diff = t - self.t0
current_res = self.simulator._current_res
text = 'step: {} ; time elapsed: {:.1f}s'.format(i, t_diff)
if len(current_res) > 0:
last_error = current_res[-1]['average_error']
if last_error <= self.max_error:
text += ' ; last error: {:.2f}'.format(last_error)
else:
text += ' ; last error: very large'
if self.float is None:
self.float = FloatProgress(min=0, max=100)
display(self.float)
else:
percentage = min(round(i*100.0/n), 100)
self.float.value = percentage
self.float.description = text
if self.show_graph and i % self.graph_step == 0:
crt_res = [_['average_error'] for _ in current_res]
plt.plot(range(len(crt_res)), crt_res, '.-k')
plt.xlim(0, n-1)
plt.xlabel('repetition')
plt.ylabel('error')
if len(crt_res) > 0:
if i < 100:
plt.ylim(np.min(crt_res) - 0.1, np.max(crt_res) + 0.1)
else:
plt.ylim(0, np.max(crt_res))
else:
plt.ylim(0, 1)
clear_output(wait=True)
if i < n:
display(plt.gcf())
if i == n:
self.float.close()
if self.print_last:
print(text)
# this defines the basic class used for running the rate-based simulations
class RateLearningSimulation(object):
A class that runs the rate-based simulation for several learning cycles.
def __init__(self, target, tmax, dt, n_conductor, n_student_per_output,
relaxation=400.0, relaxation_conductor=25.0,
tracker_generator=None, snapshot_generator=None,
conductor_burst_length=None,
conductor_from_table=None,
controller_mode='sum', controller_tau=25.0,
controller_mismatch_type='random', controller_mismatch_amount=0,
controller_error_map_function=None,
controller_mismatch_subdivide_by=1,
controller_nonlinearity=None,
tutor_rule_type='blackbox', tutor_rule_tau=0.0,
tutor_rule_gain=None, tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
tutor_rule_min_rate=0.0, tutor_rule_max_rate=160.0,
cs_weights_type='lognormal', cs_weights_params=(-3.57, 0.54),
cs_weights_scale=1.0, ts_weights=0.02,
plasticity_type='2exp',
plasticity_learning_rate=0.002, plasticity_params=(1.0, 0.0),
plasticity_taus=(80.0, 40.0), plasticity_constrain_positive=False):
Run the simulation for several learning cycles.
Arguments
---------
target: array (shape (Nmuscles, Nsteps))
Target output program.
tmax:
dt: float
Length and granularity of target program. `tmax` should be equal to `Nsteps * dt`,
where `Nsteps` is the number of columns of the `target` (see above).
n_conductor: int
Number of conductor neurons.
n_student_per_output: int
Number of student neurons per output channel. If `controller_mode` is not 'pushpull',
the actual number of student neurons is `n_student_per_output * Nmuscles`, where
`Nmuscles` is the number of rows of `target` (see above). If `controller_mode` is
`pushpull`, this is further multiplied by 2.
relaxation: float
Length of time that the simulation runs past the end of the `target`. This ensures
that all the contributions from the plasticity rule are considered.
relaxation_conductor: float
Length of time that the conductor fires past the end of the `target`. This is to avoid
excessive decay at the end of the program.
tracker_generator: callable
This function is called before every simulation run with the signature
`tracker_generator(simulator, i, n)`
where `simulator` is the object running the simulations (i.e., `self`), `i` is the
index of the current learning cycle, and `n` is the total number of learning cycles
that will be simulated. The function should return a dictionary of objects such as
`StateMonitor`s and `EventMonitor`s that track the system during the simulation.
These objects will be returned in the results output structure after the run (see
the `run` method).
snapshot_generator: callable
This can be a function or a pair of functions. If it is a single function, it is called
before every simulation run with the signature
`snapshot_generator(simulator, i, n)`
where `simulator` is the object running the simulations (i.e., `self`), `i` is the
index of the current learning cycle, and `n` is the total number of learning cycles
that will be simulated. The function should return a dictionary that will be appended
directly to the results output structure after the run (see the `run` method). This can
be used to make snapshots of various structures, such as the conductor--student weights,
during learning.
When this is a pair, both elements should be functions with the same signature as shown
above (or `None`). The first will be called before the simulation run, and the second
after.
conductor_burst_length: float, or None
Duration of conductor bursts. Set to `None` to have the next burst start where the previous
one ends.
conductor_from_table: None, or matrix (n_conductor x n_time_slices)
If not `None`, instead of using the `RateHVCLayer`, use the given table for the outputs
of conductor neurons as a function of time. Each column of the table corresponds to one
time 'slice', which has length given by `conductor_burst_length` (in ms).
controller_mode: str
The way in which the student--output weights should be initialized. This can be 'sum' or
'pushpull' (see `LinearController.__init__` for details).
controller_tau: float
Timescale for smoothing of output.
controller_mismatch_type: str
Method used to simulate credit assignment mismatch. Only possible option for now is
'random'.
controller_mismatch_amount: float
Fraction of student neurons whose output assignment is mismatched when the motor error
calculation is performed (this is used by the blackbox tutor rule).
controller_mismatch_subdivide_by: int
Number of subdivisions for each controller channel when performing the random mismatch.
Assignments between different subgroups can get shuffled as if they belonged to
different outputs.
controller_error_map_function: None, or function
If not `None`, use a (nonlinear) function to map the motor error into source error. This
can be used to handle non-quadratic loss functions (see `LinearController`).
controller_nonlinearity: None, or function
If not `None`, use a (nonlinear) function to map the weighted input to the output. This
can be used to implement linear-nonlinear controllers (see `LinearController`).
tutor_rule_type: str
Type of tutor rule to use. Currently this should be set to 'blackbox'.
tutor_rule_tau: float
Integration timescale for tutor signal (see `BlackboxTutorRule`).
tutor_rule_gain: float, or `None`
If not `None`, sets the gain for the blackbox tutor rule (see `BlackboxTutorRule`).
Either this or `tutor_rule_gain_per_student` should be non-`None`.
tutor_rule_gain_per_student: float, or `None`
If not `None`, the gain for the blackbox tutor rule (see `BlackboxTutorRule`) is set
proportional to the number of student neurons per output channel, `n_student_per_channel`.
tutor_rule_compress_rates: bool
Sets the `compress_rates` option for the blackbox tutor rule (see `BlackboxTutorRule`).
tutor_rule_min_rate: float
tutor_rule_max_rate: float
Sets the minimum and maximum rate for the tutor rule (see `BlackboxTutorRule`).
cs_weights_type: str
cs_weights_params:
Sets the way in which the conductor--student weights should be initialized. This can be
'zero': set the weights to zero
'constant': set all the weights equal to `cs_weights_params`
'normal': use Gaussian random variables, parameters (mean, st.dev.) given by
`cs_weights_params`
'lognormal': use log-normal random variables, parameters (mu, sigma) given by
`cs_weights_params`
cs_weights_scale: float
Set a scaling factor for all the conductor--student weights. This is applied after the
weights are calculated according to `cs_weights_type` and `cs_weights_params`.
ts_weights: float
The value of the tutor--student synaptic strength.
plasticity_type: str
Type of plasticity rule to use:
'2exp': use `TwoExponentialsPlasticity`
'exp_texp': use `SuperExponentialPlasticity`
plasticity_learning_rate: float
The learning rate of the plasticity rule (see `TwoExponentialsPlasticity`).
plasticity_params: (alpha, beta)
The parameters used by the plasticity rule (see `TwoExponentialsPlasticity`).
plasticity_taus: (tau1, tau2)
The timescales used by the plasticity rule (see `TwoExponentialsPlasticity`).
plasticity_constrain_positive: bool
Whether to keep conductor--student weights non-negative or not
(see `TwoExponentialsPlasticity`).
self.target = np.asarray(target)
self.tmax = float(tmax)
self.dt = float(dt)
self.n_muscles = len(self.target)
self.n_conductor = n_conductor
self.n_student_per_output = n_student_per_output
self.relaxation = relaxation
self.relaxation_conductor = relaxation_conductor
self.tracker_generator = tracker_generator
self.snapshot_generator = snapshot_generator
if not hasattr(self.snapshot_generator, '__len__'):
self.snapshot_generator = (self.snapshot_generator, None)
self.conductor_burst_length = conductor_burst_length
self.conductor_from_table = conductor_from_table
self.controller_mode = controller_mode
self.controller_tau = controller_tau
self.controller_mismatch_type = controller_mismatch_type
self.controller_mismatch_amount = controller_mismatch_amount
self.controller_mismatch_subdivide_by = controller_mismatch_subdivide_by
self.controller_error_map_function = controller_error_map_function
self.controller_nonlinearity = controller_nonlinearity
self.tutor_rule_type = tutor_rule_type
self.tutor_rule_tau = tutor_rule_tau
self.tutor_rule_gain = tutor_rule_gain
self.tutor_rule_gain_per_student = tutor_rule_gain_per_student
self.tutor_rule_compress_rates = tutor_rule_compress_rates
self.tutor_rule_min_rate = tutor_rule_min_rate
self.tutor_rule_max_rate = tutor_rule_max_rate
self.cs_weights_type = cs_weights_type
self.cs_weights_params = cs_weights_params
self.cs_weights_scale = cs_weights_scale
self.ts_weights = ts_weights
self.plasticity_type = plasticity_type
self.plasticity_learning_rate = plasticity_learning_rate
self.plasticity_params = plasticity_params
self.plasticity_taus = plasticity_taus
self.plasticity_constrain_positive = plasticity_constrain_positive
self.progress_indicator = ProgressBar(self)
self.setup()
def setup(self):
Create the components of the simulation.
# process some of the options
self.n_student = self.n_student_per_output*self.n_muscles
if self.controller_mode == 'pushpull':
self.n_student *= 2
if self.tutor_rule_gain is None:
self.tutor_rule_actual_gain = self.tutor_rule_gain_per_student*self.n_student_per_output
else:
self.tutor_rule_actual_gain = self.tutor_rule_gain
self.total_time = self.tmax + self.relaxation
self.stimes = np.arange(0, self.total_time, self.dt)
self._current_res = []
# build components
if self.conductor_from_table is None:
self.conductor = RateHVCLayer(self.n_conductor, burst_tmax=self.tmax+self.relaxation_conductor,
burst_length=self.conductor_burst_length)
else:
rep_count = (1 if self.conductor_burst_length is None else
int_r(self.conductor_burst_length/self.dt))
table = np.repeat(self.conductor_from_table, rep_count, axis=1)
self.conductor = TableLayer(table)
self.student = RateLayer(self.n_student)
self.motor = LinearController(self.student, self.target,
mode=self.controller_mode, tau=self.controller_tau)
if self.controller_mismatch_amount > 0:
if self.controller_mismatch_type != 'random':
raise Exception('Unknown controller_mismatch_type '+
str(self.controller_mismatch_type) + '.')
self.motor.set_random_permute_inverse(self.controller_mismatch_amount,
subdivide_by=self.controller_mismatch_subdivide_by)
self.motor.error_map_fct = self.controller_error_map_function
self.motor.nonlinearity = self.controller_nonlinearity
if self.tutor_rule_type != 'blackbox':
raise Exception('Unknown tutor_rule_type ' + str(self.tutor_rule_type) + '.')
self.tutor_rule = BlackboxTutorRule(self.motor, tau=self.tutor_rule_tau,
gain=self.tutor_rule_actual_gain,
compress_rates=self.tutor_rule_compress_rates,
min_rate=self.tutor_rule_min_rate,
max_rate=self.tutor_rule_max_rate)
# the blackbox tutor rule will wind down its activity during relaxation time
self.tutor_rule.relaxation = self.relaxation
# add synapses to student
self.student.add_source(self.conductor)
self.student.add_source(self.tutor_rule)
# generate the conductor--student weights
self.init_cs_weights()
# set tutor--student weights
self.student.Ws[1] = self.ts_weights*np.ones(self.n_student)
self.student.bias = -self.ts_weights*(self.tutor_rule_min_rate + self.tutor_rule_max_rate)/2.0
# initialize the plasticity rule
if self.plasticity_type == '2exp':
self.plasticity = TwoExponentialsPlasticity(
(self.conductor, self.student, self.student.Ws[0]), self.tutor_rule,
rate=self.plasticity_learning_rate,
alpha=self.plasticity_params[0], beta=self.plasticity_params[1],
tau1=self.plasticity_taus[0], tau2=self.plasticity_taus[1],
constrain_positive=self.plasticity_constrain_positive)
elif self.plasticity_type == 'exp_texp':
self.plasticity = SuperExponentialPlasticity(
(self.conductor, self.student, self.student.Ws[0]), self.tutor_rule,
rate=self.plasticity_learning_rate,
alpha=self.plasticity_params[0], beta=self.plasticity_params[1],
tau1=self.plasticity_taus[0], tau2=self.plasticity_taus[1],
constrain_positive=self.plasticity_constrain_positive)
else:
raise Exception('Unknown plasticity_type ' + str(self.plasticity_type) + '.')
def init_cs_weights(self):
Initialize conductor--student weights.
if self.cs_weights_type == 'zero':
self.student.Ws[0] = np.zeros((self.n_student, self.n_conductor))
elif self.cs_weights_type == 'constant':
self.student.Ws[0] = self.cs_weights_params*np.ones((self.n_student, self.n_conductor))
elif self.cs_weights_type == 'normal':
self.student.Ws[0] = (self.cs_weights_params[0] +
self.cs_weights_params[1]*np.random.randn(self.n_student, self.n_conductor))
elif self.cs_weights_type == 'lognormal':
self.student.Ws[0] = np.random.lognormal(*self.cs_weights_params,
size=(self.n_student, self.n_conductor))
self.student.Ws[0] *= self.cs_weights_scale
def run(self, n_runs):
Run the simulation for `n_runs` learning cycles.
This function intercepts `KeyboardInterrupt` exceptions and returns the results up to
the time of the keyboard intercept.
res = []
self._current_res = res
try:
for i in xrange(n_runs):
if self.progress_indicator is not None:
self.progress_indicator(i, n_runs)
# make the pre-run snapshots
if self.snapshot_generator[0] is not None:
snaps_pre = self.snapshot_generator[0](self, i, n_runs)
else:
snaps_pre = {}
# get the trackers
if self.tracker_generator is not None:
trackers = self.tracker_generator(self, i, n_runs)
if trackers is None:
trackers = {}
else:
trackers = {}
# no matter what, we will need an error tracker to calculate average error
M_merr = MotorErrorTracker(self.motor)
# create and run the simulation
sim = simulation.Simulation(self.conductor, self.student, self.tutor_rule,
self.motor, self.plasticity,
M_merr, *trackers.values(), dt=self.dt)
sim.run(self.total_time)
# make the post-run snapshots
if self.snapshot_generator[1] is not None:
snaps_post = self.snapshot_generator[1](self, i, n_runs)
else:
snaps_post = {}
crt_res = {'average_error': np.mean(M_merr.overall_error)}
crt_res.update(snaps_pre)
crt_res.update(snaps_post)
crt_res.update(trackers)
res.append(crt_res)
if self.progress_indicator is not None:
self.progress_indicator(n_runs, n_runs)
except KeyboardInterrupt:
pass
return res
# sometimes we need to run a set of simulations in which some parameters change
def simulate_many(constant_params, variable_params):
Run a set of simulations.
Note that each run must have 'target', 'tmax', 'dt', and 'n_reps' entries in the dictionary,
either coming from `constant_params` or from `variable_params`. The special entries
'graph_step' and 'show_graph' can be used to control the progress indicator for the
`RateLearningSimulation`s.
Arguments
---------
constant_params: dict
Dictionary holding those parameters that are constant for all simulations.
variable_params: array of dict
Array of dictionaries holding the parameters that change between different simulations.
This can have any number of dimensions, and the output will match its shape.
Returns
-------
res_array: array of dict
Array of results for each of the simulations. Each dict has three entries:
'params': a dictionary containing all the parameters for the simulation
'trace': the data resulting from that simulation, as returned by the
`RateLearningSimulation` object
'error_trace': the error trace (the 'average_error' values for all the entries in
`trace`)
This has the same shape as the `variable_params` argument.
# initialize the output
variable_params = np.asarray(variable_params)
res_array = np.empty(np.shape(variable_params), dtype='object')
n_sims = np.size(variable_params)
# display a progress bar
t0 = time.time()
bar = FloatProgress(min=0, max=100)
display(bar)
# process the data
for i in xrange(n_sims):
t = time.time()
t_diff = t - t0
text = 'simulation #{} ; time elapsed: {:.1f}s'.format(i, t_diff)
percentage = min(round(i*100.0/n_sims), 100)
bar.value = percentage
bar.description = text
current_params = dict(constant_params)
current_params.update(variable_params.ravel()[i])
current_target = current_params.pop('target')
current_tmax = current_params.pop('tmax')
current_dt = current_params.pop('dt')
current_n_reps = current_params.pop('n_reps')
current_graph_step = current_params.pop('graph_step', 50)
current_show_graph = current_params.pop('show_graph', True)
clear_output(wait=True)
plt.clf()
current_sim = RateLearningSimulation(
current_target, current_tmax, current_dt, **current_params)
current_sim.progress_indicator.graph_step = current_graph_step
current_sim.progress_indicator.show_graph = current_show_graph
current_sim.progress_indicator.print_last = False
current_res = current_sim.run(current_n_reps)
# don't keep the functions in the parameters
current_params.pop('tracker_generator', None)
current_params.pop('snapshot_generator', None)
# re-add n_reps, tmax, and dt; not target, because that can be too big
current_params['tmax'] = current_tmax
current_params['dt'] = current_dt
current_params['n_reps'] = current_n_reps
current_details = {
'params': current_params,
'trace': current_res,
'error_trace': np.asarray([_['average_error'] for _ in current_res])}
res_array.ravel()[i] = current_details
bar.value = 100.0
t = time.time()
t_diff = t - t0
bar.description = 'simulation done ; time elapsed: {:.1f}s'.format(t_diff)
return res_array
# tracker functions that will be used throughout the code
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['tutor'] = simulation.StateMonitor(simulator.tutor_rule, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
Generate some pre-run snapshots.
res = {}
res['weights'] = np.copy(simulator.student.Ws[0])
return res
Explanation: General definitions
Here we define some classes and functions that will be used to run all the simulations we are interested in.
End of explanation
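As a minimal usage sketch of the class defined above (the parameter values here are placeholders chosen only for illustration, not one of the runs analyzed below), construction alone already exposes a few derived quantities; calling run(n) would then return one dict with an 'average_error' entry per learning cycle:
# minimal usage sketch of RateLearningSimulation (illustrative parameters only)
demo_sim = RateLearningSimulation(target, tmax, dt,
                                  n_conductor=100, n_student_per_output=1,
                                  tutor_rule_tau=40.0)
print(demo_sim.n_student, demo_sim.tutor_rule_actual_gain, demo_sim.total_time)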
def generate_mismatch_data(tau_levels, n_reps, target, tmax, dt, **params):
Generate a matrix of results showing the effect of tutor-student mismatch.
The extra arguments give a dictionary of parameters that override the defaults. These are assumed
to be constant over all the simulations. Here the plasticity parameters are constrained
such that $alpha - beta = 1$.
# this is all we're tracking
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator(simulator, i, n):
res = {}
res['weights'] = np.copy(simulator.student.Ws[0])
return res
# update the default parameters using the arguments
default_params = dict(
show_graph=False,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None, controller_mode='sum', controller_tau=25.0,
tutor_rule_gain_per_student=0.5, tutor_rule_compress_rates=False,
tutor_rule_min_rate=0.0, tutor_rule_max_rate=160.0,
cs_weights_type='lognormal', cs_weights_params=(-3.57, 0.54), cs_weights_scale=200.0,
ts_weights=0.01,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_constrain_positive=False,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator
)
actual_params = dict(default_params)
actual_params.update(params)
# calculate the plasticity parameters for each tau
tau1, tau2 = actual_params['plasticity_taus']
# fix alpha - beta = 1
plasticity_levels = [(lambda _: (_, _-1))(float(tau - tau2)/(tau1 - tau2)) for tau in tau_levels]
actual_params['target'] = target
actual_params['tmax'] = tmax
actual_params['dt'] = dt
actual_params['n_reps'] = n_reps
n_levels = len(tau_levels)
return simulate_many(
actual_params,
[[dict(
tutor_rule_tau=current_tau_tutor,
plasticity_params=current_plastic
) for current_tau_tutor in tau_levels] for current_plastic in plasticity_levels]
)
Explanation: Create the data for the mismatch matrix
Some generic code
We first define a function that can calculate this for a variety of conditions, such as 'sum' or 'push pull' motor controllers, constraining conductor--student weights to be positive, etc. Then we use this function for all the cases of interest.
End of explanation
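For reference, the tau-to-(alpha, beta) mapping used above can be spelled out by hand: with the default kernel timescales (tau1, tau2) = (80, 40) ms and the constraint alpha - beta = 1, we get alpha = (tau - tau2)/(tau1 - tau2). A small worked example (added for illustration):
# worked example of the plasticity parameters implied by each matching timescale
tau1, tau2 = 80.0, 40.0
for tau in [10.0, 40.0, 80.0, 320.0]:
    alpha = float(tau - tau2)/(tau1 - tau2)
    beta = alpha - 1.0
    print('tau = {:5.0f} ms -> alpha = {:5.2f}, beta = {:5.2f}'.format(tau, alpha, beta))
This is consistent with the convergence runs later in the notebook, where tutor_rule_tau=40 is paired with plasticity_params=(0.0, -1.0) and tutor_rule_tau=320 with (7.0, 6.0).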
# reproducible randomness
np.random.seed(23782)
res_mismatch = generate_mismatch_data(10*2**np.arange(8), 250, target, tmax, dt)
file_name = 'save/rate_based_results_sum_log_8.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate mismatch matrix for 'sum' controller
End of explanation
# reproducible randomness
np.random.seed(23782)
res_mismatch = generate_mismatch_data(10*2**np.arange(12), 1000, target, tmax, dt,
relaxation=1200.0, relaxation_conductor=50.0)
file_name = 'save/rate_based_results_sum_log_12.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate bigger mismatch matrix for 'sum' controller
End of explanation
# reproducible randomness
np.random.seed(123134)
res_mismatch = generate_mismatch_data(10*2**np.arange(8), 250, target, tmax, dt,
controller_mode='pushpull')
file_name = 'save/rate_based_results_pushpull_log_8.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate mismatch matrix for 'pushpull' controller
End of explanation
# reproducible randomness
np.random.seed(34234)
res_mismatch = generate_mismatch_data(10*2**np.arange(8), 250, target, tmax, dt,
plasticity_constrain_positive=True)
file_name = 'save/rate_based_results_posweights_log_8.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate mismatch matrix when weights are constrained positive
With 'sum' controller.
End of explanation
# reproducible randomness
np.random.seed(4324)
res_mismatch = generate_mismatch_data(10*2**np.arange(8), 250, target, tmax, dt,
plasticity_taus=(20.0, 10.0))
file_name = 'save/rate_based_results_small_tau_log_8.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate mismatch matrix when $\tau_{1,2}$ are small
End of explanation
# reproducible randomness
np.random.seed(76476)
res_mismatch = generate_mismatch_data(10*2**np.arange(8), 250, target, tmax, dt,
plasticity_taus=(160.0, 80.0))
file_name = 'save/rate_based_results_large_tau_log_8.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate mismatch matrix when $\tau_{1,2}$ are large
End of explanation
def generate_credit_mismatch_data(rho_levels, n_reps, target, tmax, dt, **params):
Generate a vector of results showing the effect of credit assignment mismatch.
`rho_levels` gives the fraction of student neurons who output assignment will be
mismatched. The output will have the same shape as `rho_levels`. The extra arguments
give a dictionary of parameters that override the defaults. These are assumed to be
constant over all the simulations.
# this is all we're tracking
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
# update the default parameters using the arguments
default_params = dict(
show_graph=False,
n_conductor=100, n_student_per_output=40,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None, controller_mode='sum', controller_tau=25.0,
tutor_rule_gain_per_student=0.5, tutor_rule_compress_rates=False,
tutor_rule_tau=40.0,
tutor_rule_min_rate=0.0, tutor_rule_max_rate=160.0,
cs_weights_type='lognormal', cs_weights_params=(-3.57, 0.54), cs_weights_scale=200.0,
ts_weights=0.01,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(0.0, -1),
plasticity_constrain_positive=False,
tracker_generator=tracker_generator,
snapshot_generator=None
)
actual_params = dict(default_params)
actual_params.update(params)
actual_params['target'] = target
actual_params['tmax'] = tmax
actual_params['dt'] = dt
actual_params['n_reps'] = n_reps
return simulate_many(
actual_params,
[dict(
controller_mismatch_amount=current_rho
) for current_rho in rho_levels]
)
Explanation: Create credit mis-assignment data
Some generic code for credit mis-assignment
We first define a function that can calculate this for a variety of conditions, such as 'sum' or 'push pull' motor controllers, constraining conductor--student weights to be positive, etc. Then we use this function for all the cases of interest.
End of explanation
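For orientation, a rough added sketch using the defaults above: with 40 student neurons per output channel and two output channels, the student population is 80 for the 'sum' controller and 160 for 'pushpull', so a mismatch fraction rho reassigns roughly rho times that many neurons.
# rough bookkeeping for the credit-mismatch sweeps (sketch, approximate counts only)
n_muscles = 2
n_student_per_output = 40
for mode, factor in [('sum', 1), ('pushpull', 2)]:
    n_student = n_student_per_output*n_muscles*factor
    for rho in [0.1, 0.25, 0.5]:
        print('{:8s} rho={:.2f}: ~{:3d} mismatched student neurons'.format(
            mode, rho, int(round(rho*n_student))))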
# reproducible randomness
np.random.seed(23872)
res_mismatch = generate_credit_mismatch_data(np.arange(0.0, 1.025, 0.025), 250, target, tmax, dt,
controller_mode='sum', snapshot_generator=None)
file_name = 'save/rate_based_credit_results_sum.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate credit mismatch data for 'sum' controller
End of explanation
# reproducible randomness
np.random.seed(8372847)
res_mismatch = generate_credit_mismatch_data(np.arange(0.0, 1.025, 0.025), 250, target, tmax, dt,
controller_mode='pushpull')
file_name = 'save/rate_based_credit_results_pushpull.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate credit mismatch data for 'pushpull' controller, no subdivision
End of explanation
# reproducible randomness
np.random.seed(8372847)
res_mismatch = generate_credit_mismatch_data(np.arange(0.0, 1.025, 0.025), 250, target, tmax, dt,
controller_mode='pushpull',
controller_mismatch_subdivide_by=2)
file_name = 'save/rate_based_credit_results_pushpull_subdiv.pkl'
if not os.path.exists(file_name):
with open(file_name, 'wb') as out:
pickle.dump({'res_mismatch': res_mismatch, 'target': target, 'tmax': tmax, 'dt': dt}, out, 2)
else:
raise Exception('File exists!')
Explanation: Generate credit mismatch data for 'pushpull' controller, subdivide by 2
Here we allow the excitatory and inhibitory contributions within the same output channel to be mismatched.
End of explanation
def compare_traces(res1, res2, target, colors=[[0.200, 0.357, 0.400], [0.831, 0.333, 0.000]],
labels=None, ymax=None, inset_ymax=None,
inset_label_size=8, inset_legend_pos=(1.1, 1.1)):
Make a plot comparing two convergence traces.
This shows the evolution of the error in the main plot, and a comparison of the final
output in an inset.
if labels is None:
labels = [None, None]
plt.plot([_['average_error'] for _ in res1], c=colors[0], label=labels[0])
plt.plot([_['average_error'] for _ in res2], c=colors[1], label=labels[1])
plt.xlabel('repetition')
plt.ylabel('error')
if ymax is not None:
plt.ylim(0, ymax)
else:
plt.ylim(0, plt.ylim()[1])
# if any(_ is not None for _ in labels):
# plt.legend()
inax = plt.axes([.4, .4, .4, .4])
motor1 = res1[-1]['motor']
motor2 = res2[-1]['motor']
nsteps = np.shape(target)[1]
times = motor1.t[:nsteps]
inax.plot(times, target[0], ':k', lw=4, label='target')
inax.spines['right'].set_color('none')
inax.spines['top'].set_color('none')
inax.plot(times, motor1.out[0, :nsteps], c=colors[0], label=labels[0])
inax.plot(times, motor2.out[0, :nsteps], c=colors[1], label=labels[1])
inax.set_xlabel('time')
inax.set_ylabel('output')
inax.set_xticks([])
inax.set_yticks([])
if any(_ is not None for _ in labels):
inax.legend(bbox_to_anchor=inset_legend_pos, fontsize=inset_label_size)
if inset_ymax is None:
inax.set_ylim(0, inax.get_ylim()[1])
else:
inax.set_ylim(0, inset_ymax)
Explanation: Make figures
End of explanation
file_name = 'save/rate_based_results_sum_log_8.pkl'
with open(file_name, 'rb') as inp:
res_mismatch = pickle.load(inp)['res_mismatch']
make_heatmap_plot(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_heatmap_sum_log_8', png=False)
make_convergence_map(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_convmap_sum_log_8', png=False)
Explanation: Tutor-student mismatch heatmap and convergence map
Mismatch 'sum' controller
End of explanation
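The plotting helpers `make_heatmap_plot` and `make_convergence_map` come from `helpers`; as a hand-rolled cross-check (an added sketch, mirroring the diagonal extraction done below for the larger matrix), the final-error matrix behind the heatmap can also be pulled out directly:
# inspect the raw final errors behind the heatmap (sketch)
final_errors = np.asarray([[run['error_trace'][-1] for run in row] for row in res_mismatch])
tau_levels = np.asarray([run['params']['tutor_rule_tau'] for run in res_mismatch[0]])
print(tau_levels)
print(np.round(final_errors, 1))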
file_name = 'save/rate_based_results_sum_log_12.pkl'
with open(file_name, 'rb') as inp:
res_mismatch = pickle.load(inp)['res_mismatch']
make_heatmap_plot(res_mismatch, vmin=0.5, vmax=10)
safe_save_fig('figs/ratebased_mismatch_heatmap_sum_log_12', png=False)
make_convergence_map(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_convmap_sum_log_12', png=False)
error_matrix = np.asarray([[_['error_trace'][-1] for _ in crt_res] for crt_res in res_mismatch])
error_matrix[~np.isfinite(error_matrix)] = np.inf
tau_levels = np.asarray([_['params']['tutor_rule_tau'] for _ in res_mismatch[0]])
plt.semilogx(tau_levels, np.diag(error_matrix), '.-k')
Explanation: Mismatch 'sum' controller, bigger matrix
End of explanation
file_name = 'save/rate_based_results_pushpull_log_8.pkl'
with open(file_name, 'rb') as inp:
res_mismatch = pickle.load(inp)['res_mismatch']
make_heatmap_plot(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_heatmap_pushpull_log_8', png=False)
make_convergence_map(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_convmap_pushpull_log_8', png=False)
Explanation: Mismatch 'pushpull' controller
End of explanation
file_name = 'save/rate_based_results_posweights_log_8.pkl'
with open(file_name, 'rb') as inp:
res_mismatch = pickle.load(inp)['res_mismatch']
make_heatmap_plot(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_heatmap_posweights_log_8', png=False)
make_convergence_map(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_convmap_posweights_log_8', png=False)
Explanation: Mismatch positive weights
End of explanation
file_name = 'save/rate_based_results_small_tau_log_8.pkl'
with open(file_name, 'rb') as inp:
res_mismatch = pickle.load(inp)['res_mismatch']
make_heatmap_plot(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_heatmap_small_tau_log_8', png=False)
make_convergence_map(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_convmap_small_tau_log_8', png=False)
Explanation: Mismatch small $\tau_{1,2}$
End of explanation
file_name = 'save/rate_based_results_large_tau_log_8.pkl'
with open(file_name, 'rb') as inp:
res_mismatch = pickle.load(inp)['res_mismatch']
make_heatmap_plot(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_heatmap_large_tau_log_8', png=False)
make_convergence_map(res_mismatch)
safe_save_fig('figs/ratebased_mismatch_convmap_large_tau_log_8', png=False)
Explanation: Mismatch large $\tau_{1,2}$
End of explanation
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
Generate some pre-run snapshots.
res = {}
res['weights'] = np.copy(simulator.student.Ws[0])
return res
Explanation: Convergence plots
End of explanation
# keep things arbitrary but reproducible
np.random.seed(32292)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(0.0, -1.0),
tutor_rule_tau=40.0)
res = simulator.run(250)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(res, plt.gca(), target_lw=2, extra_traces=[0, 4, 8],
extra_colors=[[0.831, 0.333, 0.000, 0.25]], inset=True,
alpha=simulator.plasticity.alpha, beta=simulator.plasticity.beta,
tau_tutor=simulator.tutor_rule.tau, target=target)
axs[0].set_ylim(0, 15);
safe_save_fig('figs/ratebased_convergence_plot_sum_small_tau', png=False)
make_convergence_movie('figs/ratebased_convergence_movie_sum_small_tau.mov',
res, target, idxs=range(0, 250), length=5.0,
ymax=80.0)
Explanation: Convergence 'sum' controller
Short timescale (convergence 'sum')
End of explanation
# keep things arbitrary but reproducible
np.random.seed(32290)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(7.0, 6.0),
tutor_rule_tau=320.0)
res = simulator.run(250)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(res, plt.gca(), target_lw=2, extra_traces=[0, 4, 8],
extra_colors=[[0.831, 0.333, 0.000, 0.25]], inset=True,
alpha=simulator.plasticity.alpha, beta=simulator.plasticity.beta,
tau_tutor=simulator.tutor_rule.tau, target=target)
axs[0].set_ylim(0, 15);
safe_save_fig('figs/ratebased_convergence_plot_sum_large_tau')
make_convergence_movie('figs/ratebased_convergence_movie_sum_large_tau.mov',
res, target, idxs=range(0, 250), length=5.0,
ymax=80.0)
Explanation: Long timescale (convergence 'sum')
End of explanation
# keep things arbitrary but reproducible
np.random.seed(22292)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='pushpull',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(0.0, -1.0),
tutor_rule_tau=40.0)
res = simulator.run(250)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(res, plt.gca(), target_lw=2, extra_traces=[0, 4, 8],
extra_colors=[[0.831, 0.333, 0.000, 0.25]], inset=True,
alpha=simulator.plasticity.alpha, beta=simulator.plasticity.beta,
tau_tutor=simulator.tutor_rule.tau, target=target)
axs[0].set_ylim(0, 15);
safe_save_fig('figs/ratebased_convergence_plot_pushpull_small_tau')
make_convergence_movie('figs/ratebased_convergence_movie_pushpull_small_tau.mov',
res, target, idxs=range(0, 250), length=5.0,
ymax=80.0)
Explanation: Convergence 'pushpull' controller
Short timescale (convergence 'pushpull')
End of explanation
# keep things arbitrary but reproducible
np.random.seed(32290)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='pushpull',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(7.0, 6.0),
tutor_rule_tau=320.0)
res = simulator.run(250)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(res, plt.gca(), target_lw=2, extra_traces=[0, 4, 8],
extra_colors=[[0.831, 0.333, 0.000, 0.25]], inset=True,
inset_pos=[0.45, 0.45, 0.4, 0.4],
alpha=simulator.plasticity.alpha, beta=simulator.plasticity.beta,
tau_tutor=simulator.tutor_rule.tau, target=target)
axs[0].set_ylim(0, 15);
safe_save_fig('figs/ratebased_convergence_plot_pushpull_large_tau')
make_convergence_movie('figs/ratebased_convergence_movie_pushpull_large_tau.mov',
res, target, idxs=range(0, 250), length=5.0,
ymax=80.0)
Explanation: Long timescale (convergence 'pushpull')
End of explanation
# keep things arbitrary but reproducible
np.random.seed(212994)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(0.0, -1.0),
tutor_rule_tau=40.0)
res_sum = simulator.run(250)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='pushpull',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(0.0, -1.0),
tutor_rule_tau=40.0)
res_pushpull = simulator.run(250)
plt.figure(figsize=(6, 4))
compare_traces(res_sum, res_pushpull, target, labels=['sum', 'push-pull'], ymax=20, inset_ymax=80)
safe_save_fig('figs/ratebased_sum_vs_pushpull_short_tau', png=False)
Explanation: Convergence comparison 'sum' vs. 'pushpull'
'Sum' vs 'pushpull' short timescale
End of explanation
# keep things arbitrary but reproducible
np.random.seed(202094)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(7.0, 6.0),
tutor_rule_tau=320.0)
res_sum = simulator.run(250)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='pushpull',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(7.0, 6.0),
tutor_rule_tau=320.0)
res_pushpull = simulator.run(250)
plt.figure(figsize=(6, 4))
compare_traces(res_sum, res_pushpull, target, labels=['sum', 'push-pull'], ymax=20, inset_ymax=80)
safe_save_fig('figs/ratebased_sum_vs_pushpull_long_tau', png=False)
Explanation: 'Sum' vs 'pushpull' long timescale
End of explanation
# keep things arbitrary but reproducible
np.random.seed(8735)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(0.0, -1.0),
tutor_rule_tau=40.0)
res = simulator.run(250)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=True,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(0.0, -1.0),
tutor_rule_tau=40.0)
res_fc = simulator.run(250)
plt.figure(figsize=(3, 2))
compare_traces(res, res_fc, target, labels=['no constraint', 'with constraint'], ymax=20, inset_ymax=80,
inset_label_size=6, inset_legend_pos=(1.2, 1.1))
#safe_save_fig('figs/ratebased_rate_constraint_short_tau', png=False)
plt.figure(figsize=(4, 3))
compare_traces(res, res_fc, target, labels=['no constraint', 'with constraint'], ymax=20, inset_ymax=80)
safe_save_fig('figs/ratebased_rate_constraint_short_tau_square', png=False)
make_convergence_movie('figs/ratebased_rate_constraint_movie_short_tau.mov',
(res, res_fc), target, idxs=range(0, 250), length=5.0,
ymax=80.0, labels=['no constraint', 'with constraint'])
Explanation: Effect of firing rate constraint
Firing rate constraint short timescale
End of explanation
# keep things arbitrary but reproducible
np.random.seed(202094)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(7.0, 6.0),
tutor_rule_tau=320.0)
res = simulator.run(250)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=True,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(7.0, 6.0),
tutor_rule_tau=320.0)
res_fc = simulator.run(250)
plt.figure(figsize=(3, 2))
compare_traces(res, res_fc, target, labels=['no constraint', 'with constraint'], ymax=20, inset_ymax=80,
inset_label_size=6, inset_legend_pos=(1.2, 1.1))
safe_save_fig('figs/ratebased_rate_constraint_long_tau', png=False)
Explanation: Firing rate constraint long timescale
End of explanation
# keep things arbitrary but reproducible
np.random.seed(2294)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='pushpull',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=True,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(24.0, 23.0),
tutor_rule_tau=1000.0)
res_fc = simulator.run(250)
fig, axs = plt.subplots(1, 4, figsize=(6, 1.75))
show_program_development(res_fc, axs, stages=[0, 16, 33, 249], ymax=80, target=target)
fig.tight_layout()
safe_save_fig('figs/ratebased_rate_constraint_pushpull_stepwise', png=False)
Explanation: Stepwise learning with 'pushpull' controller
End of explanation
# keep things arbitrary but reproducible
np.random.seed(202094)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=True,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(24.0, 23.0),
tutor_rule_tau=1000.0)
res_fc = simulator.run(250)
fig, axs = plt.subplots(1, 4, figsize=(6, 1.75))
show_program_development(res_fc, axs, stages=[0, 16, 33, 249], ymax=80, target=target)
fig.tight_layout()
safe_save_fig('figs/ratebased_rate_constraint_stepwise', png=False)
fig, axs = plt.subplots(3, 1, figsize=(2, 4))
show_program_development(res_fc, axs, stages=[0, 16, 33], ymax=80, target=target, bbox_pos=(1.15, 1.15))
fig.tight_layout()
safe_save_fig('figs/ratebased_rate_constraint_stepwise_vertical', png=False)
make_convergence_movie('figs/ratebased_rate_constraint_stepwise_movie.mov',
res_fc, target, idxs=range(0, 200), length=10.0,
ymax=80.0)
Explanation: Stepwise learning
End of explanation
def make_credit_plot(res_mismatch):
rho_levels = [_['params']['controller_mismatch_amount'] for _ in res_mismatch]
final_error = [_['error_trace'][-1] for _ in res_mismatch]
plt.figure(figsize=(2.5, 2.25))
plt.plot(rho_levels, final_error, '.-', color=[0.200, 0.357, 0.400])
plt.grid(True)
plt.xlabel('fraction mismatch')
plt.ylabel('final error');
Explanation: Credit misassignment figures
Credit misassignment figures definitions
End of explanation
file_name = 'save/rate_based_credit_results_sum.pkl'
with open(file_name, 'rb') as inp:
inp_dict = pickle.load(inp)
res_mismatch = inp_dict['res_mismatch']
target = inp_dict['target']
tmax = inp_dict['tmax']
dt = inp_dict['dt']
make_credit_plot(res_mismatch)
plt.xlim(0, 0.5)
plt.ylim(0, 10)
safe_save_fig('figs/ratebased_credit_mismatch_sum', png=False)
Explanation: Credit mismatch figures for 'sum' controller
End of explanation
file_name = 'save/rate_based_credit_results_pushpull.pkl'
with open(file_name, 'rb') as inp:
inp_dict = pickle.load(inp)
res_mismatch = inp_dict['res_mismatch']
target = inp_dict['target']
tmax = inp_dict['tmax']
dt = inp_dict['dt']
make_credit_plot(res_mismatch)
plt.ylim(0, 10)
safe_save_fig('figs/ratebased_credit_mismatch_pushpull', png=False)
Explanation: Credit mismatch figures for 'pushpull' controller with no subdivision
End of explanation
file_name = 'save/rate_based_credit_results_pushpull_subdiv.pkl'
with open(file_name, 'rb') as inp:
inp_dict = pickle.load(inp)
res_mismatch = inp_dict['res_mismatch']
target = inp_dict['target']
tmax = inp_dict['tmax']
dt = inp_dict['dt']
make_credit_plot(res_mismatch)
plt.ylim(0, 10)
safe_save_fig('figs/ratebased_credit_mismatch_pushpull_subdiv', png=False)
Explanation: Credit mismatch figures for 'pushpull' controller with subdivision by 2
End of explanation
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
if i < 10 or i%10 == 0 or n - i <= 10:
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['tutor'] = simulation.StateMonitor(simulator.tutor_rule, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
Generate some pre-run snapshots.
res = {}
if i < 10 or i%10 == 0 or n - i <= 10:
res['weights'] = np.copy(simulator.student.Ws[0])
return res
# keep things arbitrary but reproducible
np.random.seed(32234)
# average fraction of conductor neurons active at any given time
conductor_sparsity = 0.1
# size of burst
conductor_timescale = 30.0 # ms
n_conductor = 100
# generate a conductor activity pattern
conductor_steps = int_r(tmax/conductor_timescale)
conductor_pattern = np.zeros((n_conductor, conductor_steps))
conductor_pattern[np.random.rand(*conductor_pattern.shape) < conductor_sparsity] = 1.0/(
conductor_sparsity * float(n_conductor))
conductor_pattern = np.repeat(conductor_pattern, int_r(conductor_timescale/dt), axis=1)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=n_conductor, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=dt,
conductor_from_table=conductor_pattern,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='pushpull',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=True,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(5.0, 4.0),
tutor_rule_tau=240.0)
res = simulator.run(5000)
plt.figure(figsize=(3, 2))
idx = 0
sel_cond_out = res[idx]['conductor'].out[::5]
draw_multi_traces(res[idx]['conductor'].t, sel_cond_out,
color_fct=lambda i: ('k', [0.200, 0.357, 0.400])[i%2], edge_factor=1.4,
fill_alpha=0.5)
plt.xlim(0, tmax);
plt.yticks(plt.yticks()[0][::2], range(1, len(sel_cond_out), 2))
plt.xlabel('time (ms)')
plt.ylabel('conductor neuron index')
safe_save_fig('figs/ratebased_alt_conductor_pattern', png=False)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(res[:3001], plt.gca(), target_lw=2,
extra_colors=[[0.831, 0.333, 0.000, 0.25]], inset=True,
alpha=simulator.plasticity.alpha, beta=simulator.plasticity.beta,
tau_tutor=simulator.tutor_rule.tau, target=target,
inset_pos=[0.43, 0.4, 0.45, 0.45])
axs[0].set_ylim(0, 15);
safe_save_fig('figs/ratebased_alt_conductor_convergence', png=False)
make_convergence_movie('figs/ratebased_convergence_movie_alt_conductor.mov',
res, target, idxs=range(0, 3000), length=12,
ymax=80.0)
Explanation: Non-HVC-like conductor
Here we run some simulations in which the conductor can fire arbitrary patterns, and is not restricted to HVC-like firing in which each neuron fires a single burst during the whole duration of the motor program.
End of explanation
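A quick added sanity check on the random conductor table generated above: a fraction `conductor_sparsity` of the entries should be active, and the active entries are scaled so that each time slice sums to about one on average.
# sanity check on the arbitrary (non-HVC-like) conductor pattern (sketch)
print('active fraction: {:.3f}'.format(np.mean(conductor_pattern > 0)))
print('mean column sum: {:.3f}'.format(conductor_pattern.sum(axis=0).mean()))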
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['tutor'] = simulation.StateMonitor(simulator.tutor_rule, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
Generate some pre-run snapshots.
res = {}
res['weights'] = np.copy(simulator.student.Ws[0])
return res
# keep things arbitrary but reproducible
np.random.seed(32292)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
controller_mode='sum',
controller_nonlinearity=lambda v:
100.0*np.exp((v-100.0)/50.0)/(1 + np.exp((v-100.0)/50.0)),
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
plasticity_learning_rate=0.001,
plasticity_taus=(80.0, 40.0),
plasticity_params=(24.0, 23.0),
tutor_rule_tau=1000.0)
res = simulator.run(500)
plot_evolution(res, target, dt)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(res, plt.gca(), target_lw=2, extra_traces=[0, 10, 20],
extra_colors=[[0.831, 0.333, 0.000, 0.25]], inset=True,
alpha=simulator.plasticity.alpha, beta=simulator.plasticity.beta,
tau_tutor=simulator.tutor_rule.tau, target=target,
inset_pos=[0.4, 0.4, 0.45, 0.45])
axs[0].set_ylim(0, 15);
safe_save_fig('figs/ratebased_convergence_plot_sigmoidal_controller', png=False)
make_convergence_movie('figs/ratebased_convergence_movie_sigmoidal_controller.mov',
res, target, idxs=range(0, 500), length=10.0,
ymax=80.0)
Explanation: Non-linear (but monotonic) student--output relation
End of explanation
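The controller nonlinearity passed in above is a logistic function centered at 100 with slope scale 50 and saturating at 100; a small plotting sketch (added for illustration, reusing the same expression):
# the sigmoidal student-to-output nonlinearity used in the run above (sketch)
v = np.linspace(0.0, 300.0, 301)
f = 100.0*np.exp((v - 100.0)/50.0)/(1 + np.exp((v - 100.0)/50.0))
plt.figure(figsize=(3, 2))
plt.plot(v, f, c=[0.200, 0.357, 0.400])
plt.xlabel('summed student input')
plt.ylabel('controller output');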
def tracker_generator(simulator, i, n):
Generate some trackers.
res = {}
res['motor'] = simulation.StateMonitor(simulator.motor, 'out')
res['tutor'] = simulation.StateMonitor(simulator.tutor_rule, 'out')
res['conductor'] = simulation.StateMonitor(simulator.conductor, 'out')
res['student'] = simulation.StateMonitor(simulator.student, 'out')
return res
def snapshot_generator_pre(simulator, i, n):
Generate some pre-run snapshots.
res = {}
res['weights'] = np.copy(simulator.student.Ws[0])
return res
# keep things arbitrary but reproducible
np.random.seed(32293)
simulator = RateLearningSimulation(target, tmax, dt,
n_conductor=100, n_student_per_output=1,
relaxation=400.0, relaxation_conductor=25.0,
conductor_burst_length=None,
tracker_generator=tracker_generator,
snapshot_generator=snapshot_generator_pre,
# controller_mode='pushpull',
controller_mode='sum',
tutor_rule_gain_per_student=0.5,
tutor_rule_compress_rates=False,
cs_weights_scale=200.0, ts_weights=0.01,
plasticity_constrain_positive=False,
# plasticity_learning_rate=0.002,
plasticity_type='exp_texp',
plasticity_learning_rate=0.001,
plasticity_taus=(40.0, 40.0),
plasticity_params=(24.0, 23.0),
tutor_rule_tau=1000.0)
res = simulator.run(200)
fig = plt.figure(figsize=(3, 2))
axs = draw_convergence_plot(res, plt.gca(), target_lw=2, #extra_traces=[0, 4, 8],
extra_colors=[[0.831, 0.333, 0.000, 0.25]], inset=True,
alpha=simulator.plasticity.alpha, beta=simulator.plasticity.beta,
tau_tutor=simulator.tutor_rule.tau, target=target)
axs[0].set_ylim(0, 15);
safe_save_fig('figs/ratebased_convergence_plot_alt_kernel', png=False)
make_convergence_movie('figs/ratebased_convergence_movie_alt_kernel.mov',
res, target, idxs=range(0, 200), length=4.0,
ymax=80.0)
Explanation: Convergence with different kernels
End of explanation |
9,027 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
RNNs tutorial
Step1: An LSTM/RNN overview
Step2: Note that when we create the builder, it adds the internal RNN parameters to the ParameterCollection.
We do not need to care about them, but they will be optimized together with the rest of the network's parameters.
Step3: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
If we want access to all the hidden states (the outputs of both the first and the last layers), we can use the .h() method, which returns a list of expressions, one for each layer
Step4: The same interface that we saw until now for the LSTM, holds also for the Simple RNN
Step5: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it
Step6: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h.
Extra options in the RNN/LSTM interface
Stack LSTM The RNN's are shaped as a stack
Step7: Aside
Step8: This is convenient.
What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases
we can use the transduce(xs) method instead of add_inputs(xs).
transduce takes in a sequence of Expressions, and returns a sequence of Expressions.
As a consequence of not returning RNNStates, transduce is much more memory efficient than add_inputs or a series of calls to add_input.
Step9: Character-level LSTM
Now that we know the basics of RNNs, let's build a character-level LSTM language-model.
We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character.
Step10: Notice that
Step11: The model seems to learn the sentence quite well.
Somewhat surprisingly, the Simple-RNN model learns quicker than the LSTM! (If you increase the number of layers, this difference will be even more pronounced)
How can that be?
The answer is that we are cheating a bit. The sentence we are trying to learn
has each letter-bigram exactly once. This means a simple trigram model can memorize
it very well.
Try it out with more complex sequences. | Python Code:
# we assume that we have the dynet module in your path.
import dynet as dy
Explanation: RNNs tutorial
End of explanation
pc = dy.ParameterCollection()
NUM_LAYERS=2
INPUT_DIM=50
HIDDEN_DIM=10
builder = dy.LSTMBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
# or:
# builder = dy.SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
Explanation: An LSTM/RNN overview:
A (1-layer) RNN can be thought of as a sequence of cells, $h_1,...,h_k$, where the index $i$ in $h_i$ runs over the time dimension.
Each cell $h_i$ has an input $x_i$ and an output $r_i$. In addition to $x_i$, cell $h_i$ also receives $r_{i-1}$ as input.
In a deep (multi-layer) RNN, we don't have a sequence, but a grid. That is we have several layers of sequences:
$h_1^3,...,h_k^3$
$h_1^2,...,h_k^2$
$h_1^1,...,h_k^1$,
Let $r_i^j$ be the output of cell $h_i^j$. Then:
The input to $h_i^1$ is $x_i$ and $r_{i-1}^1$.
The input to $h_i^2$ is $r_i^1$ and $r_{i-1}^2$,
and so on.
The LSTM (RNN) Interface
RNN / LSTM / GRU follow the same interface. We have a "builder" which is in charge of creating and defining the parameters for the sequence.
End of explanation
s0 = builder.initial_state()
x1 = dy.vecInput(INPUT_DIM)
s1=s0.add_input(x1)
y1 = s1.output()
# here, we add x1 to the RNN, and the output we get from the top is y1 (a HIDDEN_DIM-dim vector)
y1.npvalue().shape
s2=s1.add_input(x1) # we can add another input
y2=s2.output()
Explanation: Note that when we create the builder, it adds the internal RNN parameters to the ParameterCollection.
We do not need to care about them, but they will be optimized together with the rest of the network's parameters.
End of explanation
print(s2.h())
Explanation: If our LSTM/RNN was one layer deep, y2 would be equal to the hidden state. However, since it is 2 layers deep, y2 is only the hidden state (= output) of the last layer.
If we want access to all the hidden states (the outputs of both the first and the last layers), we can use the .h() method, which returns a list of expressions, one for each layer:
End of explanation
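# A quick sanity check (an illustrative sketch, assuming the cells above were run):
# .h() returns one output expression per layer, bottom to top, and .output()
# is simply the hidden state of the top layer.
import numpy as np
assert len(s2.h()) == NUM_LAYERS
assert np.allclose(s2.h()[-1].npvalue(), s2.output().npvalue())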
# create a simple rnn builder
rnnbuilder=dy.SimpleRNNBuilder(NUM_LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
# initialize a new graph, and a new sequence
rs0 = rnnbuilder.initial_state()
# add inputs
rs1 = rs0.add_input(x1)
ry1 = rs1.output()
print("all layers:", s1.h())
print(s1.s())
Explanation: The same interface that we saw until now for the LSTM, holds also for the Simple RNN:
End of explanation
rnn_h = rs1.h()
rnn_s = rs1.s()
print("RNN h:", rnn_h)
print("RNN s:", rnn_s)
lstm_h = s1.h()
lstm_s = s1.s()
print("LSTM h:", lstm_h)
print("LSTM s:", lstm_s
)
Explanation: To summarize, when calling .add_input(x) on an RNNState what happens is that the state creates a new RNN/LSTM column, passing it:
1. the state of the current RNN column
2. the input x
The state is then returned, and we can call its output() method to get the output y, which is the output at the top of the column. We can access the outputs of all the layers (not only the last one) using the .h() method of the state.
.s() The internal state of the RNN may be more involved than just the outputs $h$. This is the case for the LSTM, which keeps an extra "memory" cell that is used when calculating $h$ and which is also passed to the next column. To access the entire hidden state, we use the .s() method.
The output of .s() differs by the type of RNN being used. For the simple-RNN, it is the same as .h(). For the LSTM, it is more involved.
End of explanation
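# An illustrative check of that difference (a sketch, using the states defined above):
# for the simple RNN, .s() is just .h(), one expression per layer;
# for the LSTM, .s() also carries the memory cells: (c_1..c_L, h_1..h_L).
assert len(rs1.s()) == NUM_LAYERS
assert len(s1.s()) == 2 * NUM_LAYERS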
s2=s1.add_input(x1)
s3=s2.add_input(x1)
s4=s3.add_input(x1)
# let's continue s3 with a new input.
s5=s3.add_input(x1)
# we now have two different sequences:
# s0,s1,s2,s3,s4
# s0,s1,s2,s3,s5
# the two sequences share parameters.
assert(s5.prev() == s3)
assert(s4.prev() == s3)
s6=s3.prev().add_input(x1)
# we now have an additional sequence:
# s0,s1,s2,s6
s6.h()
s6.s()
Explanation: As we can see, the LSTM has two extra state expressions (one for each hidden layer) before the outputs h.
Extra options in the RNN/LSTM interface
Stack LSTM The RNN's are shaped as a stack: we can remove the top and continue from the previous state.
This is done either by remembering the previous state and continuing from it with a new .add_input(), or by using the .prev() method of a state to access the state that preceded it.
Initializing a new sequence with a given state When we call builder.initial_state(), we are assuming the state has a random / zero initialization. If we want, we can specify a list of expressions that will serve as the initial state. The expected format is the same as the result of a call to .final_s(). TODO: this is not supported yet.
End of explanation
state = rnnbuilder.initial_state()
xs = [x1,x1,x1]
states = state.add_inputs(xs)
outputs = [s.output() for s in states]
hs = [s.h() for s in states]
print(outputs, hs)
Explanation: Aside: memory efficient transduction
The RNNState interface is convenient, and allows for incremental input construction.
However, sometimes we know the sequence of inputs in advance, and care only about the sequence of
output expressions. In this case, we can use the add_inputs(xs) method, where xs is a list of Expression.
End of explanation
state = rnnbuilder.initial_state()
xs = [x1,x1,x1]
outputs = state.transduce(xs)
print(outputs)
Explanation: This is convenient.
What if we do not care about .s() and .h(), and do not need to access the previous vectors? In such cases
we can use the transduce(xs) method instead of add_inputs(xs).
transduce takes in a sequence of Expressions, and returns a sequence of Expressions.
As a consequence of not returning RNNStates, transduce is much more memory efficient than add_inputs or a series of calls to add_input.
End of explanation
import random
from collections import defaultdict
from itertools import count
import sys
LAYERS = 1
INPUT_DIM = 50
HIDDEN_DIM = 50
characters = list("abcdefghijklmnopqrstuvwxyz ")
characters.append("<EOS>")
int2char = list(characters)
char2int = {c:i for i,c in enumerate(characters)}
VOCAB_SIZE = len(characters)
pc = dy.ParameterCollection()
srnn = dy.SimpleRNNBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
lstm = dy.LSTMBuilder(LAYERS, INPUT_DIM, HIDDEN_DIM, pc)
# add parameters for the hidden->output part for both lstm and srnn
params_lstm = {}
params_srnn = {}
for params in [params_lstm, params_srnn]:
params["lookup"] = pc.add_lookup_parameters((VOCAB_SIZE, INPUT_DIM))
params["R"] = pc.add_parameters((VOCAB_SIZE, HIDDEN_DIM))
params["bias"] = pc.add_parameters((VOCAB_SIZE))
# return compute loss of RNN for one sentence
def do_one_sentence(rnn, params, sentence):
# setup the sentence
dy.renew_cg()
s0 = rnn.initial_state()
R = params["R"]
bias = params["bias"]
lookup = params["lookup"]
sentence = ["<EOS>"] + list(sentence) + ["<EOS>"]
sentence = [char2int[c] for c in sentence]
s = s0
loss = []
for char,next_char in zip(sentence,sentence[1:]):
s = s.add_input(lookup[char])
probs = dy.softmax(R*s.output() + bias)
loss.append( -dy.log(dy.pick(probs,next_char)) )
loss = dy.esum(loss)
return loss
# generate from model:
def generate(rnn, params):
def sample(probs):
rnd = random.random()
for i,p in enumerate(probs):
rnd -= p
if rnd <= 0: break
return i
# setup the sentence
dy.renew_cg()
s0 = rnn.initial_state()
R = params["R"]
bias = params["bias"]
lookup = params["lookup"]
s = s0.add_input(lookup[char2int["<EOS>"]])
out=[]
while True:
probs = dy.softmax(R*s.output() + bias)
probs = probs.vec_value()
next_char = sample(probs)
out.append(int2char[next_char])
if out[-1] == "<EOS>": break
s = s.add_input(lookup[next_char])
return "".join(out[:-1]) # strip the <EOS>
# train, and generate every 5 samples
def train(rnn, params, sentence):
trainer = dy.SimpleSGDTrainer(pc)
for i in range(200):
loss = do_one_sentence(rnn, params, sentence)
loss_value = loss.value()
loss.backward()
trainer.update()
if i % 5 == 0:
print("%.10f" % loss_value, end="\t")
print(generate(rnn, params))
Explanation: Character-level LSTM
Now that we know the basics of RNNs, let's build a character-level LSTM language-model.
We have a sequence LSTM that, at each step, gets as input a character, and needs to predict the next character.
End of explanation
sentence = "a quick brown fox jumped over the lazy dog"
train(srnn, params_srnn, sentence)
sentence = "a quick brown fox jumped over the lazy dog"
train(lstm, params_lstm, sentence)
Explanation: Notice that:
1. We pass the same rnn-builder to do_one_sentence over and over again.
We must re-use the same rnn-builder, as this is where the shared parameters are kept.
2. We dy.renew_cg() before each sentence -- because we want to have a new graph (new network) for this sentence.
The parameters will be shared through the model and the shared rnn-builder.
End of explanation
train(srnn, params_srnn, "these pretzels are making me thirsty")
Explanation: The model seems to learn the sentence quite well.
Somewhat surprisingly, the Simple-RNN model learns more quickly than the LSTM! (If you increase the number of layers, this difference will be even more pronounced)
How can that be?
The answer is that we are cheating a bit. The sentence we are trying to learn
has each letter-bigram exactly once. This means a simple trigram model can memorize
it very well.
Try it out with more complex sequences.
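A quick way to check the claim (a small sketch that just counts character bigrams):
from collections import Counter
for s in ["a quick brown fox jumped over the lazy dog",
          "these pretzels are making me thirsty"]:
    counts = Counter(zip(s, s[1:]))
    # 1 for the first sentence (every bigram is unique), greater than 1 for the second
    print(s, "-> most frequent bigram count:", max(counts.values()))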
End of explanation |
9,028 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hands-on!
In this practical session we suggest a few small examples for you to implement on Spark.
Improving the Word Count: Memórias Póstumas de Brás Cubas
Memórias Póstumas de Brás Cubas is a novel by Machado de Assis, first developed as a serial from March to December 1880 in the Revista Brasileira and published as a book the following year by the Tipografia Nacional.
The work portrays slavery, social classes, and the scientism and positivism of its time. Given that, can we identify these traits from the most frequently used words in the novel?
Using the dataset Machado-de-Assis-Memorias-Postumas.txt, build a pipeline with the Spark Estimators and Transformers needed to answer the questions below. Do not forget to use stopwords.pt to remove the stop words!
Which are the 10 most frequent words?
Which are the most frequent 2-grams and 3-grams?
Step1: 10 Most Frequent Words
Step2: n-Grams
Step3: TF-IDF with CountVectorizer
No exemplo TFIDF, atualize a cell 15 para utilizar o Transformer CountVectorizer. | Python Code:
# Bibliotecas
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, StopWordsRemover, CountVectorizer, NGram
livro = sc.textFile("Machado-de-Assis-Memorias-Postumas.txt")
text = ""
for line in livro.collect():
text += " " + line
data = spark.createDataFrame([(0, text)], ["id", "text"])
Explanation: Hands-on!
In this practical session we suggest a few small examples for you to implement on Spark.
Improving the Word Count: Memórias Póstumas de Brás Cubas
Memórias Póstumas de Brás Cubas is a novel by Machado de Assis, first developed as a serial from March to December 1880 in the Revista Brasileira and published as a book the following year by the Tipografia Nacional.
The work portrays slavery, social classes, and the scientism and positivism of its time. Given that, can we identify these traits from the most frequently used words in the novel?
Using the dataset Machado-de-Assis-Memorias-Postumas.txt, build a pipeline with the Spark Estimators and Transformers needed to answer the questions below. Do not forget to use stopwords.pt to remove the stop words!
Which are the 10 most frequent words?
Which are the most frequent 2-grams and 3-grams?
End of explanation
tokenizer = Tokenizer(inputCol="text", outputCol="words")
remover = StopWordsRemover(inputCol="words", outputCol="filtered")
count = CountVectorizer(inputCol="filtered", outputCol="features", vocabSize=10)
pipeline = Pipeline(stages=[tokenizer, remover, count])
model = pipeline.fit(data).transform(data)
model.select("features").show(truncate=False)
Explanation: 10 Most Frequent Words
End of explanation
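# Optional follow-up (a sketch, not part of the original exercise): to see the actual
# words rather than just the count vector, keep the fitted PipelineModel and read the
# CountVectorizerModel vocabulary, which is ordered by decreasing corpus frequency.
fitted = pipeline.fit(data)
print(fitted.stages[-1].vocabulary)  # the 10 most frequent (non stop-word) terms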
tokenizer = Tokenizer(inputCol="text", outputCol="words")
remover = StopWordsRemover(inputCol="words", outputCol="filtered")
ngram = NGram(inputCol="filtered", outputCol="ngrams", n=2)
count = CountVectorizer(inputCol="ngrams", outputCol="features", vocabSize=10)
pipeline = Pipeline(stages=[tokenizer, remover, ngram, count])
model = pipeline.fit(data).transform(data)
model.select("features").show(truncate=False)
ngram = NGram(inputCol="filtered", outputCol="ngrams", n=3)
pipeline = Pipeline(stages=[tokenizer, remover, ngram, count])
model = pipeline.fit(data).transform(data)
model.select("features").show(truncate=False)
Explanation: n-Grams
End of explanation
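# NOTE: the cell below assumes `featurizedData` (the DataFrame with a `rawFeatures`
# column produced earlier in the TF-IDF example being updated) is already defined.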
countVect = CountVectorizer(inputCol="rawFeatures", outputCol="features")
countVectModel = countVect.fit(featurizedData)
rescaledData = countVectModel.transform(featurizedData)
Explanation: TF-IDF with CountVectorizer
In the TFIDF example, update cell 15 to use the CountVectorizer Transformer.
End of explanation |
9,029 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Do not run "execute all"
Step1: Exercise
Step2: TensorFlow (II)
The same thing, but with an interactive session, which lets us be a bit looser about the order of operations.
Step3: TensorFlow (III)
Let's build a more substantial model
Step4: First layer
It is a convolution with stride 1 and zero padding, so the output has the same size as the input. The convolution is followed by max pooling.
The convolution computes 32 features for each 5$\times$ 5 patch. The weight tensor has shape $5\times 5\times 1\times 32$: two dimensions for the patch size, one for the number of input channels, and one for the number of output channels.
To apply it, we reshape our $28\times 28$ image into a 4-D tensor. The fourth dimension is the number of colour channels in the image.
Finally, we convolve the image with the weights, add the bias, apply the ReLU and then the max pool.
Step5: Second layer
Step6: Third layer
Densely connected layer. The image has been reduced to $7\times 7$. We add a fully connected layer with 1024 neurons to process the whole image.
We reshape the tensor from the previous pooling layer into a batch of vectors, multiply by the weight matrix, add a bias and apply a ReLU.
Step7: Dropout
To reduce overfitting, we add dropout before the final readout. The placeholder holds the probability that a neuron is kept during dropout, so dropout can be enabled for training and disabled for evaluation.
Step8: Readout
Step9: Training and evaluation
Ici c'est presque la même chose qu'avec softmax. Les différences principales | Python Code:
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Explanation: Do not run an "execute all": the last cell is very heavy.
Introduction to TensorFlow
This code is based on tutorials at tensorflow.org.
We are going to use softmax regression on MNIST.
The first thing to do is to fetch the data.
End of explanation
import tensorflow as tf
# Here None means "of any length".
x = tf.placeholder(tf.float32, [None, 784])
# We are going to learn W and b, so we do not worry
# about their values for now.
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
# Our model.
y = tf.nn.softmax(tf.matmul(x, W) + b)
# Cross-entropy, cf. information theory.
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y),
reduction_indices=[1]))
# Actually, this is numerically unstable; in real code consider
# using tf.nn.(sparse_)softmax_cross_entropy_with_logits() instead.
# Optimisation, pick your favourite.
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
# Initialize all the variables.
init = tf.initialize_all_variables()
# Up to here we have not computed anything: these are only declarations.
# Now we trigger the computation.
sess = tf.Session()
sess.run(init)
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
# tf.argmax gives us the index of the largest value along one axis
# of a tensor. So tf.argmax(y, 1) is the label our model finds most
# probable for each input, and
# tf.argmax(y_, 1) is the correct label.
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images,
y_: mnist.test.labels}))
Explanation: Exercise: explore the data (mnist). What do you find and understand?
Softmax regression
Softmax is simply a generalization of the logit (sigmoid) function that guarantees a vector ends up between 0 and 1. Put differently, the evidence that an input $x$ belongs to class $i$ is
$$\mbox{evidence}_i = \sum_j W_{i,j} x_j + b_i$$
with
$$\mbox{softmax}(x) = \mbox{normalize}(e^x) \iff \mbox{softmax}(x)_i = \frac{e^{x_i}}{\sum_j e^{x_j}}$$
We often simply write
$$y = \mbox{softmax}(Wx+b)$$
Cross-entropy
$$H_{y'}(y) = -\sum_i y_i' \log(y_i)$$
TensorFlow (I)
End of explanation
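# A small numeric illustration of the softmax and cross-entropy formulas above
# (a standalone sketch, independent of the TensorFlow graph):
import numpy as np
logits = np.array([2.0, 1.0, 0.1])              # evidence for three classes
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: positive values summing to 1
y_true = np.array([1.0, 0.0, 0.0])              # one-hot label
print(probs, probs.sum(), -np.sum(y_true * np.log(probs)))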
import tensorflow as tf
sess2 = tf.InteractiveSession()
x2 = tf.placeholder(tf.float32, shape=[None, 784])
y_2 = tf.placeholder(tf.float32, shape=[None, 10])
W2 = tf.Variable(tf.zeros([784,10]))
b2 = tf.Variable(tf.zeros([10]))
sess2.run(tf.initialize_all_variables())
y2 = tf.matmul(x2, W2) + b2
cross_entropy2 = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(y2, y_2))
train_step2 = tf.train.GradientDescentOptimizer(0.5).minimize(
cross_entropy2)
for i in range(1000):
batch = mnist.train.next_batch(100)
train_step2.run(feed_dict={x2: batch[0], y_2: batch[1]})
correct_prediction2 = tf.equal(tf.argmax(y2, 1), tf.argmax(y_2, 1))
accuracy2 = tf.reduce_mean(tf.cast(correct_prediction2, tf.float32))
print(accuracy2.eval(feed_dict={x2: mnist.test.images,
y_2: mnist.test.labels}))
Explanation: TensorFlow (II)
The same thing, but with an interactive session, which lets us be a bit looser about the order of operations.
End of explanation
def weight_variable(shape):
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x, W):
return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1], padding='SAME')
x3 = tf.placeholder(tf.float32, shape=[None, 784])
y_3 = tf.placeholder(tf.float32, shape=[None, 10])
Explanation: TensorFlow (III)
Let's build a more substantial model: a multilayer convolutional network.
We have seen rectified linear neurons, $f(x) = max(0, x)$. Such a neuron is called a ReLU.
It can be made differentiable like this: $ f(x) = \ln\left(1+e^x\right)$.
Remarks:
* We give them a slightly positive bias to avoid "dead" neurons.
* We add a bit of noise for symmetry breaking.
End of explanation
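# To see in what sense ln(1 + e^x) (softplus) is a smooth stand-in for the ReLU,
# a quick sketch with plain numpy:
import numpy as np
xs = np.array([-4.0, -1.0, 0.0, 1.0, 4.0])
print(np.maximum(0.0, xs))    # ReLU
print(np.log1p(np.exp(xs)))   # softplus: close to ReLU away from 0, smooth at 0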
W_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x3, [-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
Explanation: First layer
It is a convolution with stride 1 and zero padding, so the output has the same size as the input. The convolution is followed by max pooling.
The convolution computes 32 features for each 5$\times$ 5 patch. The weight tensor has shape $5\times 5\times 1\times 32$: two dimensions for the patch size, one for the number of input channels, and one for the number of output channels.
To apply it, we reshape our $28\times 28$ image into a 4-D tensor. The fourth dimension is the number of colour channels in the image.
Finally, we convolve the image with the weights, add the bias, apply the ReLU and then the max pool.
End of explanation
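# Shape check for the layer above (static shapes; the batch dimension stays unknown):
# 'SAME' padding with stride 1 keeps 28x28 and gives 32 feature maps, then the
# 2x2 max pooling halves the spatial size to 14x14.
print(x_image.get_shape())   # (?, 28, 28, 1)
print(h_conv1.get_shape())   # (?, 28, 28, 32)
print(h_pool1.get_shape())   # (?, 14, 14, 32)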
W_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
Explanation: Second layer
End of explanation
W_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
Explanation: Third layer
Densely connected layer. The image has been reduced to $7\times 7$. We add a fully connected layer with 1024 neurons to process the whole image.
We reshape the tensor from the previous pooling layer into a batch of vectors, multiply by the weight matrix, add a bias and apply a ReLU.
End of explanation
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
Explanation: Dropout
To reduce overfitting, we add dropout before the final readout. The placeholder holds the probability that a neuron is kept during dropout, so we can enable dropout during training and disable it at evaluation time.
End of explanation
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
Explanation: Readout
End of explanation
sess3 = tf.Session()
# num_iterations = 20 * 1000
num_iterations = 800
cross_entropy3 = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
y_conv, y_3))
train_step3 = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy3)
correct_prediction3 = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_3,1))
accuracy3 = tf.reduce_mean(tf.cast(correct_prediction3, tf.float32))
sess3.run(tf.initialize_all_variables())
for i in range(num_iterations):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy3.eval(feed_dict={
x3: batch[0], y_3: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step3.run(feed_dict={x3: batch[0], y_3: batch[1], keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={
x3: mnist.test.images,
y_3: mnist.test.labels,
keep_prob: 1.0}))
Explanation: Training and evaluation
This is almost the same as with softmax. The main differences:
* We replace gradient descent with ADAM.
* We add keep_prob to the feed_dict.
* We log a progress message every 100 iterations.
Warning: with num_iterations = 20 * 1000 the code below runs 20K iterations and can take a while (roughly 30 minutes); as written it uses 800 iterations to keep the runtime short.
End of explanation |
9,030 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 4
Imports
Step2: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$
Step5: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function
Step6: Use interact to explore the plot_random_line function using | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 4
Imports
End of explanation
def random_line(m, b, sigma, size=10):
Create a line y = m*x + b + N(0,sigma**2) between x=[-1.0,1.0]
Parameters
----------
m : float
The slope of the line.
b : float
The y-intercept of the line.
sigma : float
The standard deviation of the y direction normal distribution noise.
size : int
The number of points to create for the line.
Returns
-------
x : array of floats
The array of x values for the line with `size` points.
y : array of floats
The array of y values for the lines with `size` points.
# YOUR CODE HERE
#raise NotImplementedError()
x = np.linspace(-1.0,1.0,size)
if sigma==0:
y=m*x+b
else:
# np.random.normal(loc, scale, size): scale is the standard deviation, so pass sigma (not sigma**2)
y = m*x + b + np.random.normal(0.0, sigma, size)
return x,y
m = 0.0; b = 1.0; sigma=0.0; size=3
x, y = random_line(m, b, sigma, size)
assert len(x)==len(y)==size
assert list(x)==[-1.0,0.0,1.0]
assert list(y)==[1.0,1.0,1.0]
sigma = 1.0
m = 0.0; b = 0.0
size = 500
x, y = random_line(m, b, sigma, size)
assert np.allclose(np.mean(y-m*x-b), 0.0, rtol=0.1, atol=0.1)
assert np.allclose(np.std(y-m*x-b), sigma, rtol=0.1, atol=0.1)
Explanation: Line with Gaussian noise
Write a function named random_line that creates x and y data for a line with y direction random noise that has a normal distribution $N(0,\sigma^2)$:
$$
y = m x + b + N(0,\sigma^2)
$$
Be careful about the sigma=0.0 case.
End of explanation
def ticks_out(ax):
Move the ticks to the outside of the box.
ax.get_xaxis().set_tick_params(direction='out', width=1, which='both')
ax.get_yaxis().set_tick_params(direction='out', width=1, which='both')
def plot_random_line(m, b, sigma, size=10, color='red'):
Plot a random line with slope m, intercept b and size points.
# YOUR CODE HERE
#raise NotImplementedError()
x,y=random_line(m, b, sigma, size)
plt.scatter(x,y,color=color)
plt.xlim(-1.1,1.1)
plt.ylim(-10.0,10.0)
plt.box(False)
plt.xlabel('x')
plt.ylabel('y(x)')
plt.title('Random Line')
plt.tick_params(axis='y', right='off', direction='out')
plt.tick_params(axis='x', top='off', direction='out')
plt.grid(True)
plot_random_line(5.0, -1.0, 2.0, 50)
assert True # use this cell to grade the plot_random_line function
Explanation: Write a function named plot_random_line that takes the same arguments as random_line and creates a random line using random_line and then plots the x and y points using Matplotlib's scatter function:
Make the marker color settable through a color keyword argument with a default of red.
Display the range $x=[-1.1,1.1]$ and $y=[-10.0,10.0]$.
Customize your plot to make it effective and beautiful.
End of explanation
# YOUR CODE HERE
#raise NotImplementedError()
interact(plot_random_line, m=(-10.0,10.0), b=(-5.0,5.0),sigma=(0.0,5.0,0.01),size=(10,100,10), color={'red':'r','blue':'b','green':'g'})
assert True # use this cell to grade the plot_random_line interact
Explanation: Use interact to explore the plot_random_line function using:
m: a float valued slider from -10.0 to 10.0 with steps of 0.1.
b: a float valued slider from -5.0 to 5.0 with steps of 0.1.
sigma: a float valued slider from 0.0 to 5.0 with steps of 0.01.
size: an int valued slider from 10 to 100 with steps of 10.
color: a dropdown with options for red, green and blue.
End of explanation |
9,031 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualizing optimization results
Tim Head, August 2016.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule
Step1: Toy models
We will use two different toy models to demonstrate how
Step2: Starting with branin
To start let's take advantage of the fact that
Step3: Evaluating the objective function
Next we use an extra trees based minimizer to find one of the minima of the
Step4:
Step5: The two dimensional partial dependence plot can look like the true
objective but it does not have to. As points at which the objective function
is being evaluated are concentrated around the suspected minimum the
surrogate model sometimes is not a good representation of the objective far
away from the minima.
Random sampling
Compare this to a minimizer which picks points at random. There is no
structure visible in the order in which it evaluates the objective. Because
there is no model involved in the process of picking sample points at
random, we can not plot the partial dependence of the model.
Step6: Working in six dimensions
Visualising what happens in two dimensions is easy, where
Step7: Going from 6 to 6+2 dimensions
To make things more interesting let's add two dimensions to the problem.
As | Python Code:
print(__doc__)
import numpy as np
np.random.seed(123)
import matplotlib.pyplot as plt
Explanation: Visualizing optimization results
Tim Head, August 2016.
Reformatted by Holger Nahrstaedt 2020
.. currentmodule:: skopt
Bayesian optimization or sequential model-based optimization uses a surrogate
model to model the expensive to evaluate objective function func. It is
this model that is used to determine at which points to evaluate the expensive
objective next.
To help understand why the optimization process is proceeding the way it is,
it is useful to plot the location and order of the points at which the
objective is evaluated. If everything is working as expected, early samples
will be spread over the whole parameter space and later samples should
cluster around the minimum.
The :class:plots.plot_evaluations function helps with visualizing the location and
order in which samples are evaluated for objectives with an arbitrary
number of dimensions.
The :class:plots.plot_objective function plots the partial dependence of the objective,
as represented by the surrogate model, for each dimension and as pairs of the
input dimensions.
All of the minimizers implemented in skopt return an OptimizeResult
instance that can be inspected. Both :class:plots.plot_evaluations and :class:plots.plot_objective
are helpers that do just that
End of explanation
from skopt.benchmarks import branin as branin
from skopt.benchmarks import hart6 as hart6_
# redefined `hart6` to allow adding arbitrary "noise" dimensions
def hart6(x):
return hart6_(x[:6])
Explanation: Toy models
We will use two different toy models to demonstrate how :class:plots.plot_evaluations
works.
The first model is the :class:benchmarks.branin function which has two dimensions and three
minima.
The second model is the hart6 function which has six dimension which makes
it hard to visualize. This will show off the utility of
:class:plots.plot_evaluations.
End of explanation
from matplotlib.colors import LogNorm
def plot_branin():
fig, ax = plt.subplots()
x1_values = np.linspace(-5, 10, 100)
x2_values = np.linspace(0, 15, 100)
x_ax, y_ax = np.meshgrid(x1_values, x2_values)
vals = np.c_[x_ax.ravel(), y_ax.ravel()]
fx = np.reshape([branin(val) for val in vals], (100, 100))
cm = ax.pcolormesh(x_ax, y_ax, fx,
norm=LogNorm(vmin=fx.min(),
vmax=fx.max()))
minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]])
ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14,
lw=0, label="Minima")
cb = fig.colorbar(cm)
cb.set_label("f(x)")
ax.legend(loc="best", numpoints=1)
ax.set_xlabel("$X_0$")
ax.set_xlim([-5, 10])
ax.set_ylabel("$X_1$")
ax.set_ylim([0, 15])
plot_branin()
Explanation: Starting with branin
To start let's take advantage of the fact that :class:benchmarks.branin is a simple
function which can be visualised in two dimensions.
End of explanation
from functools import partial
from skopt.plots import plot_evaluations
from skopt import gp_minimize, forest_minimize, dummy_minimize
bounds = [(-5.0, 10.0), (0.0, 15.0)]
n_calls = 160
forest_res = forest_minimize(branin, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res, bins=10)
Explanation: Evaluating the objective function
Next we use an extra trees based minimizer to find one of the minima of the
:class:benchmarks.branin function. Then we visualize at which points the objective is being
evaluated using :class:plots.plot_evaluations.
End of explanation
from skopt.plots import plot_objective
_ = plot_objective(forest_res)
Explanation: :class:plots.plot_evaluations creates a grid of size n_dims by n_dims.
The diagonal shows histograms for each of the dimensions. In the lower
triangle (just one plot in this case) a two dimensional scatter plot of all
points is shown. The order in which points were evaluated is encoded in the
color of each point. Darker/purple colors correspond to earlier samples and
lighter/yellow colors correspond to later samples. A red point shows the
location of the minimum found by the optimization process.
You should be able to see that points start clustering around the location
of the true minimum. The histograms show that the objective is evaluated
more often at locations near to one of the three minima.
Using :class:plots.plot_objective we can visualise the one dimensional partial
dependence of the surrogate model for each dimension. The contour plot in
the bottom left corner shows the two dimensional partial dependence. In this
case this is the same as simply plotting the objective as it only has two
dimensions.
Partial dependence plots
Partial dependence plots were proposed by
[Friedman (2001)]
as a method for interpreting the importance of input features used in
gradient boosting machines. Given a function of $k$ variables
$y=f\left(x_1, x_2, ..., x_k\right)$, the
partial dependence of $f$ on the $i$-th variable $x_i$ is calculated as
$\phi\left( x_i \right) = \frac{1}{N} \sum^N_{j=0} f\left(x_{1,j}, x_{2,j}, ..., x_i, ..., x_{k,j}\right)$,
with the sum running over a set of $N$ points drawn at random from the
search space.
The idea is to visualize how the value of $x_i$ influences the function
$f$ after averaging out the influence of all other variables.
End of explanation
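# A tiny numeric sketch of the partial dependence definition above (not part of the
# original example): estimate phi(x0) for a toy two-variable function by averaging
# the function over random draws of the other variable.
import numpy as np
def toy(x0, x1):
    return (x0 - 1.0) ** 2 + 0.5 * x1
rng = np.random.RandomState(0)
x1_samples = rng.uniform(-1.0, 1.0, size=1000)
for x0 in (0.0, 1.0, 2.0):
    print(x0, np.mean([toy(x0, x1) for x1 in x1_samples]))  # x1 averages out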
dummy_res = dummy_minimize(branin, bounds, n_calls=n_calls, random_state=4)
_ = plot_evaluations(dummy_res, bins=10)
Explanation: The two dimensional partial dependence plot can look like the true
objective but it does not have to. As points at which the objective function
is being evaluated are concentrated around the suspected minimum the
surrogate model sometimes is not a good representation of the objective far
away from the minima.
Random sampling
Compare this to a minimizer which picks points at random. There is no
structure visible in the order in which it evaluates the objective. Because
there is no model involved in the process of picking sample points at
random, we can not plot the partial dependence of the model.
End of explanation
bounds = [(0., 1.),] * 6
forest_res = forest_minimize(hart6, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res)
_ = plot_objective(forest_res, n_samples=40)
Explanation: Working in six dimensions
Visualising what happens in two dimensions is easy; where
:class:plots.plot_evaluations and :class:plots.plot_objective start to be useful is when the
number of dimensions grows. They take care of many of the more mundane
things needed to make good plots of all combinations of the dimensions.
The next example uses :class:benchmarks.hart6 which has six dimensions and shows both
:class:plots.plot_evaluations and :class:plots.plot_objective.
End of explanation
bounds = [(0., 1.),] * 8
n_calls = 200
forest_res = forest_minimize(hart6, bounds, n_calls=n_calls,
base_estimator="ET", random_state=4)
_ = plot_evaluations(forest_res)
_ = plot_objective(forest_res, n_samples=40)
# .. [Friedman (2001)] `doi:10.1214/aos/1013203451 section 8.2 <http://projecteuclid.org/euclid.aos/1013203451>`
Explanation: Going from 6 to 6+2 dimensions
To make things more interesting let's add two dimensions to the problem.
As :class:benchmarks.hart6 only depends on six dimensions we know that for this problem
the new dimensions will be "flat" or uninformative. This is clearly visible
in both the placement of samples and the partial dependence plots.
End of explanation |
9,032 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-esm2-sr5', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-ESM2-SR5
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
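For example (purely illustrative of the call pattern, not a value taken from the model documentation), a model using a fixed freezing point of about -1.8 degC, typical of surface seawater near 35 psu, would fill the cell above with:
DOC.set_value(-1.8)  # hypothetical example value; replace with the value actually used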
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involve flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers, specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories, specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories, specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution (ITD), i.e. with no explicit ITD, but assume a distribution and compute the fluxes accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply the fixed salinity value for each sea ice layer.
End of explanation
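Numeric properties are completed the same way, passing the number unquoted; the value below is a hypothetical placeholder, not a recommended salinity.
# Hypothetical example of completing a FLOAT property:
# DOC.set_value(5.0)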
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value, specify this value in PSU.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
9,033 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This is the demonstration how to use NEDO data process utility
Step1: Load the data into a NEDOLocation object
Step2: main_df adds the column names into the raw data and convert it to a pandas.DataFrame object
Convert the NEDO data into a more useful form
Add column names into the raw data does not make the dataframe easier to use. nedo_data_reader can "unstack" the data, i.e., making each row an entry of particular time.
Step3: Calculate the overall insolation
Step4: Extrat DNI
Daily METPV-11 data records the solar insolation on a horizontal plane. We can use get_DNI() to recalculate the DNI data.
Step5: Calculate DNI on a tilted surface
Step6: Visualize the sun irradiances in angular plot
Step7: Analyze angle of incidence | Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
from pypvcell.solarcell import SQCell,MJCell,TransparentCell
from pypvcell.illumination import Illumination
from pypvcell.spectrum import Spectrum
from pypvcell.metpv_reader import NEDOLocation
from pvlib.location import Location
from pvlib.tracking import SingleAxisTracker
from pvlib.irradiance import total_irrad,aoi_projection
nedo_solar_file='hm51106year.csv'
Explanation: This notebook demonstrates how to use the NEDO data processing utility
End of explanation
ngo_loc=NEDOLocation(nedo_solar_file)
df=ngo_loc.main_df
ngo_loc.main_df.head()
Explanation: Load the data into a NEDOLocation object
End of explanation
%%time
ngo_df=ngo_loc.extract_unstack_hour_data(norm=False)
ngo_df.head()
ngo_df.to_csv("ngo_df.csv")
Explanation: main_df adds the column names to the raw data and converts it to a pandas.DataFrame object
Convert the NEDO data into a more useful form
Adding column names to the raw data does not by itself make the dataframe easier to use. nedo_data_reader can "unstack" the data, i.e., make each row an entry for a particular time.
End of explanation
ngo_df[['GHI','DHI','dHI']].sum()
Explanation: Calculate the overall insolation
End of explanation
ngo_dni=ngo_loc.get_DNI()
ngo_dni.head()
plt.plot(ngo_dni)
plt.ylim([0,1000])
Explanation: Extract DNI
Daily METPV-11 data records the solar insolation on a horizontal plane. We can use get_DNI() to recalculate the DNI data.
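If you want to sanity-check the extraction by hand, the usual geometric relation is DNI = (GHI - DHI) / cos(solar zenith). The sketch below only assumes the GHI/DHI columns shown earlier plus a zenith angle in degrees; it is not necessarily the exact formula implemented inside get_DNI().
# Rough cross-check of the DNI extraction (assumed relation, not the library internals)
import numpy as np
def dni_from_horizontal(ghi, dhi, zenith_deg):
    cos_z = np.cos(np.radians(zenith_deg))
    cos_z = np.where(cos_z < 0.05, np.nan, cos_z)  # skip near-horizon times
    return (ghi - dhi) / cos_z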
End of explanation
ngo_tilt_irr=ngo_loc.tilt_irr(include_solar_pos=True)
ngo_tilt_irr.head()
ngo_tilt_irr.columns
plt.plot(ngo_tilt_irr['poa_direct'],alpha=0.5,label='incidence on tilt surface')
plt.plot(ngo_dni,alpha=0.5,label='DNI')
plt.ylim([0,1000])
plt.legend()
Explanation: Calculate DNI on a tilted surface
End of explanation
from matplotlib.colors import LogNorm
filtered_df=ngo_tilt_irr.loc[(ngo_tilt_irr['poa_direct']>1) & (ngo_tilt_irr['poa_direct']<500),
["azimuth","zenith",'poa_direct']]
ax = plt.subplot(111, projection='polar')
ax.plot(filtered_df['azimuth'].values*np.pi/180-np.pi/2,
filtered_df['zenith'].values-ngo_loc.latitude,'.')
plt.show()
import matplotlib as mpl
filtered_df=ngo_tilt_irr.loc[(ngo_tilt_irr['poa_direct']>1) & (ngo_tilt_irr['poa_direct']<500),
["azimuth","zenith",'poa_direct']]
ax = plt.subplot(111, projection='polar')
colormap = plt.get_cmap('hsv')
norm = mpl.colors.Normalize(1, 400)
cax=ax.scatter(filtered_df['azimuth'].values*np.pi/180-np.pi/2, filtered_df['zenith'].values-ngo_loc.latitude,
c=filtered_df['poa_direct'].values,s=200,norm=norm,alpha=0.5)
plt.colorbar(cax)
plt.savefig("nagoya_angular.png",dpi=600)
plt.show()
Explanation: Visualize the sun irradiances in angular plot
End of explanation
ngo_tilt_irr.columns
plt.hist(ngo_tilt_irr['aoi'],weights=ngo_tilt_irr['poa_direct'],bins=100)
plt.show()
Explanation: Analyze angle of incidence
End of explanation |
9,034 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
문자열을 인쇄하는 다양한 방법 활용
파이썬 2.x에서는 print 함수의 경우 인자들이 굳이 괄호 안에 들어 있어야 할 필요는 없다. 또한 여러 개의 값을 동시에 인쇄할 수도 있다. 이때 인자들은 콤마로 구분지어진다.
Step1: 주의
Step2: 하지만 위와 같이 괄호를 사용하지 않는 방식은 파이썬 3.x에서는 지원되지 않는다.
따라서 기본적으로 아래와 같이 사용하는 것을 추천한다.
Step3: 아래와 같이 할 수도 있다
Step4: 그런데 위 경우 a와 b를 인쇄할 때 스페이스가 자동으로 추가된다.
그런데 스페이스 없이 stringstring1으로 출력하려면 두 가지 방식이 있다.
문자열 덧셈 연산자를 활용한다.
서식이 있는 인쇄방식(formatted printing)을 사용한다.
Step5: 서식이 있는 인쇄방식을 이용하여 다양한 형태로 자료들을 인쇄할 수 있다. 서식을 사용하는 방식은 크게 두 가지가 있다.
% 를 이용하는 방식
C, Java 등에서 사용하는 방식과 거의 비슷하다.
format 키워드를 사용하는 방식
파이써 만의 새로운 방식이며 % 보다 좀 더 다양한 기능을 지원한다.
Java의 경우 MessageFormat 클래스의 format 메소드가 비슷한 기능을
지원하지만 사용법이 좀 더 복잡하다.
서식지정자(%)를 활용하여 문자열 인쇄하기
%s
Step6: 부동소수점 실수의 경우 여러 서식을 이용할 수 있다.
%f
Step7: 소수점 이하 숫자의 개수를 임의로 정할 수 있다.
Step8: pi값 계산을 소숫점 이하 50자리 정도까지 계산함을 알 수 있다. 이것은 사용하는 컴퓨터의 한계이며 컴퓨터의 성능에 따라 계산능력이 달라진다.
Step9: 여러 값을 보여주면 아래 예제처럼 기본은 왼쪽에 줄을 맞춘다.
Step10: 오른쪽으로 줄을 맞추려면 아래 방식을 사용한다.
숫자 12는 pi**10의 값인 93648.047476가 점(.)을 포함하여 총 12자리로 가장 길기에 선택되었다.
표현방식이 달라지면 다른 값을 선택해야 한다.
Step11: 비어 있는 자리를 숫자 0으로 채울 수도 있다.
Step12: 자릿수는 계산결과를 예상하여 결정해야 한다.
아래의 경우는 자릿수를 너무 작게 무시한다.
Step13: format 함수 사용하여 문자열 인쇄하기
format 함수를 이용하여 서식지정자(%)를 사용한 결과를 동일하게 구할 수 있다.
Step14: format 함수는 인덱싱 기능까지 지원한다.
Step15: 인덱싱을 위해 키워드를 사용할 수도 있다.
Step16: 인덱싱과 서식지정자를 함께 사용할 수 있다
Step17: % 연산자를 사용하는 방식과 format 함수를 사용하는 방식 중에 어떤 방식을 선택할지는 경우에 따라 다르다.
% 방식은 C, Java 등에서 일반적으로 지원되는 방식이다.
반면에 format 방식은 좀 더 다양한 활용법을 갖고 있다.
고급 기술
str 함수와 repr 함수에 대해 알아둘 필요가 있다.
str 함수
파이썬에서 사용하는 모든 객체는 __str__ 메소드를 갖고 있으며, 이 메소드는 해당 객체를 보여주는 데에 사용된다. 예를 들어 print 명령어를 사용하면 무조건 해당 객체의 __str__ 메소드가 호출되어 사용된다.
또한 str 함수가 있어서 동일한 역할을 수행한다.
Step18: str 함수는 문자열값을 리턴한다.
Step19: 앞서 언급한 대로 __str__ 메소드는 print 함수뿐만 아니라 서식을 사용할 때도 기본적으로 사용된다.
Step20: repr 함수
__str__ 와 더불어 __repr__ 또한 모든 파이썬 객체의 메소드로 존재한다.
__str__와 비슷한 기능을 수행하지만 __str__ 보다 정확한 값을 사용한다.
eval 함수와 사용하여 본래의 값을 되살리는 기능을 제공한다.
repr 함수를 사용하면 해당 객체의 __repr__ 메소드가 호출된다.
Step21: 하지만 pi1을 이용하여 원주율 pi 값을 되살릴 수는 없다.
Step22: repr 함수는 eval 함수를 이용하여 pi 값을 되살릴 수 있다.
Step23: repr 함수를 활용하는 서식지정자는 %r 이다. | Python Code:
a = "string"
b = "string1"
print a, b
print "The return value is", a
Explanation: Various ways to print strings
In Python 2.x the arguments of print do not have to be enclosed in parentheses. Several values can also be printed at once; in that case the arguments are separated by commas.
End of explanation
print(a, b)
print("The return value is", a)
Explanation: 주의: 아래와 같이 하면 모양이 기대와 다르게 나온다.
End of explanation
print(a+' '+b)
print("The return value is" + " " + a)
Explanation: 하지만 위와 같이 괄호를 사용하지 않는 방식은 파이썬 3.x에서는 지원되지 않는다.
따라서 기본적으로 아래와 같이 사용하는 것을 추천한다.
End of explanation
print(a),; print(b)
Explanation: 아래와 같이 할 수도 있다
End of explanation
print(a+b)
print("{}{}".format(a,b))
Explanation: 그런데 위 경우 a와 b를 인쇄할 때 스페이스가 자동으로 추가된다.
그런데 스페이스 없이 stringstring1으로 출력하려면 두 가지 방식이 있다.
문자열 덧셈 연산자를 활용한다.
서식이 있는 인쇄방식(formatted printing)을 사용한다.
End of explanation
print("%s%s%d" % (a, b, 10))
from math import pi
Explanation: 서식이 있는 인쇄방식을 이용하여 다양한 형태로 자료들을 인쇄할 수 있다. 서식을 사용하는 방식은 크게 두 가지가 있다.
% 를 이용하는 방식
C, Java 등에서 사용하는 방식과 거의 비슷하다.
format 키워드를 사용하는 방식
파이써 만의 새로운 방식이며 % 보다 좀 더 다양한 기능을 지원한다.
Java의 경우 MessageFormat 클래스의 format 메소드가 비슷한 기능을
지원하지만 사용법이 좀 더 복잡하다.
서식지정자(%)를 활용하여 문자열 인쇄하기
%s: 문자열 서식지정자
%d: int형 서식지정자
End of explanation
print("원주율값은 대략 %f이다." % pi)
print("원주율값은 대략 %e이다." % pi)
print("원주율값은 대략 %g이다." % pi)
Explanation: 부동소수점 실수의 경우 여러 서식을 이용할 수 있다.
%f: 소숫점 이하 6째 자리까지 보여준다. (7째 자리에서 반올림 한다.)
%e: 지수 형태로 보여준다.
%g: %f 또는 %e 방식 중에서 보다 간단한 서식으로 보여준다.
End of explanation
print("원주율값은 대략 %.10f이다." % pi)
print("원주율값은 대략 %.10e이다." % pi)
print("원주율값은 대략 %.10g이다." % pi)
Explanation: 소수점 이하 숫자의 개수를 임의로 정할 수 있다.
End of explanation
print("지금 사용하는 컴퓨터가 계산할 수 있는 원주율값은 대략 '%.50f'이다." % pi)
Explanation: pi값 계산을 소숫점 이하 50자리 정도까지 계산함을 알 수 있다. 이것은 사용하는 컴퓨터의 한계이며 컴퓨터의 성능에 따라 계산능력이 달라진다.
End of explanation
print("%f"% pi)
print("%f"% pi**3)
print("%f"% pi**10)
Explanation: 여러 값을 보여주면 아래 예제처럼 기본은 왼쪽에 줄을 맞춘다.
End of explanation
print("%12f"% pi)
print("%12f"% pi**3)
print("%12f"% pi**10)
print("%12e"% pi)
print("%12e"% pi**3)
print("%12e"% pi**10)
print("%12g"% pi)
print("%12g"% pi**3)
print("%12g"% pi**10)
print("%16.10f"% pi)
print("%16.10f"% pi**3)
print("%16.10f"% pi**10)
print("%16.10e"% pi)
print("%16.10e"% pi**3)
print("%16.10e"% pi**10)
print("%16.10g"% pi)
print("%16.10g"% pi**3)
print("%16.10g"% pi**10)
Explanation: 오른쪽으로 줄을 맞추려면 아래 방식을 사용한다.
숫자 12는 pi**10의 값인 93648.047476가 점(.)을 포함하여 총 12자리로 가장 길기에 선택되었다.
표현방식이 달라지면 다른 값을 선택해야 한다.
End of explanation
print("%012f"% pi)
print("%012f"% pi**3)
print("%012f"% pi**10)
print("%012e"% pi)
print("%012e"% pi**3)
print("%012e"% pi**10)
print("%012g"% pi)
print("%012g"% pi**3)
print("%012g"% pi**10)
print("%016.10f"% pi)
print("%016.10f"% pi**3)
print("%016.10f"% pi**10)
print("%016.10e"% pi)
print("%016.10e"% pi**3)
print("%016.10e"% pi**10)
print("%016.10g"% pi)
print("%016.10g"% pi**3)
print("%016.10g"% pi**10)
Explanation: 비어 있는 자리를 숫자 0으로 채울 수도 있다.
End of explanation
print("%12.20f" % pi**19)
Explanation: 자릿수는 계산결과를 예상하여 결정해야 한다.
아래의 경우는 자릿수를 너무 작게 무시한다.
End of explanation
print("{}{}{}".format(a, b, 10))
print("{:s}{:s}{:d}".format(a, b, 10))
print("{:f}".format(pi))
print("{:f}".format(pi**3))
print("{:f}".format(pi**10))
print("{:12f}".format(pi))
print("{:12f}".format(pi**3))
print("{:12f}".format(pi**10))
print("{:012f}".format(pi))
print("{:012f}".format(pi**3))
print("{:012f}".format(pi**10))
Explanation: format 함수 사용하여 문자열 인쇄하기
format 함수를 이용하여 서식지정자(%)를 사용한 결과를 동일하게 구할 수 있다.
End of explanation
print("{2}{1}{0}".format(a, b, 10))
Explanation: format 함수는 인덱싱 기능까지 지원한다.
End of explanation
print("{s1}{s2}{s1}".format(s1=a, s2=b, i1=10))
print("{i1}{s2}{s1}".format(s1=a, s2=b, i1=10))
Explanation: 인덱싱을 위해 키워드를 사용할 수도 있다.
End of explanation
print("{1:12f}, {0:12f}".format(pi, pi**3))
print("{p1:12f}, {p0:12f}".format(p0=pi, p1=pi**3))
Explanation: 인덱싱과 서식지정자를 함께 사용할 수 있다
End of explanation
a = 3.141592
print(a)
a.__str__()
str(a)
b = [2, 3.5, ['school', 'bus'], (1,2)]
str(b)
Explanation: % 연산자를 사용하는 방식과 format 함수를 사용하는 방식 중에 어떤 방식을 선택할지는 경우에 따라 다르다.
% 방식은 C, Java 등에서 일반적으로 지원되는 방식이다.
반면에 format 방식은 좀 더 다양한 활용법을 갖고 있다.
고급 기술
str 함수와 repr 함수에 대해 알아둘 필요가 있다.
str 함수
파이썬에서 사용하는 모든 객체는 __str__ 메소드를 갖고 있으며, 이 메소드는 해당 객체를 보여주는 데에 사용된다. 예를 들어 print 명령어를 사용하면 무조건 해당 객체의 __str__ 메소드가 호출되어 사용된다.
또한 str 함수가 있어서 동일한 역할을 수행한다.
End of explanation
type(str(b))
Explanation: str 함수는 문자열값을 리턴한다.
End of explanation
print(b)
"%s" % b
"{}".format(b)
Explanation: 앞서 언급한 대로 __str__ 메소드는 print 함수뿐만 아니라 서식을 사용할 때도 기본적으로 사용된다.
End of explanation
c = str(pi)
pi1 = eval(c)
pi1
Explanation: repr 함수
__str__ 와 더불어 __repr__ 또한 모든 파이썬 객체의 메소드로 존재한다.
__str__와 비슷한 기능을 수행하지만 __str__ 보다 정확한 값을 사용한다.
eval 함수와 사용하여 본래의 값을 되살리는 기능을 제공한다.
repr 함수를 사용하면 해당 객체의 __repr__ 메소드가 호출된다.
End of explanation
pi1 - pi
Explanation: 하지만 pi1을 이용하여 원주율 pi 값을 되살릴 수는 없다.
End of explanation
pi2 = repr(pi)
type(pi2)
eval(pi2) - pi
Explanation: repr 함수는 eval 함수를 이용하여 pi 값을 되살릴 수 있다.
End of explanation
"%s" % pi
"%r" % pi
"%d" % pi
Explanation: repr 함수를 활용하는 서식지정자는 %r 이다.
End of explanation |
9,035 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<br>
Weighted kernel density estimation to quickly reproduce the profile of a diffractometer
<br>
<br>
This example shows a work-arround for a quick visualization of a diffractorgram (similar to experimental powder diffractograms) from ImageD11 ".flt" or ".new" columnfile containing peaks information.
It is basically a probability density function (pdf) of the $2\theta$ position of the peak, which is weighted by the peak intensity.
<br>The smoothing of such gaussian kde is decided by the bandwidht value.
Weighted kde
Step1: Loading and visualizing the input data
Step2: Plotting the diffraction profile
Step3: The profile showed above is highly smoothed and the hkl peaks are merged.<br>
$\to$ A Smaller bandwidth should be used.
Choosing the right bandwidth of the estimator
The bandwidth can be passed as argument to the gaussian_kde() object or set afterward using the later set_badwidth() method. For example, the bandwidth can be reduced by a factor of 100 with respect to its previous value | Python Code:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from ImageD11.columnfile import columnfile
from ImageD11 import weighted_kde as wkde
%matplotlib inline
plt.rcParams['figure.figsize'] = (6,4)
plt.rcParams['figure.dpi'] = 150
plt.rcParams['mathtext.fontset'] = 'cm'
plt.rcParams['font.size'] = 12
Explanation: <br>
Weighted kernel density estimation to quickly reproduce the profile of a diffractometer
<br>
<br>
This example shows a work-arround for a quick visualization of a diffractorgram (similar to experimental powder diffractograms) from ImageD11 ".flt" or ".new" columnfile containing peaks information.
It is basically a probability density function (pdf) of the $2\theta$ position of the peak, which is weighted by the peak intensity.
<br>The smoothing of such gaussian kde is decided by the bandwidht value.
Weighted kde : The original Scipy gaussian kde was modified by Till Hoffmann to allow for heterogeneous sampling weights.
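For readers without ImageD11 at hand: scipy.stats.gaussian_kde has accepted a weights argument natively since SciPy 1.2, so an intensity-weighted profile can be sketched along the same lines as below. This is an alternative sketch with stand-in data, not the code used in this notebook.
# Rough equivalent using SciPy >= 1.2 (alternative to ImageD11's weighted_kde):
from scipy.stats import gaussian_kde
import numpy as np
tth = np.random.uniform(5, 25, 5000)        # stand-in for the peaks' two-theta values
intensity = np.random.lognormal(size=5000)  # stand-in for the peak intensities
profile = gaussian_kde(tth, weights=intensity, bw_method=0.01)
x = np.linspace(tth.min(), tth.max(), 500)
y = profile(x)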
<br>
End of explanation
# read the peaks
flt = columnfile('sma_261N.flt.new')
# peaks indexed to phase 1
phase1 = flt.copy()
phase1.filter( phase1.labels > -1 )
# unindexed peaks (phase 2 + unindexed phase 1?)
phase2 = flt.copy()
phase2.filter( phase2.labels == -1 )
#plot radial transform for phase 1
plt.plot( phase1.tth_per_grain, phase1.eta_per_grain, 'x')
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ \eta \, (\degree) $' )
plt.title( r'$Diffraction \, angles$' )
Explanation: Loading and visualizing the input data
End of explanation
# Probability density function (pdf) of 2theta
# weighted by the peak intensity and using default 2theta bandwidth
I_phase1 = phase1.sum_intensity * phase1.Lorentz_per_grain
pdf = wkde.gaussian_kde( phase1.tth_per_grain, weights = I_phase1)
# Plotting it over 2theta range
x = np.linspace( min(flt.tth), max(flt.tth), 500 )
y = pdf(x)
plt.plot(x, y)
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ I $' )
plt.yticks([])
plt.title( ' With bandwidth = %.3f'%pdf.factor )
Explanation: Plotting the diffraction profile
End of explanation
pdf_phase1 = wkde.gaussian_kde( phase1.tth, weights = phase1.sum_intensity )
pdf_phase2 = wkde.gaussian_kde( phase2.tth, weights = phase2.sum_intensity )
frac_phase1 = np.sum( phase1.sum_intensity ) / np.sum( flt.sum_intensity )
frac_phase2 = np.sum( phase2.sum_intensity ) / np.sum( flt.sum_intensity )
from ipywidgets import interact
bw_range = ( 0.001, pdf_phase1.factor/3, 0.001)
@interact( bandwidth = bw_range)
def plot_pdf(bandwidth):
pdf_phase1.set_bandwidth(bandwidth)
pdf_phase2.set_bandwidth(bandwidth)
y_phase1 = pdf_phase1(x)
y_phase2 = pdf_phase2(x)
plt.plot( x, frac_phase1 * y_phase1, label = r'$Phase \, 1$' )
plt.plot( x, frac_phase2 * y_phase2, label = r'$Phase \, 2$' )
plt.legend(loc='best')
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ I $' )
plt.yticks([])
plt.title( r'$ 3DXRD \, diffractogram $' )
Explanation: The profile shown above is highly smoothed and the hkl peaks are merged.<br>
$\to$ A Smaller bandwidth should be used.
Choosing the right bandwidth of the estimator
The bandwidth can be passed as an argument to the gaussian_kde() object or set afterwards using the set_bandwidth() method. For example, the bandwidth can be reduced by a factor of 100 with respect to its previous value:
Python
gaussian_kde().set_bandwidth( gaussian_kde().factor / 100 )
End of explanation |
9,036 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to the jupyter notebook! To run any cell, press Shit+Enter or Ctrl+Enter.
IMPORTANT
Step1: Notebook Basics
A cell contains any type of python inputs (expression, function definitions, etc...). Running a cell is equivalent to input this block in the python interpreter. The notebook will print the output of the last executed line.
Step2: Numpy Basics
IMPORTANT
Step3: Creation of arrays
Creating ndarrays (np.zeros, np.ones) is done by giving the shape as an iterable (List or Tuple). An integer is also accepted for one-dimensional array.
np.eye creates an identity matrix.
You can also create an array by giving iterables to it.
(NB
Step4: ndarray basics
A ndarray python object is just a reference to the data location and its characteristics.
All numpy operations applying on an array can be called np.function(a) or a.function() (i.e np.sum(a) or a.sum())
It has an attribute shape that returns a tuple of the different dimensions of the ndarray. It also has an attribute dtype that describes the type of data of the object (default type is float64)
WARNING because of the object structure, unless you call copy() copying the reference is not copying the data.
Step5: Basic operators are working element-wise (+, -, *, /)
When trying to apply operators for arrays with different sizes, they are very specific rules that you might want to understand in the future
Step6: Accessing elements and slicing
For people uncomfortable with the slicing of arrays, please have a look at the 'Indexing and Slicing' section of http
Step7: Changing the shape of arrays
ravel creates a flattened view of an array (1-D representation) whereas flatten creates flattened copy of the array.
reshape allows in-place modification of the shape of the data. transpose shuffles the dimensions.
np.newaxis allows the creation of empty dimensions.
Step8: Reduction operations
Reduction operations (np.sum, np.max, np.min, np.std) work on the flattened ndarray by default. You can specify the reduction axis as an argument
Step9: Linear-algebra operations
Step10: Grouping operations
Grouping operations (np.stack, np.hstack, np.vstack, np.concatenate) take an iterable of ndarrays and not ndarrays as separate arguments
Step11: Working on subset of the elements
We have two ways in order to apply operations on subparts of arrays (besides slicing).
Slicing reminders
Step12: Binary masks
Using logical operations on arrays give a binary mask. Using a binary mask as indexing acts as a filter and outputs just the very elements where the value is True. This gives a memoryview of the array that can get modified.
Step13: Working with indices
The second way to work on subpart of arrays are through indices. Usually you'd use one array per dimension with matching indices.
WARNING
Step14: Working with arrays, examples
Thanks to all these tools, you should be able to avoid writing almost any for-loops which are extremely costly in Python (even more than in Matlab, because good JIT engines are yet to come). In case you really need for-loops for array computation (usually not needed but it happens) have a look at http
Step15: Compute polynomial for a lot of values
Step16: Scipy
Scipy is a collection of libraries more specialized than Numpy. It is the equivalent of toolboxes in Matlab.
Have a look at their collection | Python Code:
# Useful starting lines
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
%load_ext autoreload
%autoreload 2
Explanation: Welcome to the jupyter notebook! To run any cell, press Shit+Enter or Ctrl+Enter.
IMPORTANT : Please have a look at Help->User Interface Tour and Help->Keyboard Shortcuts in the toolbar above that will help you get started.
End of explanation
1
x = [2,3,4]
def my_function(l):
l.append(12)
my_function(x)
x
# Matplotlib is used for plotting, plots are directly embedded in the
# notebook thanks to the '%matplolib inline' command at the beginning
plt.hist(np.random.randn(10000), bins=40)
plt.xlabel('X label')
plt.ylabel('Y label')
Explanation: Notebook Basics
A cell contains any type of python inputs (expression, function definitions, etc...). Running a cell is equivalent to input this block in the python interpreter. The notebook will print the output of the last executed line.
End of explanation
np.multiply
Explanation: Numpy Basics
IMPORTANT : the numpy documentation is quite good. The Notebook system is really good to help you. Use the Auto-Completion with Tab, and use Shift+Tab to get the complete documentation about the current function (when the cursor is between the parenthesis of the function for instance).
For example, you want to multiply two arrays. np.mul + Tab complete to the only valid function np.multiply. Then using Shift+Tab you learn np.multiply is actually the element-wise multiplication and is equivalent to the * operator.
End of explanation
np.zeros(4)
np.eye(3)
np.array([[1,3,4],[2,5,6]])
np.arange(10) # NB : np.array(range(10)) is a slightly more complicated equivalent
np.random.randn(3, 4) # normal distributed values
# 3-D tensor
tensor_3 = np.ones((2, 4, 2))
tensor_3
Explanation: Creation of arrays
Creating ndarrays (np.zeros, np.ones) is done by giving the shape as an iterable (List or Tuple). An integer is also accepted for one-dimensional array.
np.eye creates an identity matrix.
You can also create an array by giving iterables to it.
(NB : The random functions np.random.rand and np.random.randn are exceptions though)
End of explanation
tensor_3.shape, tensor_3.dtype
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = np.array([[4, 3], [2, 1]])
(b.dtype, a.dtype) # each array has a data type (casting rules apply for int -> float)
np.array(["Mickey", "Mouse"]) # can hold more than just numbers
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = a # Copying the reference only
b[0,0] = 3
a
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = a.copy() # Deep-copy of the data
b[0,0] = 3
a
Explanation: ndarray basics
A ndarray python object is just a reference to the data location and its characteristics.
All numpy operations applying on an array can be called np.function(a) or a.function() (i.e np.sum(a) or a.sum())
It has an attribute shape that returns a tuple of the different dimensions of the ndarray. It also has an attribute dtype that describes the type of data of the object (default type is float64)
WARNING because of the object structure, unless you call copy() copying the reference is not copying the data.
End of explanation
np.ones((2, 4)) * np.random.randn(2, 4)
np.eye(3) - np.ones((3,3))
print(a)
print(a.shape) # Get shape
print(a.shape[0]) # Get size of first dimension
Explanation: Basic operators are working element-wise (+, -, *, /)
When trying to apply operators for arrays with different sizes, there are very specific rules that you might want to understand in the future : http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
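As a minimal illustration of those broadcasting rules (shapes are aligned from the right and size-1 axes are stretched):
# Broadcasting sketch: (3, 1) and (4,) combine into (3, 4)
import numpy as np
col = np.arange(3).reshape(3, 1)  # shape (3, 1)
row = np.arange(4)                # shape (4,)
print((col + row).shape)          # (3, 4): the size-1 axis is repeated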
End of explanation
print(a[0]) # Get first line (slice for the first dimension)
print(a[:, 1]) # Get second column (slice for the second dimension)
print(a[0, 1]) # Get first line second column element
Explanation: Accessing elements and slicing
For people uncomfortable with the slicing of arrays, please have a look at the 'Indexing and Slicing' section of http://www.python-course.eu/numpy.php
End of explanation
a = np.array([[1.0, 2.0], [5.0, 4.0]])
b = np.array([[4, 3], [2, 1]])
v = np.array([0.5, 2.0])
print(a)
print(a.T) # Equivalent : a.tranpose(), np.transpose(a)
print(a.ravel())
c = np.random.randn(4,5)
print(c.shape)
print(c[np.newaxis].shape) # Adding a dimension
print(c.T.shape)
print(c.reshape([10,2]).shape)
print(c)
print(c.reshape([10,2]))
a.reshape((-1, 1)) # a[-1] means 'whatever needs to go there'
Explanation: Changing the shape of arrays
ravel creates a flattened view of an array (1-D representation) whereas flatten creates flattened copy of the array.
reshape allows in-place modification of the shape of the data. transpose shuffles the dimensions.
np.newaxis allows the creation of empty dimensions.
End of explanation
np.sum(a), np.sum(a, axis=0), np.sum(a, axis=1) # reduce-operations reduce the whole array if no axis is specified
Explanation: Reduction operations
Reduction operations (np.sum, np.max, np.min, np.std) work on the flattened ndarray by default. You can specify the reduction axis as an argument
End of explanation
np.dot(a, b) # matrix multiplication
# Other ways of writing matrix multiplication, the '@' operator for matrix multiplication
# was introduced in Python 3.5
np.allclose(a.dot(b), a @ b)
# For other linear algebra operations, use the np.linalg module
np.linalg.eig(a) # Eigen-decomposition
print(np.linalg.inv(a)) # Inverse
np.allclose(np.linalg.inv(a) @ a, np.identity(a.shape[1])) # a^-1 * a = Id
np.linalg.solve(a, v) # solves ax = v
Explanation: Linear-algebra operations
End of explanation
np.hstack([a, b])
np.vstack([a, b])
np.vstack([a, b]) + v # broadcasting
np.hstack([a, b]) + v # does not work
np.hstack([a, b]) + v.T # transposing a 1-D array achieves nothing
np.hstack([a, b]) + v.reshape((-1, 1)) # reshaping to convert v from a (2,) vector to a (2,1) matrix
np.hstack([a, b]) + v[:, np.newaxis] # equivalently, we can add an axis
Explanation: Grouping operations
Grouping operations (np.stack, np.hstack, np.vstack, np.concatenate) take an iterable of ndarrays and not ndarrays as separate arguments : np.concatenate([a,b]) and not np.concatenate(a,b).
End of explanation
r = np.random.random_integers(0, 9, size=(3, 4))
r
r[0], r[1]
r[0:2]
r[1][2] # regular python
r[1, 2] # numpy
r[:, 1:3]
Explanation: Working on subset of the elements
We have two ways in order to apply operations on subparts of arrays (besides slicing).
Slicing reminders
End of explanation
r > 5 # Binary element-wise result
r[r > 5] # Use the binary mask as filter
r[r > 5] = 999 # Modify the corresponding values with a constant
r
Explanation: Binary masks
Using logical operations on arrays give a binary mask. Using a binary mask as indexing acts as a filter and outputs just the very elements where the value is True. This gives a memoryview of the array that can get modified.
End of explanation
# Get the indices where the condition is true, gives a tuple whose length
# is the number of dimensions of the input array
np.where(r == 999)
print(np.where(np.arange(10) < 5)) # Is a 1-tuple
np.where(np.arange(10) < 5)[0] # Accessing the first element gives the indices array
np.where(r == 999, -10, r+1000) # Ternary condition, if True take element from first array, otherwise from second
r[(np.array([1,2]), np.array([2,2]))] # Gets the view corresponding to the indices. NB : iterable of arrays as indexing
Explanation: Working with indices
The second way to work on subpart of arrays are through indices. Usually you'd use one array per dimension with matching indices.
WARNING : indices are usually slower than binary masks because it is harder to be parallelized by the underlying BLAS library.
End of explanation
numbers = np.random.randn(1000, 1000)
%%timeit # Naive version
my_sum = 0
for n in numbers.ravel():
if n>0:
my_sum += n
%timeit np.sum(numbers > 0)
Explanation: Working with arrays, examples
Thanks to all these tools, you should be able to avoid writing almost any for-loops which are extremely costly in Python (even more than in Matlab, because good JIT engines are yet to come). In case you really need for-loops for array computation (usually not needed but it happens) have a look at http://numba.pydata.org/ (For advanced users)
Counting the number of positive elements that satisfy a condition
End of explanation
X = np.random.randn(10000)
%%timeit # Naive version
my_result = np.zeros(len(X))
for i, x in enumerate(X.ravel()):
my_result[i] = 1 + x + x**2 + x**3 + x**4
%timeit 1 + X + X**2 + X**3 + X**4
Explanation: Compute polynomial for a lot of values
End of explanation
X = np.random.randn(1000)
from scipy.fftpack import fft
plt.plot(fft(X).real)
Explanation: Scipy
Scipy is a collection of libraries more specialized than Numpy. It is the equivalent of toolboxes in Matlab.
Have a look at their collection : http://docs.scipy.org/doc/scipy-0.18.0/reference/
Many traditional functions are coded there.
End of explanation |
9,037 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Customer migration from Prestashop to Woocommerce part 1
Step1: Loading the data from Prestashop backend and sql. Note that Prestashop backend can generate some table for you by a default function. We copy and paste into excel (the file will have .xlsx extension). For our comfortable, we use this table as a starter then merge with the other missing information from sql.
Step2: Select only interest column.
Step3: Sort id_customer ascending.
Step4: In one customer may has many id_address. This depend on how many times they change the address on frontend. We create "link_id_and_address" dataframe to link "id_customer" and "id_address" in order to find the unique id_customer with max id_address first (max id is the lastest address available).
Step5: We group "link_id_and_address" by "id_customer", so we have the unique "id_customer".
Step6: Then we merge all dataframe into "information" dataframe
Step7: Change name of column in order to merge.
Step8: Prestashop stores full name country. We must change to iso country format to use in Woocommerce. We view how many country first.
Step9: Load iso country reference in to a dic.
Step10: If country column has nan value, it will cause an error. We view the country column first. Are there any nan value? Then temporary fill it with '-'.
Step11: Changing some country names don't match with name in country dic.
Step12: Changing to iso country code. Fill '-' back with nan value. Then check for nan value again. The number of nan value must equal to the previous check.
Step13: In new Woocommerce store, we have 3 groups of user. That is 18+, vendor and customer. The old Prestashop customers we have 7 group. The groups we want to keep are group 6 (vendors), 4 (18+) and the rest is 3 (customers).
For our comfortable running the for loop, we change the max group number (7) to 0.
Step14: Now the maximum group number is 6. We group customer by "id_customer" by max group number.
Step15: Still have group number lower than 3. We set them to be 3.
Step16: Now we only have customer in 3 groups
1) Group 6 Vendors
2) Group 4 18+
3) Group 3 Customers
Then we merge "group" to infromation dataframe.
Step17: Check for nan value in "id_group" and fill it with '3'. | Python Code:
import pandas as pd
import numpy as np
import csv
Explanation: Customer migration from Prestashop to Woocommerce part 1 : Gathering raw customer information from Prestashop
This is the product migration of the book store from Prestashop to Woocommerce format. Unlike the product, users (customers in Woocommerce are called users) don't have an import function. We must import the data via sql. Woocommerce users database structure consists of 2 sql table "wp_users" and "wp_usermeta".
1) "wp_users" is about the name, username and password of the users
2) "wp_usermeta" is about the detail of users such as billing address, group of users etc.
Our objective here is to collect the data from Prestashop, manipulate them in the form of Woocommerce.
End of explanation
#Read data from excel
#data from prestashop backend
customer_from_backend = pd.read_excel('sql_prestashop/customer_detail.xlsx')
address_from_backend = pd.read_excel('sql_prestashop/customer_address.xlsx')
#data from mysql database
ps_address = pd.read_csv('sql_prestashop/ps_address.csv', index_col=False)
ps_customer_group = pd.read_csv('sql_prestashop/ps_customer_group.csv')
Explanation: Loading the data from Prestashop backend and sql. Note that Prestashop backend can generate some table for you by a default function. We copy and paste into excel (the file will have .xlsx extension). For our comfortable, we use this table as a starter then merge with the other missing information from sql.
End of explanation
#Select only use column
ps_address = ps_address[['id_address', 'id_customer', 'other', 'phone', 'phone_mobile', 'vat_number', 'date_add', 'date_upd', 'active', 'deleted']]
address_from_backend = address_from_backend.drop(['firstname','lastname'], axis=1)
Explanation: Select only interest column.
End of explanation
#customer_from_backend is not ascending
customer_from_backend = customer_from_backend.sort_values('id_customer')
Explanation: Sort id_customer ascending.
End of explanation
link_id_and_address = pd.DataFrame()
link_id_and_address['id_customer'] = ps_address['id_customer']
link_id_and_address['id_address'] = ps_address['id_address']
#sort "id_customer".
link_id_and_address = link_id_and_address.sort_values('id_customer')
Explanation: In one customer may has many id_address. This depend on how many times they change the address on frontend. We create "link_id_and_address" dataframe to link "id_customer" and "id_address" in order to find the unique id_customer with max id_address first (max id is the lastest address available).
End of explanation
#Assume max id_address to be a lastest address update, so we use groupby to
#find lastest id_address
link_id_and_address = link_id_and_address.groupby('id_customer', as_index=False).max()
Explanation: We group "link_id_and_address" by "id_customer", so we have the unique "id_customer".
End of explanation
#merge
information = pd.merge(link_id_and_address, customer_from_backend, how='left', on='id_customer')
information = pd.merge(information, address_from_backend, how='left', on='id_address')
information = pd.merge(information, ps_address, how='left', on='id_address')
Explanation: Then we merge all dataframe into "information" dataframe
End of explanation
#Merge column with the same name will generate _x or _y follow the column name
#So rename it first for further use.
information = information.rename(columns = {'id_customer_x':'id_customer'})
Explanation: Change name of column in order to merge.
End of explanation
#Changing country in to woocommerce format
count_country = information['country'].value_counts()
Explanation: Prestashop stores the full country name. We must convert it to the ISO country code format used by Woocommerce. First, we check which countries appear.
End of explanation
#Load iso country reference data from Wikipedia
dic = {}
with open("sql_prestashop/wikipedia-iso-country-codes.csv") as f:
file= csv.DictReader(f, delimiter=',')
for line in file:
dic[line['English short name lower case']] = line['Alpha-2 code']
Explanation: Load iso country reference in to a dic.
End of explanation
#country column has nan value. change it to other country, avoid the error.
#count null country
information['country'].isnull().sum()
#temporary fill it with '-'
information['country'] = information['country'].fillna('-')
Explanation: If country column has nan value, it will cause an error. We view the country column first. Are there any nan value? Then temporary fill it with '-'.
End of explanation
#Add dic for '-'
dic['-'] = '-'
#Changing some dics that don't match the iso code
dic['Taiwan'] = dic['Taiwan, Province of China']
dic['South Korea'] = dic["Korea, Republic of"]
dic['HongKong'] = dic['Hong Kong']
dic['Vietnam'] = dic['Viet Nam']
dic['Korea, Dem. Republic of'] = dic["Korea, Democratic People's Republic of"]
dic['Macau'] = dic["Macao"]
dic['Brunei'] = dic["Brunei Darussalam"]
Explanation: Fix some country names that don't match the names in the country dict.
End of explanation
#Change to iso code
information['country'] = [dic[x] for x in information['country']]
#Replace '-' back with NaN values
information['country'] = information['country'].replace('-',np.NaN)
#Check number of nan value again
information['country'].isnull().sum()
Explanation: Convert to ISO country codes, then turn the '-' placeholders back into NaN. Check the number of NaN values again; it must equal the previous count.
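For comparison only (not to be run after the list comprehension above): pandas' Series.map would do this conversion in one step and leave unmatched or missing names as NaN by itself, so the temporary '-' placeholder would not be needed.
#Hedged alternative sketch - map() leaves unmapped names and NaN as NaN automatically
#information['country'] = information['country'].map(dic)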
End of explanation
#Clean data for ps_customer group
#Group 6 is first priority then 4
#First change group 7 to 0 in order to make group 6 is the max value
for x in range(ps_customer_group.shape[0]):
if ps_customer_group['id_group'].iloc[x] == 7:
ps_customer_group['id_group'].iloc[x] = 0
Explanation: In the new Woocommerce store we have 3 groups of users: 18+, vendor and customer. The old Prestashop customers are spread over 7 groups. The groups we want to keep are group 6 (vendors) and group 4 (18+); the rest become group 3 (customers).
To make the for loop easier, we first change the maximum group number (7) to 0, so that group 6 becomes the highest value.
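The same relabelling can also be done without an explicit loop; a hedged vectorized sketch:
#Vectorized equivalent of the loop above
ps_customer_group['id_group'] = ps_customer_group['id_group'].replace(7, 0)
#The later group-2 fix could be done the same way: group['id_group'] = group['id_group'].replace(2, 3)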
End of explanation
#Find max value of group id
group = ps_customer_group.groupby('id_customer', as_index=False).max()
Explanation: Now the maximum group number is 6. We group the customers by "id_customer", taking the maximum group number for each.
End of explanation
#Still have group 2. Change it to group 3
for x in range(group.shape[0]):
if group['id_group'].iloc[x] == 2:
group['id_group'].iloc[x] = 3
Explanation: Some customers still have a group number lower than 3 (e.g. group 2). We set them to group 3.
End of explanation
#merge again
information = pd.merge(information, group,\
how='left', on='id_customer')
Explanation: Now we only have customers in 3 groups:
1) Group 6 Vendors
2) Group 4 18+
3) Group 3 Customers
Then we merge "group" to infromation dataframe.
End of explanation
#Check for nan value
information['id_group'].isnull().sum()
#Fill nan with 3
information['id_group'] = information['id_group'].fillna(3)
#save to .csv
information.to_csv('customer_import_to_woo/raw_information.csv', encoding='utf-8')
Explanation: Check for NaN values in "id_group", fill them with 3, and save the result to CSV.
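One extra step worth considering, added here as a hedged note: after the left merge and fillna, id_group is stored as a float (e.g. 3.0), so casting it back to an integer before the Woocommerce import keeps the CSV clean.
#Optional: cast id_group back to integer after filling the NaN values
information['id_group'] = information['id_group'].astype(int)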
End of explanation |
9,038 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 13.3. External Mixture
Is Required
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step60: 14.2. Shortwave Bands
Is Required
Step61: 14.3. Longwave Bands
Is Required
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step63: 15.2. Twomey
Is Required
Step64: 15.3. Twomey Minimum Ccn
Is Required
Step65: 15.4. Drizzle
Is Required
Step66: 15.5. Cloud Lifetime
Is Required
Step67: 15.6. Longwave Bands
Is Required
Step68: 16. Model
Aerosol model
16.1. Overview
Is Required
Step69: 16.2. Processes
Is Required
Step70: 16.3. Coupling
Is Required
Step71: 16.4. Gas Phase Precursors
Is Required
Step72: 16.5. Scheme Type
Is Required
Step73: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2h', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC-ES2H
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping framework used by the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three-dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two-dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosperic aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
9,039 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Notebook 2
Our current dataset suffers from duplicate tweet's from bots, hacked accounts etc.
As such, this notebook will show you how to deal with these duplicates in a manner that does not take $O(N^{2})$.
Traditionally, the way to search for duplicates would be to wrte some code with nested for loops that passess over the list twice, comparing one entry to every other entry in the list. While this approach would work - it is incredibly slow for a list with a large number of elements. In our case, our list is just over 1 million elements long.
Furthermore, this approach only really works when you are looking for EXACT duplicates. This will not do for our case as bots will re-tweet the original tweet with either a different URL link or slightly different formatting. Thus, while a human would be able to tell that the tweet is practically identical, a computer would not.
One possible solution to this to use a form of fuzzy matching, that is, if two strings are similar enough to pass a threshold of, say, 0.9 (levenstein distance), then we can assume they are duplicate tweets. While this is a fantastic approach, it is still $O(N^{2})$.
In this notebook, I present a different, more nuanced approach!
Step1: We see from the above code, that I have removed duplicates by creating a tuple set of the words that are in the tweet after having removed the URL's, punctuation etc.
This way we get to utilise the power of pandas to drop rows that contain duplicate tweets.
It is important to note that this is NOT a fool proof way to drop ALL duplciates. However, I am confident that I will drop the majority!
Step2: Note | Python Code:
import pandas as pd
import arrow # way better than datetime
import numpy as np
import random
import re
%run helper_functions.py
import string
new_df = unpickle_object("new_df.pkl") # this loads up the dataframe from our previous notebook
new_df.head() #sorted first on date and then time!
new_df.iloc[0, 3]
#we need to remove all links in a tweet!
regex = r"http\S+"
subset = ""
removed_links = list(map(lambda x: re.sub(regex, subset, x), list(new_df['tweet'])))
removed_links = list(map(str.strip, removed_links))
new_df['tweet'] = removed_links
new_df.iloc[0, 3] # we can see here that the link has been removed!
new_df.iloc[1047748, [1, 3]] #example of duplicate enttry - different handles, same tweets
new_df.iloc[1047749, [1, 3]]
#this illustrates only one example of duplicates in the data!
duplicate_indicies = []
for index, value in enumerate(new_df.index):
if "Multiplayer #Poker" in new_df.iloc[value, 3]:
duplicate_indicies.append(index)
new_df.iloc[duplicate_indicies, [1,3]]
tweet_list = list(new_df['tweet']) #lets first make a list of the tweets we need to remove duplicates from
string.punctuation
remove_punctuaton = '!"$%&\'()*+,-./:;<=>?@[\\]“”^_`{|}~' # same as string.punctuation, but without # - I want hashtags!
set_list = []
clean_tweet_list = []
translator = str.maketrans('', '', remove_punctuaton) #very fast punctuation remover!
for word in tweet_list:
list_form = word.split() #turns the word into a list
to_process = [x for x in list_form if not x.startswith("@")] #removes handles
to_process_2 = [x for x in to_process if not x.startswith("RT")] #removed retweet indicator
string_form = " ".join(to_process_2) #back into a string
set_form = set(string_form.translate(translator).strip().lower().split()) #this is the magic!
clean_tweet_list.append(string_form.translate(translator).strip().lower())
set_list.append(tuple(set_form)) #need to make it a tuple so it's hashable!
new_df['tuple_version_tweet'] = set_list
new_df['clean_tweet_V1'] = clean_tweet_list
new_df.head()
new_df.iloc[1047748, 4] # we have extracted the core text from the tweets! YAY!
new_df.iloc[1047749, 4]
new_df.iloc[1047748, 4] == new_df.iloc[1047748, 4] #this is perfect!
new_df.shape #dimensions before duplicate removal!
test_df = new_df.drop_duplicates(subset='tuple_version_tweet', keep="first") #keep the first occurence
#otherwise drop rows that have matching tuples!
Explanation: Notebook 2
Our current dataset suffers from duplicate tweet's from bots, hacked accounts etc.
As such, this notebook will show you how to deal with these duplicates in a manner that does not take $O(N^{2})$.
Traditionally, the way to search for duplicates would be to wrte some code with nested for loops that passess over the list twice, comparing one entry to every other entry in the list. While this approach would work - it is incredibly slow for a list with a large number of elements. In our case, our list is just over 1 million elements long.
Furthermore, this approach only really works when you are looking for EXACT duplicates. This will not do for our case as bots will re-tweet the original tweet with either a different URL link or slightly different formatting. Thus, while a human would be able to tell that the tweet is practically identical, a computer would not.
One possible solution to this to use a form of fuzzy matching, that is, if two strings are similar enough to pass a threshold of, say, 0.9 (levenstein distance), then we can assume they are duplicate tweets. While this is a fantastic approach, it is still $O(N^{2})$.
In this notebook, I present a different, more nuanced approach!
End of explanation
#lets use the example from before! - it only occurs once now!
for index, value in enumerate(test_df.iloc[:, 3]):
if "Multiplayer #Poker" in value:
print(test_df.iloc[index, [1,3]])
new_df.shape
test_df.shape
((612644-1049878)/1049878)*100 #41% reduction!
Explanation: We see from the above code, that I have removed duplicates by creating a tuple set of the words that are in the tweet after having removed the URL's, punctuation etc.
This way we get to utilise the power of pandas to drop rows that contain duplicate tweets.
It is important to note that this is NOT a fool proof way to drop ALL duplciates. However, I am confident that I will drop the majority!
End of explanation
pickle_object(test_df, "no_duplicates_df")
Explanation: Note: I added a column called clean_tweet_V1. These are the tweets stripped of punctuation. These will be very useful for our NLP process later on when it comes to lemmatization.
End of explanation |
9,040 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Learning to Resize in Computer Vision
Author
Step1: Define hyperparameters
In order to facilitate mini-batch learning, we need to have a fixed shape for the images
inside a given batch. This is why an initial resizing is required. We first resize all
the images to (300 x 300) shape and then learn their optimal representation for the
(150 x 150) resolution.
Step2: In this example, we will use the bilinear interpolation but the learnable image resizer
module is not dependent on any specific interpolation method. We can also use others,
such as bicubic.
Load and prepare the dataset
For this example, we will only use 40% of the total training dataset.
Step3: Define the learnable resizer utilities
The figure below (courtesy
Step4: Visualize the outputs of the learnable resizing module
Here, we visualize how the resized images would look like after being passed through the
random weights of the resizer.
Step5: Model building utility
Step6: The structure of the learnable image resizer module allows for flexible integrations with
different vision models.
Compile and train our model with learnable resizer
Step7: Visualize the outputs of the trained visualizer | Python Code:
from tensorflow.keras import layers
from tensorflow import keras
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
import matplotlib.pyplot as plt
import numpy as np
Explanation: Learning to Resize in Computer Vision
Author: Sayak Paul<br>
Date created: 2021/04/30<br>
Last modified: 2021/05/13<br>
Description: How to optimally learn representations of images for a given resolution.
It is a common belief that if we constrain vision models to perceive things as humans do,
their performance can be improved. For example, in this work,
Geirhos et al. showed that the vision models pre-trained on the ImageNet-1k dataset are
biased toward texture whereas human beings mostly use the shape descriptor to develop a
common perception. But does this belief always apply especially when it comes to improving
the performance of vision models?
It turns out it may not always be the case. When training vision models, it is common to
resize images to a lower dimension ((224 x 224), (299 x 299), etc.) to allow mini-batch
learning and also to keep up the compute limitations. We generally make use of image
resizing methods like bilinear interpolation for this step and the resized images do
not lose much of their perceptual character to the human eyes. In
Learning to Resize Images for Computer Vision Tasks, Talebi et al. show
that if we try to optimize the perceptual quality of the images for the vision models
rather than the human eyes, their performance can further be improved. They investigate
the following question:
For a given image resolution and a model, how to best resize the given images?
As shown in the paper, this idea helps to consistently improve the performance of the
common vision models (pre-trained on ImageNet-1k) like DenseNet-121, ResNet-50,
MobileNetV2, and EfficientNets. In this example, we will implement the learnable image
resizing module as proposed in the paper and demonstrate that on the
Cats and Dogs dataset
using the DenseNet-121 architecture.
This example requires TensorFlow 2.4 or higher.
Setup
End of explanation
INP_SIZE = (300, 300)
TARGET_SIZE = (150, 150)
INTERPOLATION = "bilinear"
AUTO = tf.data.AUTOTUNE
BATCH_SIZE = 64
EPOCHS = 5
Explanation: Define hyperparameters
In order to facilitate mini-batch learning, we need to have a fixed shape for the images
inside a given batch. This is why an initial resizing is required. We first resize all
the images to (300 x 300) shape and then learn their optimal representation for the
(150 x 150) resolution.
End of explanation
train_ds, validation_ds = tfds.load(
"cats_vs_dogs",
# Reserve 10% for validation
split=["train[:40%]", "train[40%:50%]"],
as_supervised=True,
)
def preprocess_dataset(image, label):
image = tf.image.resize(image, (INP_SIZE[0], INP_SIZE[1]))
label = tf.one_hot(label, depth=2)
return (image, label)
train_ds = (
train_ds.shuffle(BATCH_SIZE * 100)
.map(preprocess_dataset, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
validation_ds = (
validation_ds.map(preprocess_dataset, num_parallel_calls=AUTO)
.batch(BATCH_SIZE)
.prefetch(AUTO)
)
Explanation: In this example, we will use the bilinear interpolation but the learnable image resizer
module is not dependent on any specific interpolation method. We can also use others,
such as bicubic.
Load and prepare the dataset
For this example, we will only use 40% of the total training dataset.
End of explanation
def conv_block(x, filters, kernel_size, strides, activation=layers.LeakyReLU(0.2)):
x = layers.Conv2D(filters, kernel_size, strides, padding="same", use_bias=False)(x)
x = layers.BatchNormalization()(x)
if activation:
x = activation(x)
return x
def res_block(x):
inputs = x
x = conv_block(x, 16, 3, 1)
x = conv_block(x, 16, 3, 1, activation=None)
return layers.Add()([inputs, x])
def get_learnable_resizer(filters=16, num_res_blocks=1, interpolation=INTERPOLATION):
inputs = layers.Input(shape=[None, None, 3])
# First, perform naive resizing.
naive_resize = layers.Resizing(
*TARGET_SIZE, interpolation=interpolation
)(inputs)
# First convolution block without batch normalization.
x = layers.Conv2D(filters=filters, kernel_size=7, strides=1, padding="same")(inputs)
x = layers.LeakyReLU(0.2)(x)
# Second convolution block with batch normalization.
x = layers.Conv2D(filters=filters, kernel_size=1, strides=1, padding="same")(x)
x = layers.LeakyReLU(0.2)(x)
x = layers.BatchNormalization()(x)
# Intermediate resizing as a bottleneck.
bottleneck = layers.Resizing(
*TARGET_SIZE, interpolation=interpolation
)(x)
# Residual passes.
for _ in range(num_res_blocks):
x = res_block(bottleneck)
# Projection.
x = layers.Conv2D(
filters=filters, kernel_size=3, strides=1, padding="same", use_bias=False
)(x)
x = layers.BatchNormalization()(x)
# Skip connection.
x = layers.Add()([bottleneck, x])
# Final resized image.
x = layers.Conv2D(filters=3, kernel_size=7, strides=1, padding="same")(x)
final_resize = layers.Add()([naive_resize, x])
return tf.keras.Model(inputs, final_resize, name="learnable_resizer")
learnable_resizer = get_learnable_resizer()
Explanation: Define the learnable resizer utilities
The figure below (courtesy: Learning to Resize Images for Computer Vision Tasks)
presents the structure of the learnable resizing module:
End of explanation
sample_images, _ = next(iter(train_ds))
plt.figure(figsize=(16, 10))
for i, image in enumerate(sample_images[:6]):
image = image / 255
ax = plt.subplot(3, 4, 2 * i + 1)
plt.title("Input Image")
plt.imshow(image.numpy().squeeze())
plt.axis("off")
ax = plt.subplot(3, 4, 2 * i + 2)
resized_image = learnable_resizer(image[None, ...])
plt.title("Resized Image")
plt.imshow(resized_image.numpy().squeeze())
plt.axis("off")
Explanation: Visualize the outputs of the learnable resizing module
Here, we visualize how the resized images would look like after being passed through the
random weights of the resizer.
End of explanation
def get_model():
backbone = tf.keras.applications.DenseNet121(
weights=None,
include_top=True,
classes=2,
input_shape=((TARGET_SIZE[0], TARGET_SIZE[1], 3)),
)
backbone.trainable = True
inputs = layers.Input((INP_SIZE[0], INP_SIZE[1], 3))
x = layers.Rescaling(scale=1.0 / 255)(inputs)
x = learnable_resizer(x)
outputs = backbone(x)
return tf.keras.Model(inputs, outputs)
Explanation: Model building utility
End of explanation
model = get_model()
model.compile(
loss=keras.losses.CategoricalCrossentropy(label_smoothing=0.1),
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(train_ds, validation_data=validation_ds, epochs=EPOCHS)
Explanation: The structure of the learnable image resizer module allows for flexible integrations with
different vision models.
Compile and train our model with learnable resizer
End of explanation
plt.figure(figsize=(16, 10))
for i, image in enumerate(sample_images[:6]):
image = image / 255
ax = plt.subplot(3, 4, 2 * i + 1)
plt.title("Input Image")
plt.imshow(image.numpy().squeeze())
plt.axis("off")
ax = plt.subplot(3, 4, 2 * i + 2)
resized_image = learnable_resizer(image[None, ...])
plt.title("Resized Image")
plt.imshow(resized_image.numpy().squeeze() / 10)
plt.axis("off")
Explanation: Visualize the outputs of the trained resizer
End of explanation |
9,041 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: ユニバーサルセンテンスエンコーダー Lite の実演
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: TF-Hub からモジュールを読み込む
Step3: TF-Hub モジュールから SentencePiece モデルを読み込む
SentencePiece モデルは、モジュールのアセットに格納されています。プロセッサを初期化するには、このモデルが読み込まれている必要があります。
Step4: いくつかの例を使ってモジュールをテストする
Step5: セマンティックテキストの類似性(STS)タスクの例
ユニバーサルセンテンスエンコーダーによって生成される埋め込みは、おおよそ正規化されています。2 つの文章の意味的類似性は、エンコーディングの内積として簡単に計算することができます。
Step6: 類似性の視覚化
ここでは、ヒートマップに類似性を表示します。最終的なグラフは 9x9 の行列で、各エントリ [i, j] は、文章 i と j のエンコーディングの内積に基づいて色付けされます。
Step7: 評価
Step8: 評価グラフの構築
Step10: 文章埋め込みの評価 | Python Code:
# Copyright 2018 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2018 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
# Install seaborn for pretty visualizations
!pip3 install --quiet seaborn
# Install SentencePiece package
# SentencePiece package is needed for Universal Sentence Encoder Lite. We'll
# use it for all the text processing and sentence feature ID lookup.
!pip3 install --quiet sentencepiece
from absl import logging
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
import tensorflow_hub as hub
import sentencepiece as spm
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import re
import seaborn as sns
Explanation: Universal Sentence Encoder Lite demo
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a> </td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a> </td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/semantic_similarity_with_tf_hub_universal_encoder_lite.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a> </td>
<td> <a href="https://tfhub.dev/google/universal-sentence-encoder-lite/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">TF Hub モデルを参照</a> </td>
</table>
This Colab illustrates how to use the Universal Sentence Encoder Lite for a sentence similarity task. This module is very similar to the Universal Sentence Encoder, with the only difference that you need to run SentencePiece processing on your input sentences.
The Universal Sentence Encoder makes getting sentence-level embeddings as easy as it has historically been to look up the embeddings for individual words. The sentence embeddings can then be used not only to compute sentence-level semantic similarity, but also to improve performance on downstream classification tasks using less supervised training data.
Getting started
Setup
End of explanation
module = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-lite/2")
input_placeholder = tf.sparse_placeholder(tf.int64, shape=[None, None])
encodings = module(
inputs=dict(
values=input_placeholder.values,
indices=input_placeholder.indices,
dense_shape=input_placeholder.dense_shape))
Explanation: Load the module from TF-Hub
End of explanation
with tf.Session() as sess:
spm_path = sess.run(module(signature="spm_path"))
sp = spm.SentencePieceProcessor()
with tf.io.gfile.GFile(spm_path, mode="rb") as f:
sp.LoadFromSerializedProto(f.read())
print("SentencePiece model loaded at {}.".format(spm_path))
def process_to_IDs_in_sparse_format(sp, sentences):
# An utility method that processes sentences with the sentence piece processor
# 'sp' and returns the results in tf.SparseTensor-similar format:
# (values, indices, dense_shape)
ids = [sp.EncodeAsIds(x) for x in sentences]
max_len = max(len(x) for x in ids)
dense_shape=(len(ids), max_len)
values=[item for sublist in ids for item in sublist]
indices=[[row,col] for row in range(len(ids)) for col in range(len(ids[row]))]
return (values, indices, dense_shape)
Explanation: Load the SentencePiece model from the TF-Hub module
The SentencePiece model is conveniently stored inside the module's assets. It has to be loaded in order to initialize the processor.
End of explanation
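As a quick sanity check that the processor is ready, a minimal sketch (the example sentence below is arbitrary; EncodeAsIds and DecodeIds are standard SentencePiece processor methods):
python
example = "The quick brown fox jumps over the lazy dog."
ids = sp.EncodeAsIds(example)   # sentence -> list of SentencePiece feature IDs
print(len(ids), ids[:5])        # number of pieces and the first few IDs
print(sp.DecodeIds(ids))        # round-trip back to (normalized) text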
# Compute a representation for each message, showing various lengths supported.
word = "Elephant"
sentence = "I am a sentence for which I would like to get its embedding."
paragraph = (
"Universal Sentence Encoder embeddings also support short paragraphs. "
"There is no hard limit on how long the paragraph is. Roughly, the longer "
"the more 'diluted' the embedding will be.")
messages = [word, sentence, paragraph]
values, indices, dense_shape = process_to_IDs_in_sparse_format(sp, messages)
# Reduce logging output.
logging.set_verbosity(logging.ERROR)
with tf.Session() as session:
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
message_embeddings = session.run(
encodings,
feed_dict={input_placeholder.values: values,
input_placeholder.indices: indices,
input_placeholder.dense_shape: dense_shape})
for i, message_embedding in enumerate(np.array(message_embeddings).tolist()):
print("Message: {}".format(messages[i]))
print("Embedding size: {}".format(len(message_embedding)))
message_embedding_snippet = ", ".join(
(str(x) for x in message_embedding[:3]))
print("Embedding: [{}, ...]\n".format(message_embedding_snippet))
Explanation: Test the module with a few examples
End of explanation
def plot_similarity(labels, features, rotation):
corr = np.inner(features, features)
sns.set(font_scale=1.2)
g = sns.heatmap(
corr,
xticklabels=labels,
yticklabels=labels,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(labels, rotation=rotation)
g.set_title("Semantic Textual Similarity")
def run_and_plot(session, input_placeholder, messages):
values, indices, dense_shape = process_to_IDs_in_sparse_format(sp,messages)
message_embeddings = session.run(
encodings,
feed_dict={input_placeholder.values: values,
input_placeholder.indices: indices,
input_placeholder.dense_shape: dense_shape})
plot_similarity(messages, message_embeddings, 90)
Explanation: Semantic Textual Similarity (STS) task example
The embeddings produced by the Universal Sentence Encoder are approximately normalized. The semantic similarity of two sentences can be trivially computed as the inner product of their encodings.
End of explanation
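As a minimal illustration of that point, using the message_embeddings computed in the earlier cell, the score for a single pair is just an inner product; for exactly normalized vectors it coincides with the cosine similarity:
python
e1, e2 = np.array(message_embeddings[0]), np.array(message_embeddings[1])
inner = np.inner(e1, e2)                                      # approximate cosine similarity
cosine = inner / (np.linalg.norm(e1) * np.linalg.norm(e2))    # exact cosine similarity
print(inner, cosine)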
messages = [
# Smartphones
"I like my phone",
"My phone is not good.",
"Your cellphone looks great.",
# Weather
"Will it snow tomorrow?",
"Recently a lot of hurricanes have hit the US",
"Global warming is real",
# Food and health
"An apple a day, keeps the doctors away",
"Eating strawberries is healthy",
"Is paleo better than keto?",
# Asking about age
"How old are you?",
"what is your age?",
]
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
run_and_plot(session, input_placeholder, messages)
Explanation: Visualizing the similarity
Here we show the similarity in a heatmap. The final graph is an 11x11 matrix (one row and one column per sentence) where each entry [i, j] is colored based on the inner product of the encodings for sentences i and j.
End of explanation
import pandas
import scipy
import math
def load_sts_dataset(filename):
# Loads a subset of the STS dataset into a DataFrame. In particular both
# sentences and their human rated similarity score.
sent_pairs = []
with tf.gfile.GFile(filename, "r") as f:
for line in f:
ts = line.strip().split("\t")
# (sent_1, sent_2, similarity_score)
sent_pairs.append((ts[5], ts[6], float(ts[4])))
return pandas.DataFrame(sent_pairs, columns=["sent_1", "sent_2", "sim"])
def download_and_load_sts_data():
sts_dataset = tf.keras.utils.get_file(
fname="Stsbenchmark.tar.gz",
origin="http://ixa2.si.ehu.es/stswiki/images/4/48/Stsbenchmark.tar.gz",
extract=True)
sts_dev = load_sts_dataset(
os.path.join(os.path.dirname(sts_dataset), "stsbenchmark", "sts-dev.csv"))
sts_test = load_sts_dataset(
os.path.join(
os.path.dirname(sts_dataset), "stsbenchmark", "sts-test.csv"))
return sts_dev, sts_test
sts_dev, sts_test = download_and_load_sts_data()
Explanation: Evaluation: STS (Semantic Textual Similarity) Benchmark
The STS Benchmark provides an intrinsic evaluation of the degree to which similarity scores computed using sentence embeddings align with human judgements. The benchmark requires systems to return similarity scores for a diverse selection of sentence pairs. Pearson correlation is then used to evaluate the quality of the machine similarity scores against human judgements.
Download data
End of explanation
sts_input1 = tf.sparse_placeholder(tf.int64, shape=(None, None))
sts_input2 = tf.sparse_placeholder(tf.int64, shape=(None, None))
# For evaluation we use exactly normalized rather than
# approximately normalized.
sts_encode1 = tf.nn.l2_normalize(
module(
inputs=dict(values=sts_input1.values,
indices=sts_input1.indices,
dense_shape=sts_input1.dense_shape)),
axis=1)
sts_encode2 = tf.nn.l2_normalize(
module(
inputs=dict(values=sts_input2.values,
indices=sts_input2.indices,
dense_shape=sts_input2.dense_shape)),
axis=1)
sim_scores = -tf.acos(tf.reduce_sum(tf.multiply(sts_encode1, sts_encode2), axis=1))
Explanation: Build the evaluation graph
End of explanation
#@title Choose dataset for benchmark
dataset = sts_dev #@param ["sts_dev", "sts_test"] {type:"raw"}
values1, indices1, dense_shape1 = process_to_IDs_in_sparse_format(sp, dataset['sent_1'].tolist())
values2, indices2, dense_shape2 = process_to_IDs_in_sparse_format(sp, dataset['sent_2'].tolist())
similarity_scores = dataset['sim'].tolist()
def run_sts_benchmark(session):
"""Returns the similarity scores."""
scores = session.run(
sim_scores,
feed_dict={
sts_input1.values: values1,
sts_input1.indices: indices1,
sts_input1.dense_shape: dense_shape1,
sts_input2.values: values2,
sts_input2.indices: indices2,
sts_input2.dense_shape: dense_shape2,
})
return scores
with tf.Session() as session:
session.run(tf.global_variables_initializer())
session.run(tf.tables_initializer())
scores = run_sts_benchmark(session)
pearson_correlation = scipy.stats.pearsonr(scores, similarity_scores)
print('Pearson correlation coefficient = {0}\np-value = {1}'.format(
pearson_correlation[0], pearson_correlation[1]))
Explanation: Evaluate the sentence embeddings
End of explanation |
9,042 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Delayed Extra Sources in NuPyCEE
Created by Benoit Côté
This notebook introduces the general delayed-extra set of parameters in NuPyCEE that allows to include any enrichment source that requires a delay-time distribution function (DTD). For example, this implementation can be used for different SNe Ia channels, for compact binary mergers, and for exotic production sites involving interactions between stellar objects.
This notebook focuses on SYGMA, but the implementation can also be used with OMEGA.
1. Implementation
Here are the basic simple stellar population (SSP) inputs that need to be provided
Step1: 2.1 One deyaled extra source
Step2: Let's test the total number of events and the total mass ejected by those events.
Step3: Let's plot the DTD
Step4: 2.2 Two deyaled extra sources | Python Code:
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
from NuPyCEE import sygma
Explanation: Delayed Extra Sources in NuPyCEE
Created by Benoit Côté
This notebook introduces the general delayed-extra set of parameters in NuPyCEE that allows to include any enrichment source that requires a delay-time distribution function (DTD). For example, this implementation can be used for different SNe Ia channels, for compact binary mergers, and for exotic production sites involving interactions between stellar objects.
This notebook focuses on SYGMA, but the implementation can also be used with OMEGA.
1. Implementation
Here are the basic simple stellar population (SSP) inputs that need to be provided:
- The DTD function (rate of occurence as a function of time),
- The total number of events per unit of M$_\odot$ formed,
- The yields (abundance pattern) associated with an event,
- The total mass ejected by an event (which will multiply the yields).
Here is how these inputs are implemented in NuPyCEE (see below for examples):
delayed_extra_dtd[ nb_sources ][ nb_Z ]
nb_sources is the number of different input astrophysical site (e.g., SNe Ia, neutron star mergers, iRAWDs, etc.).
nb_Z is the number of available metallicities available in the delayed-extra yields table.
delayed_extra_dtd[i][j] is a 2D array in the form of [ number_of_times ][ 0-time, 1-rate ].
The fact that we use 2D arrays provides maximum flexibility. We can then use analytical formulas, population synthesis predictions, outputs from simulations, etc..
delayed_extra_dtd_norm[ nb_sources ][ nb_Z ]
Total number of delayed sources occurring per M$_\odot$ formed.
delayed_extra_yields[ nb_sources ]
Yields table path (string) for each source.
There is no [ nb_Z ] since the yields table can contain many metallicities.
delayed_extra_yields_norm[ nb_sources ][ nb_Z ]
Fraction (float) of the yield table that will be ejected per event, for each source and metallicity. This is the total mass ejected per event if the yields are in mass fraction (normalized to 1).
2. Example with SYGMA
End of explanation
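Before the worked example below, here is an illustrative sketch (not taken from the NuPyCEE documentation) of how an analytic DTD, e.g. the t^-1 shape often quoted for SNe Ia, could be sampled onto the [time, rate] 2D-array format described above. Only the shape matters, since delayed_extra_dtd_norm rescales the total number of events, and the time bounds chosen here are arbitrary:
python
import numpy as np
times = np.logspace(np.log10(4.0e7), np.log10(1.0e10), 30)   # 40 Myr to 10 Gyr (illustrative bounds)
rates = times**(-1.0)                                        # t^-1 shape; normalization is irrelevant here
dtd_powerlaw = [[t_i, r_i] for t_i, r_i in zip(times, rates)]
delayed_extra_dtd_powerlaw = [[dtd_powerlaw]]                # one source, one metallicity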
# Create the DTD and yields information for the extra source
# ==========================================================
# Event rate [yr^-1] as a function of time [yr].
# Times need to be in order. No event will occurs before the lowest time and after the largest time.
# The code will interpolate the data point provided (in linear or in log-log space).
t = [0.0, 1.0e9, 2.0e9] # Can be any length.
R = [1.0, 3.0, 2.0] # This is only the shape of the DTD, as it will be re-normalized.
# Build the input DTD array
dtd = []
for i in range(0,len(t)):
dtd.append([t[i], R[i]])
# Add the DTD array in the delayed_extra_dtd array.
delayed_extra_dtd = [[dtd]]
# [[ ]] for the indexes for the number of sources (here 1) and metallicities (here 1)
# Define the total number of event per unit of Msun formed. This will normalize the DTD.
delayed_extra_dtd_norm = [[1.0e-1]]
# [[ ]] for the indexes for the number of sources (here 1) and metallicities (here 1)
# Define the yields path for the extra source
delayed_extra_yields = ['yield_tables/r_process_arnould_2007.txt']
# [ ] and not [[ ]] because the nb_Z is in the yields table as in SN Ia yields
# See yield_tables/sn1a_ivo12_stable_z.txt for an example of such yields template.
# Define the total mass ejected by an extra source
delayed_extra_yields_norm = [[1.0e-3]]
# Run SYGMA, one SSP with a total mass of 1 Msun at Z = 0.02
mgal = 1.0
s1 = sygma.sygma(iniZ=0.02, delayed_extra_dtd=delayed_extra_dtd, delayed_extra_dtd_norm=delayed_extra_dtd_norm,\
delayed_extra_yields=delayed_extra_yields, delayed_extra_yields_norm=delayed_extra_yields_norm, mgal=mgal,\
dt=1e8, special_timesteps=-1, tend=1.1*t[-1])
Explanation: 2.1 One delayed extra source
End of explanation
# Predicted number of events
N_pred = delayed_extra_dtd_norm[0][0] * mgal
# Predicted mass ejected by events
M_ej_pred = N_pred * delayed_extra_yields_norm[0][0]
# Calculated number of events
N_sim = sum(s1.delayed_extra_numbers[0])
# Calculated mass ejected by events
M_ej_sim = sum(s1.ymgal_delayed_extra[0][-1])
# Print the test
print ('The following numbers should be 1.0')
print (' Number of events (predicted/calculated):', N_pred / N_sim)
print (' Mass ejected by events (predicted/calculated):', M_ej_pred / M_ej_sim)
Explanation: Let's test the total number of events and the total mass ejected by those events.
End of explanation
%matplotlib nbagg
plt.plot(s1.history.age[1:], np.array(s1.delayed_extra_numbers[0])/s1.history.timesteps)
plt.xlim(0, 1.1*t[-1])
plt.xlabel('Time [yr]', fontsize=12)
plt.ylabel('Rate [event yr$^{-1}$]', fontsize=12)
Explanation: Let's plot the DTD
End of explanation
# Create the DTD and yields information for the second extra source
# =================================================================
# Event rate [yr^-1] as a function of time [yr].
t2 = [1.4e9, 1.6e9, 1.8e9]
R2 = [4.0, 1.0, 4.0]
# Build the input DTD array
dtd2 = []
for i in range(0,len(t2)):
dtd2.append([t2[i], R2[i]])
# Add the DTD array in the delayed_extra_dtd array.
delayed_extra_dtd = [[dtd],[dtd2]]
# Define the total number of events per unit of Msun formed. This will normalize the DTD.
delayed_extra_dtd_norm = [[1.0e-2],[5.0e-3]]
# Define the yields path for the extra source
delayed_extra_yields = ['yield_tables/r_process_arnould_2007.txt','yield_tables/r_process_arnould_2007.txt']
# Define the total mass ejected by an extra source
delayed_extra_yields_norm = [[1.0e-3],[2.0e-3]]
# Run SYGMA, one SSP with a total mass of 1 Msun at Z = 0.02
mgal = 1.0
s2 = sygma.sygma(iniZ=0.02, delayed_extra_dtd=delayed_extra_dtd, delayed_extra_dtd_norm=delayed_extra_dtd_norm,\
delayed_extra_yields=delayed_extra_yields, delayed_extra_yields_norm=delayed_extra_yields_norm, mgal=mgal,\
dt=1e8, special_timesteps=-1, tend=1.1*t[-1])
%matplotlib nbagg
plt.plot(s2.history.age[1:], np.array(s2.delayed_extra_numbers[0])/s2.history.timesteps)
plt.plot(s2.history.age[1:], np.array(s2.delayed_extra_numbers[1])/s2.history.timesteps)
plt.xlim(0, 1.1*t[-1])
plt.xlabel('Time [yr]', fontsize=12)
plt.ylabel('Rate [event yr$^{-1}$]', fontsize=12)
Explanation: 2.2 Two delayed extra sources
End of explanation |
9,043 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
$\newcommand{\G}{\mathcal{G}}$
$\newcommand{\V}{\mathcal{V}}$
$\newcommand{\E}{\mathcal{E}}$
$\newcommand{\R}{\mathbb{R}}$
This notebook shows how to apply our graph ConvNet (paper & code), or any other, to your structured or unstructured data. For this example, we assume that we have $n$ samples $x_i \in \R^{d_x}$ arranged in a data matrix $$X = [x_1, ..., x_n]^T \in \R^{n \times d_x}.$$ Each sample $x_i$ is associated with a vector $y_i \in \R^{d_y}$ for a regression task or a label $y_i \in {0,\ldots,C}$ for a classification task.
From there, we'll structure our data with a graph $\G = (\V, \E, A)$ where $\V$ is the set of $d_x = |\V|$ vertices, $\E$ is the set of edges and $A \in \R^{d_x \times d_x}$ is the adjacency matrix. That matrix represents the weight of each edge, i.e. $A_{i,j}$ is the weight of the edge connecting $v_i \in \V$ to $v_j \in \V$. The weights of that feature graph thus represent pairwise relationships between features $i$ and $j$. We call that regime signal classification / regression, as the samples $x_i$ to be classified or regressed are graph signals.
Other modelling possibilities include
Step1: 1 Data
For the purpose of the demo, let's create a random data matrix $X \in \R^{n \times d_x}$ and somehow infer a label $y_i = f(x_i)$.
Step2: Then split this dataset into training, validation and testing sets.
Step3: 2 Graph
The second thing we need is a graph between features, i.e. an adjacency matrix $A \in \mathbb{R}^{d_x \times d_x}$.
Structuring data with graphs is very flexible
Step4: To be able to pool graph signals, we need first to coarsen the graph, i.e. to find which vertices to group together. At the end we'll have multiple graphs, like a pyramid, each at one level of resolution. The finest graph is where the input data lies, the coarsest graph is where the data at the output of the graph convolutional layers lie. That data, of reduced spatial dimensionality, can then be fed to a fully connected layer.
The parameter here is the number of times to coarsen the graph. Each coarsening approximately reduces the size of the graph by a factor two. Thus if you want a pooling of size 4 in the first layer followed by a pooling of size 2 in the second, you'll need to coarsen $\log_2(4+2) = 3$ times.
After coarsening we rearrange the vertices (and add fake vertices) such that pooling a graph signal is analog to pooling a 1D signal. See the paper for details.
Step5: We finally need to compute the graph Laplacian $L$ for each of our graphs (the original and the coarsened versions), defined by their adjacency matrices $A$. The sole parameter here is the type of Laplacian, e.g. the combinatorial Laplacian, the normalized Laplacian or the random walk Laplacian.
Step6: 3 Graph ConvNet
Here we apply the graph convolutional neural network to signals lying on graphs. After designing the architecture and setting the hyper-parameters, the model takes as inputs the data matrix $X$, the target $y$ and a list of graph Laplacians $L$, one per coarsening level.
The data, architecture and hyper-parameters are absolutely not engineered to showcase performance. Its sole purpose is to illustrate usage and functionality.
Step7: 4 Evaluation
We often want to monitor | Python Code:
from lib import models, graph, coarsening, utils
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Introduction
$\newcommand{\G}{\mathcal{G}}$
$\newcommand{\V}{\mathcal{V}}$
$\newcommand{\E}{\mathcal{E}}$
$\newcommand{\R}{\mathbb{R}}$
This notebook shows how to apply our graph ConvNet (paper & code), or any other, to your structured or unstructured data. For this example, we assume that we have $n$ samples $x_i \in \R^{d_x}$ arranged in a data matrix $$X = [x_1, ..., x_n]^T \in \R^{n \times d_x}.$$ Each sample $x_i$ is associated with a vector $y_i \in \R^{d_y}$ for a regression task or a label $y_i \in {0,\ldots,C}$ for a classification task.
From there, we'll structure our data with a graph $\G = (\V, \E, A)$ where $\V$ is the set of $d_x = |\V|$ vertices, $\E$ is the set of edges and $A \in \R^{d_x \times d_x}$ is the adjacency matrix. That matrix represents the weight of each edge, i.e. $A_{i,j}$ is the weight of the edge connecting $v_i \in \V$ to $v_j \in \V$. The weights of that feature graph thus represent pairwise relationships between features $i$ and $j$. We call that regime signal classification / regression, as the samples $x_i$ to be classified or regressed are graph signals.
Other modelling possibilities include:
1. Using a data graph, i.e. an adjacency matrix $A \in \R^{n \times n}$ which represents pairwise relationships between samples $x_i \in \R^{d_x}$. The problem is here to predict a graph signal $y \in \R^{n \times d_y}$ given a graph characterized by $A$ and some graph signals $X \in \R^{n \times d_x}$. We call that regime node classification / regression, as we classify or regress nodes instead of signals.
2. Another problem of interest is whole graph classification, with or without signals on top. We'll call that third regime graph classification / regression. The problem here is to classify or regress a whole graph $A_i \in \R^{n \times n}$ (with or without an associated data matrix $X_i \in \R^{n \times d_x}$) into $y_i \in \R^{d_y}$. In case we have no signal, we can use a constant vector $X_i = 1_n$ of size $n$.
End of explanation
d = 100 # Dimensionality.
n = 10000 # Number of samples.
c = 5 # Number of feature communities.
# Data matrix, structured in communities (feature-wise).
X = np.random.normal(0, 1, (n, d)).astype(np.float32)
X += np.linspace(0, 1, c).repeat(d // c)
# Noisy non-linear target.
w = np.random.normal(0, .02, d)
t = X.dot(w) + np.random.normal(0, .001, n)
t = np.tanh(t)
plt.figure(figsize=(15, 5))
plt.plot(t, '.')
# Classification.
y = np.ones(t.shape, dtype=np.uint8)
y[t > t.mean() + 0.4 * t.std()] = 0
y[t < t.mean() - 0.4 * t.std()] = 2
print('Class imbalance: ', np.unique(y, return_counts=True)[1])
Explanation: 1 Data
For the purpose of the demo, let's create a random data matrix $X \in \R^{n \times d_x}$ and somehow infer a label $y_i = f(x_i)$.
End of explanation
n_train = n // 2
n_val = n // 10
X_train = X[:n_train]
X_val = X[n_train:n_train+n_val]
X_test = X[n_train+n_val:]
y_train = y[:n_train]
y_val = y[n_train:n_train+n_val]
y_test = y[n_train+n_val:]
Explanation: Then split this dataset into training, validation and testing sets.
End of explanation
dist, idx = graph.distance_scipy_spatial(X_train.T, k=10, metric='euclidean')
A = graph.adjacency(dist, idx).astype(np.float32)
assert A.shape == (d, d)
print('d = |V| = {}, k|V| < |E| = {}'.format(d, A.nnz))
plt.spy(A, markersize=2, color='black');
Explanation: 2 Graph
The second thing we need is a graph between features, i.e. an adjacency matrix $A \in \mathbb{R}^{d_x \times d_x}$.
Structuring data with graphs is very flexible: it can accomodate both structured and unstructured data.
1. Structured data.
1. The data is structured by an Euclidean domain, e.g. $x_i$ represents an image, a sound or a video. We can use a classical ConvNet with 1D, 2D or 3D convolutions or a graph ConvNet with a line or grid graph (however losing the orientation).
2. The data is structured by a graph, e.g. the data lies on a transportation, energy, brain or social network.
2. Unstructured data. We could use a fully connected network, but the learning and computational complexities are gonna be large. An alternative is to construct a sparse similarity graph between features (or between samples) and use a graph ConvNet, effectively structuring the data and drastically reducing the number of parameters through weight sharing. As for classical ConvNets, the number of parameters are independent of the input size.
There are many ways, supervised or unsupervised, to construct a graph given some data. And better the graph, better the performance ! For this example we'll define the adjacency matrix as a simple similarity measure between features. Below are the choices one has to make when constructing such a graph.
1. The distance function. We'll use the Euclidean distance $d_{ij} = \|x_i - x_j\|_2$.
2. The kernel. We'll use the Gaussian kernel $a_{ij} = \exp(-d_{ij}^2 / \sigma^2)$, which maps small distances to weights near one and large distances to weights near zero (a small numerical sketch follows after this list).
3. The type of graph. We'll use a $k$ nearest neighbors (kNN) graph.
End of explanation
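A small numerical sketch of the kernel step, using the dist array returned above (graph.adjacency encapsulates the conversion from distances to a sparse weighted adjacency; the bandwidth heuristic below is only illustrative):
python
sigma2 = np.mean(dist[:, -1])**2        # illustrative bandwidth: mean distance to the k-th neighbor
weights = np.exp(-dist**2 / sigma2)     # Gaussian kernel: small distance -> weight near 1
print(weights.shape, weights.min(), weights.max())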
graphs, perm = coarsening.coarsen(A, levels=3, self_connections=False)
X_train = coarsening.perm_data(X_train, perm)
X_val = coarsening.perm_data(X_val, perm)
X_test = coarsening.perm_data(X_test, perm)
Explanation: To be able to pool graph signals, we need first to coarsen the graph, i.e. to find which vertices to group together. At the end we'll have multiple graphs, like a pyramid, each at one level of resolution. The finest graph is where the input data lies, the coarsest graph is where the data at the output of the graph convolutional layers lie. That data, of reduced spatial dimensionality, can then be fed to a fully connected layer.
The parameter here is the number of times to coarsen the graph. Each coarsening approximately reduces the size of the graph by a factor two. Thus if you want a pooling of size 4 in the first layer followed by a pooling of size 2 in the second, you'll need to coarsen $\log_2(4 \cdot 2) = 3$ times.
After coarsening we rearrange the vertices (and add fake vertices) such that pooling a graph signal is analogous to pooling a 1D signal. See the paper for details.
End of explanation
L = [graph.laplacian(A, normalized=True) for A in graphs]
graph.plot_spectrum(L)
Explanation: We finally need to compute the graph Laplacian $L$ for each of our graphs (the original and the coarsened versions), defined by their adjacency matrices $A$. The sole parameter here is the type of Laplacian, e.g. the combinatorial Laplacian, the normalized Laplacian or the random walk Laplacian.
End of explanation
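For reference, a sketch of the standard normalized Laplacian definition, $L = I - D^{-1/2} A D^{-1/2}$ with $D$ the diagonal degree matrix, written out with scipy (this is what normalized=True conventionally refers to; the library call above is what the model actually consumes):
python
import numpy as np
import scipy.sparse
degrees = np.asarray(A.sum(axis=1)).squeeze()                  # vertex degrees
d_inv_sqrt = scipy.sparse.diags(1.0 / np.sqrt(degrees + 1e-12))
identity = scipy.sparse.identity(A.shape[0], format='csr')
L_normalized = identity - d_inv_sqrt.dot(A).dot(d_inv_sqrt)    # normalized graph Laplacian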
params = dict()
params['dir_name'] = 'demo'
params['num_epochs'] = 40
params['batch_size'] = 100
params['eval_frequency'] = 200
# Building blocks.
params['filter'] = 'chebyshev5'
params['brelu'] = 'b1relu'
params['pool'] = 'apool1'
# Number of classes.
C = y.max() + 1
assert C == np.unique(y).size
# Architecture.
params['F'] = [32, 64] # Number of graph convolutional filters.
params['K'] = [20, 20] # Polynomial orders.
params['p'] = [4, 2] # Pooling sizes.
params['M'] = [512, C] # Output dimensionality of fully connected layers.
# Optimization.
params['regularization'] = 5e-4
params['dropout'] = 1
params['learning_rate'] = 1e-3
params['decay_rate'] = 0.95
params['momentum'] = 0.9
params['decay_steps'] = n_train / params['batch_size']
model = models.cgcnn(L, **params)
accuracy, loss, t_step = model.fit(X_train, y_train, X_val, y_val)
Explanation: 3 Graph ConvNet
Here we apply the graph convolutional neural network to signals lying on graphs. After designing the architecture and setting the hyper-parameters, the model takes as inputs the data matrix $X$, the target $y$ and a list of graph Laplacians $L$, one per coarsening level.
The data, architecture and hyper-parameters are absolutely not engineered to showcase performance. Its sole purpose is to illustrate usage and functionality.
End of explanation
fig, ax1 = plt.subplots(figsize=(15, 5))
ax1.plot(accuracy, 'b.-')
ax1.set_ylabel('validation accuracy', color='b')
ax2 = ax1.twinx()
ax2.plot(loss, 'g.-')
ax2.set_ylabel('training loss', color='g')
plt.show()
print('Time per step: {:.2f} ms'.format(t_step*1000))
res = model.evaluate(X_test, y_test)
print(res[0])
Explanation: 4 Evaluation
We often want to monitor:
1. The convergence, i.e. the training loss and the classification accuracy on the validation set.
2. The performance, i.e. the classification accuracy on the testing set (to be compared with the training set accuracy to spot overfitting).
The model_perf class in utils.py can be used to compactly evaluate multiple models.
End of explanation |
9,044 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Time and coordinates
Step1: We are going to use astropy to find out whether the Large Magellanic Cloud (LMC) is visible from a given observatory at a given time and date.
In the process we need to manipulate different coordinates and time definitions.
Step2: Let's start by getting the coordinates of the LMC
Step3: lmc_center is an instance of a class astropy.coordinates.sky_coordinate.SkyCoord
Step4: The full list of attributes and methods can be found using dir()
Step5: An optional way to initialize an object belonging to the class SkyCoord would be
python
option = SkyCoord('0h39m00', '0d53m1s', frame='icrs')
To find out whether the LMC will be visible from the observatory, we have to define
the observatory location and the time of the year.
Let's assume that we are going to observe from SALT (Southern African Large Telescope).
Step6: You can get a list of observatory locations with
Step7: We now have all the elements to compute the Altitude + Azimuth coordinates of the LMC at SALT location on November 11th 2017 at 9PM UTC. | Python Code:
import matplotlib
matplotlib.use('Agg')
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Time and coordinates
End of explanation
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import SkyCoord, EarthLocation, AltAz
Explanation: We are going to use astropy to find out whether the Large Magellanic Cloud (LMC) is visible from a given observatory at a given time and date.
In the process we need to manipulate different coordinates and time definitions.
End of explanation
lmc_center = SkyCoord.from_name('LMC')
lmc_center
Explanation: Let's start by getting the coordinates of the LMC
End of explanation
type(lmc_center)
Explanation: lmc_center is an instance of a class astropy.coordinates.sky_coordinate.SkyCoord
End of explanation
dir(lmc_center)
# To get the ra and dec we print the corresponding attribute
print(lmc_center.ra, lmc_center.dec) # units of degrees for RA
print(lmc_center.ra.hour, lmc_center.dec) # units of hours for RA
Explanation: The full list of attributes and methods can be found using dir()
End of explanation
SALT = EarthLocation.of_site("Southern African Large Telescope")
SALT.lat, SALT.lon, SALT.height
Explanation: An optional way to initialize an object belonging to the class SkyCoord would be
python
option = SkyCoord('0h39m00', '0d53m1s', frame='icrs')
To find out whether the LMC will be visible from the observatory, we have to define
the observatory location and the time of the year.
Let's assume that we are going to observe from SALT (Southern African Large Telescope).
End of explanation
time = Time('2017-11-11 21:00:00') # That's in Universal Time Coordinated!
time
Explanation: You can get a list of observatory locations with:
python
EarthLocation.get_site_names()
If your observatory is not listed in astropy you can initialize its location using
python
my_observatory = EarthLocation(lat=4.0*u.deg, lon=-75.0*u.deg, height=4000*u.m)
Now let's fix the observation date and time. We are going to use a different class for that
End of explanation
lmg_altaz = lmc_center.transform_to(AltAz(obstime=time,location=SALT))
print(lmg_altaz.az, lmg_altaz.alt)
Explanation: We now have all the elements to compute the Altitude + Azimuth coordinates of the LMC at SALT location on November 11th 2017 at 9PM UTC.
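As a final sanity check on visibility, a small follow-up sketch (the 0-degree horizon cut is an illustrative choice, not SALT's actual pointing limit):
python
above_horizon = lmg_altaz.alt > 0 * u.deg
print('LMC above the horizon at SALT:', above_horizon)
print('Altitude: {:.1f} deg, Azimuth: {:.1f} deg'.format(lmg_altaz.alt.deg, lmg_altaz.az.deg))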
End of explanation |
9,045 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LDA preprocessing
Approach 1
Treat each order as a document
Approach 2
Treat each user as a document
Treat each product as a document
Treat each aisle as a document
Treat each department as a document
Approach 3
product name
unstructured text
<font color=red>Approach 2, part 1</font>
Step1: <font color=red>Approach 2, part 2</font>
Step2: Approach 2 with <font color=red> Gensim </font>
per-word bound
Lower perplexity means a better model; perplexity $= 2^{-bound}$
A larger bound means a better model
Step3: Approach 3 requires handling unstructured text
Step4: Choosing the number of LDA topics
number of topics
Step5: <font color=red>Post-LDA topic contents</font>
Step6: Approach 2: each user's <font color=red>Embedded Topic Space</font> representation
A user maps to a topic distribution: the components are small in scale and form a distribution whose l1 norm sums to 1
A product also maps to a topic vector, but its components are large in scale, so we <font color=red>Normalize Components</font>
<font color=blue> option 1 </font>a K_u nearest-neighbor search provides candidate products
Euclidean distance
probability
<font color=blue> option 2 </font>use the embedding vector directly as features
Time Weighted?
Step7: Nearest Neighbors Search based on Approach 2
One NN search per user, finding the products closest to the user topic
Analyze the number of None predictions
avg_reorder_len, components not normalized, score with Euclidean distance: f1score
Step8: 方案2 结果评估
Step9: 方案1 每张订单的Embeded Topic Space表达 | Python Code:
# Approach 2
orders = tle.get_orders()
users_orders = tle.get_users_orders('prior')
# convert product_id to str
users_products_matrix = users_orders.groupby(['user_id'])['product_id'].apply(utils.series_to_str)
# build the vocabulary
tf = CountVectorizer(analyzer = 'word', lowercase = False, max_df=0.95, min_df=2,)
tf_matrix = tf.fit_transform(users_products_matrix.values)
tf_feature_names = tf.get_feature_names()
#with open(DATA_DIR + 'user_tf_matrix', 'wb') as f:
# pickle.dump(tf_matrix, f, pickle.HIGHEST_PROTOCOL)
products = tle.get_items('products')
aisles = tle.get_items('aisles')
departments = tle.get_items('departments')
product_a = pd.merge(products, aisles, on = ['aisle_id'], how = 'left')
product_ad = pd.merge(product_a, departments, on = ['department_id'], how = 'left')
del product_ad['aisle_id']
del product_ad['department_id']
product_ad['chain_product_name'] = product_ad['department'] + ' _ ' +\
product_ad['aisle'] + ' _ ' +\
product_ad['product_name']
tf_product_names = [] # list of product names
for pid in tf_feature_names:
tf_product_names.append(product_ad[product_ad.product_id == int(pid)]['chain_product_name'].values[0])
tf_info = pd.DataFrame({'pid':tf_feature_names, 'pname':tf_product_names})
#with open(DATA_DIR + 'user_tf_matrix_info.pkl', 'wb') as f:
# pickle.dump(tf_info, f, pickle.HIGHEST_PROTOCOL)
Explanation: LDA preprocessing
Approach 1
Treat each order as a document
Approach 2
Treat each user as a document
Treat each product as a document
Treat each aisle as a document
Treat each department as a document
Approach 3
product name
unstructured text
<font color=red>Approach 2, part 1</font>
End of explanation
orders = tle.get_orders()
users_orders = tle.get_users_orders('prior')
def pad_tf(pad, users_orders):
# convert the grouped user_id values to str
pad_users_matrix = users_orders.groupby([pad])['user_id'].apply(utils.series_to_str)
# build the vocabulary
tf = CountVectorizer(analyzer = 'word', lowercase = False, max_df=0.95, min_df=2,)
tf_matrix = tf.fit_transform(pad_users_matrix.values)
tf_feature_names = tf.get_feature_names()
tf_info = pd.DataFrame({'term_id':np.arange(len(tf_feature_names)), 'user_id':tf_feature_names})
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_tf_matrix', 'wb') as f:
pickle.dump(tf_matrix, f, pickle.HIGHEST_PROTOCOL)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_tf_info', 'wb') as f:
pickle.dump(tf_info, f, pickle.HIGHEST_PROTOCOL)
for pad in ['product_id', 'aisle_id', 'department_id']:
pad_tf(pad, users_orders)
pad = 'aisle_id'
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_tf_matrix', 'rb') as f:
a_tf_matrix = pickle.load(f)
a_tf_matrix.shape
pad = 'product_id'
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_tf_matrix', 'rb') as f:
p_tf_matrix = pickle.load(f)
p_tf_matrix.shape
Explanation: <font color=red>Approach 2, part 2</font>
End of explanation
from gensim.models.ldamulticore import LdaMulticore
from gensim import corpora
import constants
# raw data
tle = transactions.TransLogExtractor(constants.RAW_DATA_DIR, constants.FEAT_DATA_DIR)
users_orders = tle.get_users_orders('prior')
%%time
def pad_tf(pad, users_orders):
pad_user = users_orders.groupby([pad])['user_id'].apply(utils.series_to_str) # convert into str
pad_user = [doc.split() for doc in pad_user.values] # split into arrays
dictionary = corpora.Dictionary(pad_user) # create dictionary
pad_term_matrix = [dictionary.doc2bow(doc) for doc in pad_user]
return dictionary, pad_term_matrix
for pad in ['product_id', 'aisle_id', 'department_id']:
p_dict, p_term_matrix = pad_tf(pad, users_orders)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_dict', 'wb') as f:
pickle.dump(p_dict, f, pickle.HIGHEST_PROTOCOL)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_tf', 'wb') as f:
pickle.dump(p_term_matrix, f, pickle.HIGHEST_PROTOCOL)
pad = 'department_id'
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_tf', 'rb') as f:
p_term_matrix = pickle.load(f)
%%time
pad = 'product_id'
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_tf', 'rb') as f:
p_term_matrix = pickle.load(f)
for n in [10, 60, 110, 160, 210]:
print('number of topics: %d'%n)
lda = LdaMulticore.load(constants.LDA_DIR + 'p_gensim_lda_%d'%n)
print(lda.log_perplexity(p_term_matrix, total_docs=1677))
num_topic = 10
lda = LdaMulticore.load(constants.LDA_DIR + 'p_gensim_lda_%d'%num_topic)
pad = 'product_id'
pad_user = users_orders.groupby([pad])['user_id'].apply(utils.series_to_str)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_dict', 'rb') as f:
p_dict = pickle.load(f)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_tf', 'rb') as f:
p_term_matrix = pickle.load(f)
p_topics = [[v for k,v in lda.get_document_topics(p, minimum_phi_value=0, minimum_probability=0)] for p in p_term_matrix]
p_topics = pd.DataFrame(p_topics, columns = ['p_topic_%d'%n for n in range(num_topic)])
p_topics['product_id'] = pad_user.index.values
user_id = [int(token) for word_id, token in p_dict.iteritems()]
u_topics = [{k:v for k,v in lda.get_term_topics(word_id, minimum_probability=0)} for word_id, _ in p_dict.iteritems()]
u_topics = pd.DataFrame(u_topics).fillna(0)
u_topics.columns = ['u_topic_%d'%i for i in range(num_topic)]
u_topics = u_topics / u_topics.sum() # column normalization, make sure each topic sums to 1
u_topics = (u_topics.transpose() / u_topics.transpose().sum()).transpose() # row normalization
u_topics['user_id'] = user_id
with open(constants.FEAT_DATA_DIR + 'pad_p_topic.pkl', 'wb') as f:
pickle.dump(p_topics, f, pickle.HIGHEST_PROTOCOL)
with open(constants.FEAT_DATA_DIR + 'pad_p_u_topic.pkl', 'wb') as f:
pickle.dump(u_topics, f, pickle.HIGHEST_PROTOCOL)
num_topic = 10
lda = LdaMulticore.load(constants.LDA_DIR + 'a_gensim_lda_%d'%num_topic)
pad = 'aisle_id'
pad_user = users_orders.groupby([pad])['user_id'].apply(utils.series_to_str)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_dict', 'rb') as f:
p_dict = pickle.load(f)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_tf', 'rb') as f:
p_term_matrix = pickle.load(f)
p_topics = [[v for k,v in lda.get_document_topics(p, minimum_phi_value=0, minimum_probability=0)] for p in p_term_matrix]
p_topics = pd.DataFrame(p_topics, columns = ['p_topic_%d'%n for n in range(num_topic)])
p_topics['aisle_id'] = pad_user.index.values
user_id = [int(token) for word_id, token in p_dict.iteritems()]
u_topics = [{k:v for k,v in lda.get_term_topics(word_id, minimum_probability=0)} for word_id, _ in p_dict.iteritems()]
u_topics = pd.DataFrame(u_topics).fillna(0)
u_topics.columns = ['u_topic_%d'%i for i in range(num_topic)]
u_topics = u_topics / u_topics.sum() # column normalization, make sure each topic sums to 1
u_topics = (u_topics.transpose() / u_topics.transpose().sum()).transpose() # row normalization
u_topics['user_id'] = user_id
with open(constants.FEAT_DATA_DIR + 'pad_a_topic.pkl', 'wb') as f:
pickle.dump(p_topics, f, pickle.HIGHEST_PROTOCOL)
with open(constants.FEAT_DATA_DIR + 'pad_a_u_topic.pkl', 'wb') as f:
pickle.dump(u_topics, f, pickle.HIGHEST_PROTOCOL)
num_topic = 4
lda = LdaMulticore.load(constants.LDA_DIR + 'd_gensim_lda_%d'%num_topic)
pad = 'department_id'
pad_user = users_orders.groupby([pad])['user_id'].apply(utils.series_to_str)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_dict', 'rb') as f:
p_dict = pickle.load(f)
with open(constants.FEAT_DATA_DIR + pad[:-3] + '_gensim_tf', 'rb') as f:
p_term_matrix = pickle.load(f)
p_topics = [[v for k,v in lda.get_document_topics(p, minimum_phi_value=0, minimum_probability=0)] for p in p_term_matrix]
p_topics = pd.DataFrame(p_topics, columns = ['p_topic_%d'%n for n in range(num_topic)])
p_topics[pad] = pad_user.index.values
user_id = [int(token) for word_id, token in p_dict.iteritems()]
u_topics = [{k:v for k,v in lda.get_term_topics(word_id, minimum_probability=0)} for word_id, _ in p_dict.iteritems()]
u_topics = pd.DataFrame(u_topics).fillna(0)
# u_topics[4] = 0 # column 4 missing
u_topics.columns = ['u_topic_%d'%i for i in range(num_topic)]
u_topics = u_topics / u_topics.sum() # column normalization, make sure each topic sums to 1
u_topics = (u_topics.transpose() / u_topics.transpose().sum()).transpose() # row normalization
u_topics['user_id'] = user_id
with open(constants.FEAT_DATA_DIR + 'pad_d_topic.pkl', 'wb') as f:
pickle.dump(p_topics, f, pickle.HIGHEST_PROTOCOL)
with open(constants.FEAT_DATA_DIR + 'pad_d_u_topic.pkl', 'wb') as f:
pickle.dump(u_topics, f, pickle.HIGHEST_PROTOCOL)
ud = tle.craft_up_distance(filepath = ['pad_d_topic.pkl', 'pad_d_u_topic.pkl'], num_topic = 8, pad = 'department_id')
ud = tle.craft_up_distance(filepath = ['pad_a_topic.pkl', 'pad_a_u_topic.pkl'], num_topic = 10, pad = 'aisle_id')
ud = tle.craft_up_distance(filepath = ['pad_p_topic.pkl', 'pad_p_u_topic.pkl'], num_topic = 10, pad = 'product_id')
?? tle.craft_up_distance
Explanation: Approach 2 with <font color=red> Gensim </font>
per-word bound
Lower perplexity means a better model; perplexity $= 2^{-bound}$
A larger bound means a better model
End of explanation
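To make the relation explicit, a short sketch repeating the log_perplexity call from above and converting the returned per-word bound into a perplexity (assuming lda and p_term_matrix as loaded above):
python
bound = lda.log_perplexity(p_term_matrix, total_docs=1677)   # per-word bound (higher is better)
perplexity = 2 ** (-bound)                                   # perplexity (lower is better)
print(bound, perplexity)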
#方案3
order_product_names = pd.merge(order_products_all[['order_id', 'product_id']],
products[['product_id', 'product_name']],
on = ['product_id'],
how = 'left')
# 8mins
order_pnames_matrix = order_product_names.groupby(['order_id'])['product_name'].aggregate('sum')
order_pnames_matrix.head(5)
%%time
tf = CountVectorizer(analyzer = 'word', min_df=10, token_pattern='(?u)\\b[a-zA-Z]\\w+\\b')
tf_matrix = tf.fit_transform(order_pnames_matrix.values)
tf_feature_names = tf.get_feature_names()
'crisp' in tf_feature_names
tf_matrix.shape
Explanation: Approach 3 requires handling unstructured text
End of explanation
with open(DATA_DIR + 'user_tf_matrix', 'rb') as f:
user_tf_matrix = pickle.load(f)
%%time
n_topics = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
n_top_words = 10
scores = []
for n in n_topics:
print("number of topics:%d"%n)
with open(DATA_DIR + 'lda_%d.model'%n, 'rb') as f:
lda = pickle.load(f)
scores.append(lda.score(user_tf_matrix))
print("log likelihood:%f"%scores[-1])
#print_top_words(lda, tf_product_names, n_top_words)
max(scores)
Explanation: Choosing the number of LDA topics
number of topics:10
log likelihood:-482209266.283346
number of topics:20
log likelihood:-739555562.008604
number of topics:30
log likelihood:-1005367108.142659
number of topics:40
log likelihood:-1293773643.990032
number of topics:50
log likelihood:-1578475501.104584
number of topics:60
log likelihood:-1891792320.600158
number of topics:70
log likelihood:-2185429399.135661
number of topics:80
log likelihood:-2499275522.415354
number of topics:90
log likelihood:-2814907367.346162
number of topics:100
log likelihood:-3124264684.650005
End of explanation
with open(constants.LDA_DIR + 'lda_22.model', 'rb') as f:
lda = pickle.load(f)
with open(constants.FEAT_DATA_DIR + 'user_tf_matrix_info.pkl', 'rb') as f:
tf_info = pickle.load(f)
with open(constants.FEAT_DATA_DIR + 'user_tf_matrix', 'rb') as f:
user_tf_matrix = pickle.load(f)
def print_top_words(model, feature_names, n_top_words):
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
print("\n".join([feature_names[i]
for i in topic.argsort()[:-n_top_words - 1:-1]]))
print()
def csv_top_words(model, feature_names, n_top_words):
topic_content = {}
for topic_idx, topic in enumerate(model.components_):
print("Topic #%d:" % topic_idx)
content = [feature_names[i] for i in topic.argsort()[:-n_top_words - 1:-1]]
# print("\n".join(content))
topic_content['topic_%s'%topic_idx] = content
res = pd.DataFrame(topic_content)
res.to_csv('lda_topic_content.csv')
print()
return res
lda.components_[0][:10:1]
print_top_words(lda, tf_info.pname, 20)
Explanation: <font color=red>Post-LDA topic contents</font>
End of explanation
NUM_TOPIC = 22
tf_feature_names = tf_info.pid.values
with open(constants.LDA_DIR + 'lda_%d.model'%NUM_TOPIC, 'rb') as f:# replace 10
lda = pickle.load(f)
# Each topic is a distribution over products; dividing by the topic's total weight
# gives each term's (product's) contribution to that topic.
# np.newaxis turns shape (22,) into (22, 1)
norm_comp = lda.components_ / lda.components_.sum(axis=1)[:, np.newaxis]
# Each product can also be viewed as a distribution over topics; normalize each
# term's distribution so that its probabilities sum to 1.
norm_comp = norm_comp / norm_comp.sum(axis=0)
topic_product = pd.DataFrame(norm_comp.transpose(), columns = ["topic_%d"%x for x in range(NUM_TOPIC)])
topic_product['product_id'] = [int(x) for x in tf_feature_names]
user_topic = lda.transform(user_tf_matrix)
user_topic = pd.DataFrame(user_topic, columns = ["topic_%d"%x for x in range(NUM_TOPIC)])
user_id = users_products_matrix.index.values
user_topic['user_id'] = user_id
with open(DATA_DIR + 'user_topic_%d.pkl'%NUM_TOPIC, 'wb') as f:
pickle.dump(user_topic, f, pickle.HIGHEST_PROTOCOL)
with open(DATA_DIR + 'topic_product_%d.pkl'%NUM_TOPIC, 'wb') as f:
pickle.dump(topic_product, f, pickle.HIGHEST_PROTOCOL)
with open(DATA_DIR + 'user_topic_%d.pkl'%NUM_TOPIC, 'rb') as f:
user_topic = pickle.load(f)
with open(DATA_DIR + 'topic_product_%d.pkl'%NUM_TOPIC, 'rb') as f:
topic_product = pickle.load(f)
user_topic.head(5)
topic_product.describe()
Explanation: Approach 2: each user's <font color=red>Embedded Topic Space</font> representation
A user maps to a topic distribution: the components are small in scale and form a distribution whose l1 norm sums to 1
A product also maps to a topic vector, but its components are large in scale, so we <font color=red>Normalize Components</font>
<font color=blue> option 1 </font>a K_u nearest-neighbor search provides candidate products
Euclidean distance
probability
<font color=blue> option 2 </font>use the embedding vector directly as features (a small sketch of this option follows below)
Time Weighted?
End of explanation
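For option 2, a minimal hypothetical sketch of attaching the embeddings as features; the candidates table of (user_id, product_id) pairs is assumed, not built above, and the code that follows implements option 1:
python
# Hypothetical: `candidates` is a DataFrame with user_id and product_id columns.
feat = candidates.merge(user_topic, on='user_id', how='left')
feat = feat.merge(topic_product, on='product_id', how='left')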
with open(DATA_DIR + 'user_product.pkl', 'rb') as f:
user_product = pickle.load(f)
with open(DATA_DIR + 'user_reorder_est.pkl', 'rb') as f:
avg_reorder_est = pickle.load(f)
train_orders = orders[orders.eval_set == 'train']
#%%time
u_nnp = []
cnt = 0
for u in train_orders.user_id:
cnt += 1
if cnt % 10000 == 0:
print("Nearest Product Search for %dth user"%cnt)
# extract user u's topic
u_topic = user_topic[user_topic.user_id == u][["topic_%d"%x for x in range(10)]]
# extract user u's product list
u_products = user_product[user_product.user_id == u]['product_id']
# extract avg_reorder_num
u_reorder = avg_reorder_est[avg_reorder_est.user_id == u]['avg_reorder_num']
# extract products' topic
p_topics = topic_product[topic_product.product_id.isin(set(u_products.values[0]))]
p_topics_pid = p_topics['product_id'].reset_index()['product_id']
p_topics_vec = p_topics[["topic_%d"%x for x in range(10)]]
# nbr search, expand search scope
n_neighbors = 1 * int(np.ceil(u_reorder.values[0]))
if n_neighbors > len(p_topics_vec):
n_neighbors = len(p_topics_vec)
if n_neighbors > 0:
nbrs = NearestNeighbors(n_neighbors=n_neighbors,
metric = 'l1',
algorithm = 'brute').fit(p_topics_vec.values)
#print(u)
distances, indices = nbrs.kneighbors(u_topic.values.reshape(1, -1))
#print(cnt)
u_nnp.append(list(p_topics_pid[indices[0]].values))
else:
u_nnp.append('None')
Explanation: Nearest Neighbors Search based on Approach 2
One NN search per user, finding the products closest to the user topic
Analyze the number of None predictions
avg_reorder_len, components not normalized, score with Euclidean distance: f1score: 0.078288109250376284
avg_reorder_len, components normalized, score with l1 distance: f1score: 0.11
2 * avg_reorder_len, components normalized, score with l1 distance: f1score: 0.15
3 * avg_reorder_len, components normalized, score with l1 distance: f1score: 0.178
4 * avg_reorder_len, components normalized, score with l1 distance: f1score: 0.19
5 * avg_reorder_len, components normalized, score with l1 distance: f1score: 0.20
avg_reorder_len, components normalized, score with l1 distance: f1score: 0.107
norm, 2 * avg_reorder_len, components normalized, score with l1 distance: f1score: 0.146
norm, 3 * avg_reorder_len, components normalized, score with l1 distance: f1score: 0.168
norm, 4 * avg_reorder_len, components normalized, score with l1 distance: f1score: 0.18
norm, 5 * avg_reorder_len, components normalized, score with l1 distance: f1score: 0.19
avg_reorder_len, components normalized, score with Euclidean distance: f1score: 0.11
avg_reorder_len, components normalized, score with symmetric KL distance: f1score: 0.108 (a sketch of this metric follows below)
End of explanation
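The symmetric KL entry in the last line above is not exercised by the code shown; a sketch of such a metric, which sklearn's NearestNeighbors accepts as a callable when algorithm='brute' (both vectors are assumed to be topic distributions, smoothed to avoid division by zero):
python
import numpy as np
def sym_kl(p, q, eps=1e-12):
    # symmetric Kullback-Leibler divergence between two topic distributions
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p))
# e.g. NearestNeighbors(n_neighbors=k, metric=sym_kl, algorithm='brute')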
train_pred = pd.DataFrame({'user_id':train_orders.user_id,
'reorder_pids':u_nnp,
'order_id':train_orders.order_id})
train_gold = order_products_train[order_products_train.reordered == 1].groupby(['order_id'])['product_id'].apply(list)
train_gold = train_gold.reset_index()
train_eval = pd.merge(train_gold, train_pred, on = ['order_id'], how = 'outer').fillna('None')
train_eval.columns = ['order_id', 'gold_reorder', 'pred_reorder', 'user_id']
# 21mins
train_eval['f1score'] = train_eval.apply(wrap_cal_f1, axis = 1)
train_eval['precision'] = train_eval.apply(wrap_cal_precision, axis = 1)
train_eval['recall'] = train_eval.apply(wrap_cal_recall, axis = 1)
train_eval.f1score.mean()
train_eval.describe()
train_eval[train_eval.user_id == 201]
prior_all = pd.merge(order_products_prior, orders, on = ['order_id'], how = 'left')
prior_all = pd.merge(prior_all,
product_ad[['product_id', 'product_name', 'aisle', 'department']],
on = ['product_id'],
how = 'left')
Explanation: Approach 2: result evaluation
End of explanation
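The wrap_cal_f1 / wrap_cal_precision / wrap_cal_recall helpers applied above are not defined in this excerpt; a plausible sketch of what they compute per row of train_eval, treating the 'None' placeholder from fillna('None') as an empty set (the project's actual implementation may differ):
python
def _as_set(x):
    # 'None' placeholders come from fillna('None') above; real values are lists of product ids
    return set() if isinstance(x, str) else set(x)

def wrap_cal_precision(row):
    gold, pred = _as_set(row['gold_reorder']), _as_set(row['pred_reorder'])
    return len(gold & pred) / len(pred) if pred else 0.0

def wrap_cal_recall(row):
    gold, pred = _as_set(row['gold_reorder']), _as_set(row['pred_reorder'])
    return len(gold & pred) / len(gold) if gold else 0.0

def wrap_cal_f1(row):
    p, r = wrap_cal_precision(row), wrap_cal_recall(row)
    return 2 * p * r / (p + r) if (p + r) > 0 else 0.0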
%%time
order_topic = lda_10.transform(order_tf_matrix)
with open(DATA_DIR + 'tf_matrix', 'rb') as f:
order_tf_matrix = pickle.load(f)
order_products_matrix = order_products_matrix.reset_index()
order_topic = pd.DataFrame(order_topic, columns = ["topic_%d"%x for x in range(10)])
Explanation: Approach 1: each order's Embedded Topic Space representation
End of explanation |
9,046 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Temporal whitening with AR model
Here we fit an AR model to the data and use it
to temporally whiten the signals.
Step1: Plot the different time series and PSDs | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import fit_iir_model_raw
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'
raw = mne.io.read_raw_fif(raw_fname)
proj = mne.read_proj(proj_fname)
raw.add_proj(proj)
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # mark bad channels
# Set up pick list: Gradiometers - bad channels
picks = mne.pick_types(raw.info, meg='grad', exclude='bads')
order = 5 # define model order
picks = picks[:1]
# Estimate AR models on raw data
b, a = fit_iir_model_raw(raw, order=order, picks=picks, tmin=60, tmax=180)
d, times = raw[0, 10000:20000] # look at one channel from now on
d = d.ravel() # make flat vector
innovation = signal.convolve(d, a, 'valid')
d_ = signal.lfilter(b, a, innovation) # regenerate the signal
d_ = np.r_[d_[0] * np.ones(order), d_] # dummy samples to keep signal length
Explanation: Temporal whitening with AR model
Here we fit an AR model to the data and use it
to temporally whiten the signals.
End of explanation
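An optional check of the whitening claim: if the AR model fits well, the innovation should be close to white noise, i.e. its normalized autocorrelation should be near zero at non-zero lags (a rough numerical check, in addition to the PSD comparison plotted below):
python
ac = np.correlate(innovation, innovation, 'full')
ac = ac[ac.size // 2:] / ac[ac.size // 2]   # normalized autocorrelation, lag 0 == 1
print(ac[:5])                               # later lags should be small for white noise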
plt.close('all')
plt.figure()
plt.plot(d[:100], label='signal')
plt.plot(d_[:100], label='regenerated signal')
plt.legend()
plt.figure()
plt.psd(d, Fs=raw.info['sfreq'], NFFT=2048)
plt.psd(innovation, Fs=raw.info['sfreq'], NFFT=2048)
plt.psd(d_, Fs=raw.info['sfreq'], NFFT=2048, linestyle='--')
plt.legend(('Signal', 'Innovation', 'Regenerated signal'))
plt.show()
Explanation: Plot the different time series and PSDs
End of explanation |
9,047 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:21
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
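As a purely illustrative sketch, a filled-in version of the author cell might look like the lines below; the name and email are placeholders, not the actual document authors, and the DOC object is the one created in the Document Setup cell.
# Illustrative placeholder only -- replace with the real author name and email
DOC.set_author("Jane Doe", "[email protected]")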
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
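For illustration only, the cell could be completed as below, reusing the example code name quoted in the description (MOSES2.2); the documented model's real code name should be substituted.
# Illustrative only -- 'MOSES2.2' is the example name from the description, not this model's code name
DOC.set_id('cmip6.land.key_properties.model_name')
DOC.set_value("MOSES2.2")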
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
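As the template's PROPERTY VALUE(S) comment indicates, a multi-valued (0.N) ENUM is filled by repeating DOC.set_value, once per selected choice. The particular fluxes below are a hypothetical selection from the valid choices listed above.
# Illustrative only -- select the fluxes the model actually exchanges with the atmosphere
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
DOC.set_value("water")
DOC.set_value("energy")
DOC.set_value("carbon")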
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
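A minimal sketch of the boolean form; whether True or False applies depends on the model being documented.
# Illustrative only -- answer depends on the model's coupling strategy
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
DOC.set_value(True)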
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
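A sketch with a hypothetical value: 1800 seconds (30 minutes) is a placeholder, not the documented model's actual land-surface time step.
# Illustrative only -- replace 1800 with the model's real time step in seconds
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
DOC.set_value(1800)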
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
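An illustrative entry assuming, hypothetically, a Fortran code base; repeat DOC.set_value once per additional language.
# Illustrative only -- list the languages actually used by the land surface code
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
DOC.set_value("Fortran 90")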
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
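For a single-valued (1.1) ENUM, one DOC.set_value call with one of the listed choices is enough; the choice below is hypothetical.
# Illustrative only -- pick the option matching the model's hydrology scheme
DOC.set_id('cmip6.land.soil.hydrology.method')
DOC.set_value("Explicit diffusion")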
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
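A hypothetical selection of runoff types from the valid choices above, one DOC.set_value call per type.
# Illustrative only -- select the runoff mechanisms the model actually represents
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
DOC.set_value("Gravity drainage")
DOC.set_value("Horton mechanism")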
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
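An illustrative sketch selecting both listed snow cover fractions; whether both apply is model-dependent.
# Illustrative only
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
DOC.set_value("ground snow fraction")
DOC.set_value("vegetation snow fraction")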
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
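For example, hypothetically choosing one of the listed bin structures:
# Illustrative only -- choose the bin structure matching the model's allocation scheme
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
DOC.set_value("leaves + stems + roots")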
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
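An illustrative entry, repeating DOC.set_value once per transported quantity; the selection below is hypothetical.
# Illustrative only
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
DOC.set_value("water")
DOC.set_value("heat")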
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
9,048 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercise 2
Step1: First we create a training set of size num_samples and num_features.
Step2: Next we run a performance test on the created data set. Therefore we train a random forest classifier multiple times and measure the training time. Each time we use a different number of jobs to train the classifier. We repeat the process on training sets of various sizes.
Step3: Finally we plot and evaluate our results.
Step4: The training time is inversely proportional to the number of CPU cores used.
# imports
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import time
import matplotlib.pyplot as plt
import seaborn as sns
Explanation: Exercise 2: Classification
A short test to examine the performance gain when using multiple cores with sklearn's random forest ensemble classifier.
Depending on the available system, the maximum number of jobs to test and the sample size can be adjusted by changing the respective parameters.
End of explanation
num_samples = 500 * 1000
num_features = 40
X, y = make_classification(n_samples=num_samples, n_features=num_features)
Explanation: First we create a training set of size num_samples and num_features.
End of explanation
# test different number of cores: here max 8
max_cores = 8
num_cpu_list = list(range(1,max_cores + 1))
max_sample_list = [int(l * num_samples) for l in [0.1, 0.2, 1, 0.001]]
training_times_all = []
# the default setting for classifier
clf = RandomForestClassifier()
for max_sample in max_sample_list:
training_times = []
for num_cpu in num_cpu_list:
# change number of cores
clf.set_params(n_jobs=num_cpu)
# start_time = time.time()
# train classifier on training data
tr = %timeit -o clf.fit(X[:max_sample+1], y[:max_sample+1])
# save the runtime to the list
training_times.append(tr.best)
# print logging message
print("Computing for {} samples and {} cores DONE.".format(max_sample,num_cpu))
training_times_all.append(training_times)
print("All computations DONE.")
Explanation: Next we run a performance test on the created data set. Therefore we train a random forest classifier multiple times and measure the training time. Each time we use a different number of jobs to train the classifier. We repeat the process on training sets of various sizes.
End of explanation
plt.plot(num_cpu_list, training_times_all[0], 'ro', label="{}k".format(max_sample_list[0]//1000))
plt.plot(num_cpu_list, training_times_all[1], "bs" , label="{}k".format(max_sample_list[1]//1000))
plt.plot(num_cpu_list, training_times_all[2], "g^" , label="{}k".format(max_sample_list[2]//1000))
plt.axis([0, len(num_cpu_list)+1, 0, max(training_times_all[2])+1])
plt.title("Training time vs #CPU Cores")
plt.xlabel("#CPU Cores")
plt.ylabel("training time [s]")
plt.legend()
plt.show()
Explanation: Finally we plot and evaluate our results.
End of explanation
plt.plot(num_cpu_list, training_times_all[3], 'ro', label="{}k".format(max_sample_list[3]/1000))
plt.axis([0, len(num_cpu_list)+1, 0, max(training_times_all[3])+1])
plt.title("Training time vs #CPU Cores on small dataset")
plt.xlabel("#CPU Cores")
plt.ylabel("training time [s]")
plt.legend()
plt.show()
Explanation: The training time is inversely proportional to the number of CPU cores used.
End of explanation |
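As a quick follow-up, the claim above can be quantified. The sketch below assumes the num_cpu_list and training_times_all lists defined earlier and computes, for the full-size training set, the speedup relative to a single core and the per-core parallel efficiency.
# Minimal sketch (assumes num_cpu_list and training_times_all from above).
baseline = training_times_all[2][0]  # single-core time on the full training set
for n_cores, t in zip(num_cpu_list, training_times_all[2]):
    speedup = baseline / t
    print("cores={:d} time={:.2f}s speedup={:.2f}x efficiency={:.0%}".format(
        n_cores, t, speedup, speedup / n_cores))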
9,049 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text classification from scratch
Authors
Step1: Load the data
Step2: The aclImdb folder contains a train and test subfolder
Step3: The aclImdb/train/pos and aclImdb/train/neg folders contain text files, each of
which represents one review (either positive or negative)
Step4: We are only interested in the pos and neg subfolders, so let's delete the rest
Step5: You can use the utility tf.keras.preprocessing.text_dataset_from_directory to
generate a labeled tf.data.Dataset object from a set of text files on disk filed
into class-specific folders.
Let's use it to generate the training, validation, and test datasets. The validation
and training datasets are generated from two subsets of the train directory, with 20%
of samples going to the validation dataset and 80% going to the training dataset.
Having a validation dataset in addition to the test dataset is useful for tuning
hyperparameters, such as the model architecture, for which the test dataset should not
be used.
Before putting the model out into the real world however, it should be retrained using all
available training data (without creating a validation dataset), so its performance is maximized.
When using the validation_split & subset arguments, make sure to either specify a
random seed, or to pass shuffle=False, so that the validation & training splits you
get have no overlap.
Step6: Let's preview a few samples
Step7: Prepare the data
In particular, we remove <br /> tags.
Step8: Two options to vectorize the data
There are 2 ways we can use our text vectorization layer
Step9: Build a model
We choose a simple 1D convnet starting with an Embedding layer.
Step10: Train the model
Step11: Evaluate the model on the test set
Step12: Make an end-to-end model
If you want to obtain a model capable of processing raw strings, you can simply
create a new model (using the weights we just trained) | Python Code:
import tensorflow as tf
import numpy as np
Explanation: Text classification from scratch
Authors: Mark Omernick, Francois Chollet<br>
Date created: 2019/11/06<br>
Last modified: 2020/05/17<br>
Description: Text sentiment classification starting from raw text files.
Introduction
This example shows how to do text classification starting from raw text (as
a set of text files on disk). We demonstrate the workflow on the IMDB sentiment
classification dataset (unprocessed version). We use the TextVectorization layer for
word splitting & indexing.
Setup
End of explanation
!curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
Explanation: Load the data: IMDB movie review sentiment classification
Let's download the data and inspect its structure.
End of explanation
!ls aclImdb
!ls aclImdb/test
!ls aclImdb/train
Explanation: The aclImdb folder contains a train and test subfolder:
End of explanation
!cat aclImdb/train/pos/6248_7.txt
Explanation: The aclImdb/train/pos and aclImdb/train/neg folders contain text files, each of
which represents one review (either positive or negative):
End of explanation
!rm -r aclImdb/train/unsup
Explanation: We are only interested in the pos and neg subfolders, so let's delete the rest:
End of explanation
batch_size = 32
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/train",
batch_size=batch_size,
validation_split=0.2,
subset="training",
seed=1337,
)
raw_val_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/train",
batch_size=batch_size,
validation_split=0.2,
subset="validation",
seed=1337,
)
raw_test_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/test", batch_size=batch_size
)
print(f"Number of batches in raw_train_ds: {raw_train_ds.cardinality()}")
print(f"Number of batches in raw_val_ds: {raw_val_ds.cardinality()}")
print(f"Number of batches in raw_test_ds: {raw_test_ds.cardinality()}")
Explanation: You can use the utility tf.keras.preprocessing.text_dataset_from_directory to
generate a labeled tf.data.Dataset object from a set of text files on disk filed
into class-specific folders.
Let's use it to generate the training, validation, and test datasets. The validation
and training datasets are generated from two subsets of the train directory, with 20%
of samples going to the validation dataset and 80% going to the training dataset.
Having a validation dataset in addition to the test dataset is useful for tuning
hyperparameters, such as the model architecture, for which the test dataset should not
be used.
Before putting the model out into the real world however, it should be retrained using all
available training data (without creating a validation dataset), so its performance is maximized.
When using the validation_split & subset arguments, make sure to either specify a
random seed, or to pass shuffle=False, so that the validation & training splits you
get have no overlap.
End of explanation
# It's important to take a look at your raw data to ensure your normalization
# and tokenization will work as expected. We can do that by taking a few
# examples from the training set and looking at them.
# This is one of the places where eager execution shines:
# we can just evaluate these tensors using .numpy()
# instead of needing to evaluate them in a Session/Graph context.
for text_batch, label_batch in raw_train_ds.take(1):
for i in range(5):
print(text_batch.numpy()[i])
print(label_batch.numpy()[i])
Explanation: Let's preview a few samples:
End of explanation
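To make the 0/1 labels shown above easier to interpret, you can also print the folder-to-label mapping exposed by text_dataset_from_directory (class names are listed in label order):
# Class names in label order: index 0 -> first entry, index 1 -> second entry.
print(raw_train_ds.class_names)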
from tensorflow.keras.layers import TextVectorization
import string
import re
# Having looked at our data above, we see that the raw text contains HTML break
# tags of the form '<br />'. These tags will not be removed by the default
# standardizer (which doesn't strip HTML). Because of this, we will need to
# create a custom standardization function.
def custom_standardization(input_data):
lowercase = tf.strings.lower(input_data)
stripped_html = tf.strings.regex_replace(lowercase, "<br />", " ")
return tf.strings.regex_replace(
stripped_html, f"[{re.escape(string.punctuation)}]", ""
)
# Model constants.
max_features = 20000
embedding_dim = 128
sequence_length = 500
# Now that we have our custom standardization, we can instantiate our text
# vectorization layer. We are using this layer to normalize, split, and map
# strings to integers, so we set our 'output_mode' to 'int'.
# Note that we're using the default split function,
# and the custom standardization defined above.
# We also set an explicit maximum sequence length, since the CNNs later in our
# model won't support ragged sequences.
vectorize_layer = TextVectorization(
standardize=custom_standardization,
max_tokens=max_features,
output_mode="int",
output_sequence_length=sequence_length,
)
# Now that the vocab layer has been created, call `adapt` on a text-only
# dataset to create the vocabulary. You don't have to batch, but for very large
# datasets this means you're not keeping spare copies of the dataset in memory.
# Let's make a text-only dataset (no labels):
text_ds = raw_train_ds.map(lambda x, y: x)
# Let's call `adapt`:
vectorize_layer.adapt(text_ds)
Explanation: Prepare the data
In particular, we remove <br /> tags.
End of explanation
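As a quick sanity check of the standardizer defined above (the review text here is made up), the output should come back lowercased, with the '<br />' tag and punctuation stripped:
# Hypothetical sample review run through custom_standardization.
sample = tf.constant(["I loved it!<br />Would watch again."])
print(custom_standardization(sample))
# Expected (roughly): tf.Tensor([b'i loved it would watch again'], shape=(1,), dtype=string)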
def vectorize_text(text, label):
text = tf.expand_dims(text, -1)
return vectorize_layer(text), label
# Vectorize the data.
train_ds = raw_train_ds.map(vectorize_text)
val_ds = raw_val_ds.map(vectorize_text)
test_ds = raw_test_ds.map(vectorize_text)
# Do async prefetching / buffering of the data for best performance on GPU.
train_ds = train_ds.cache().prefetch(buffer_size=10)
val_ds = val_ds.cache().prefetch(buffer_size=10)
test_ds = test_ds.cache().prefetch(buffer_size=10)
Explanation: Two options to vectorize the data
There are 2 ways we can use our text vectorization layer:
Option 1: Make it part of the model, so as to obtain a model that processes raw
strings, like this:
python
text_input = tf.keras.Input(shape=(1,), dtype=tf.string, name='text')
x = vectorize_layer(text_input)
x = layers.Embedding(max_features + 1, embedding_dim)(x)
...
Option 2: Apply it to the text dataset to obtain a dataset of word indices, then
feed it into a model that expects integer sequences as inputs.
An important difference between the two is that option 2 enables you to do
asynchronous CPU processing and buffering of your data when training on GPU.
So if you're training the model on GPU, you probably want to go with this option to get
the best performance. This is what we will do below.
If we were to export our model to production, we'd ship a model that accepts raw
strings as input, like in the code snippet for option 1 above. This can be done after
training. We do this in the last section.
End of explanation
from tensorflow.keras import layers
# An integer input for vocab indices.
inputs = tf.keras.Input(shape=(None,), dtype="int64")
# Next, we add a layer to map those vocab indices into a space of dimensionality
# 'embedding_dim'.
x = layers.Embedding(max_features, embedding_dim)(inputs)
x = layers.Dropout(0.5)(x)
# Conv1D + global max pooling
x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
x = layers.Conv1D(128, 7, padding="valid", activation="relu", strides=3)(x)
x = layers.GlobalMaxPooling1D()(x)
# We add a vanilla hidden layer:
x = layers.Dense(128, activation="relu")(x)
x = layers.Dropout(0.5)(x)
# We project onto a single unit output layer, and squash it with a sigmoid:
predictions = layers.Dense(1, activation="sigmoid", name="predictions")(x)
model = tf.keras.Model(inputs, predictions)
# Compile the model with binary crossentropy loss and an adam optimizer.
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
Explanation: Build a model
We choose a simple 1D convnet starting with an Embedding layer.
End of explanation
epochs = 3
# Fit the model using the train and test datasets.
model.fit(train_ds, validation_data=val_ds, epochs=epochs)
Explanation: Train the model
End of explanation
model.evaluate(test_ds)
Explanation: Evaluate the model on the test set
End of explanation
# A string input
inputs = tf.keras.Input(shape=(1,), dtype="string")
# Turn strings into vocab indices
indices = vectorize_layer(inputs)
# Turn vocab indices into predictions
outputs = model(indices)
# Our end to end model
end_to_end_model = tf.keras.Model(inputs, outputs)
end_to_end_model.compile(
loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]
)
# Test it with `raw_test_ds`, which yields raw strings
end_to_end_model.evaluate(raw_test_ds)
Explanation: Make an end-to-end model
If you want to obtain a model capable of processing raw strings, you can simply
create a new model (using the weights we just trained):
End of explanation |
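As a usage sketch (the review strings below are made up), the end-to-end model can now score raw text directly; with the directory layout used here, outputs near 1.0 correspond to the positive class:
# Score a couple of hypothetical raw reviews with the end-to-end model.
samples = tf.constant([
    ["This movie was absolutely wonderful, I loved every minute of it."],
    ["Dull, predictable and far too long."],
])
print(end_to_end_model.predict(samples))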
9,050 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Once we've trained a model, we might want to better understand what sequence motifs the first convolutional layer has discovered and how it's using them. Basset offers two methods to help users explore these filters.
You'll want to double check that you have the Tomtom motif comparison tool for the MEME suite installed. Tomtom provides rigorous methods to compare the filters to a database of motifs. You can download it from here
Step1: First, we'll run basset_motifs.py, which will extract a bunch of basic information about the first layer filters. The script takes an HDF file (such as any preprocessed with preprocess_features.py) and samples sequences from the test set. It sends those sequences through the neural network and examines its hidden unit values to describe what they're doing.
-m specifies the Tomtom motif database
Step2: Now there's plenty of information output in motifs_out. My favorite way to get started is to open the HTML file output by Tomtom's comparison of the motifs to a database. It displays all of the motifs and their database matches in a neat table.
Before we take a look though, let me describe where these position weight matrices came from. Inside the neural network, the filters are reprsented by real-valued matrices. Here's one
Step3: Although it's matrix of values, this doesn't quite match up with the conventional notion of a position weight matrix that we typically use to represent sequences motifs in genome biology. To make that, basset_motifs.py pulls out the underlying sequences that activate the filters in the test sequences and passes that to weblogo.
Step4: The Tomtom output will annotate the filters, but it won't tell you how the model is using them. So alongside it, let's print out a table of the filters with the greatest output standard deviation over this test set. Variance correlates strongly with how influential the filter is for making predictions.
The columns here are
Step5: As I discuss in the paper, unannotated low complexity filters tend to rise to the top here because low order sequence complexity influence accessibility.
The Tomtom output HTML is here
Step6: The other primary tool that we have to understand the filters is to remove the filter from the model and assess the impact. Rather than truly remove it and re-train, we can just nullify it within the model by setting all output from the filter to its mean. This way the model around it isn't drastically affected, but there's no information flowing through.
This analysis requires considerably more compute time, so I separated it into a different script. To give it a sufficient number of sequences to obtain a good estimate influence, I typically run it overnight. If your computer is using too much memory, decrease the batch size. I'm going to run here with 1,000 sequences, but I used 20,000 for the paper.
To get really useful output, the script needs a few additional pieces of information | Python Code:
model_file = '../data/models/pretrained_model.th'
seqs_file = '../data/encode_roadmap.h5'
Explanation: Once we've trained a model, we might want to better understand what sequence motifs the first convolutional layer has discovered and how it's using them. Basset offers two methods to help users explore these filters.
You'll want to double check that you have the Tomtom motif comparison tool for the MEME suite installed. Tomtom provides rigorous methods to compare the filters to a database of motifs. You can download it from here: http://meme-suite.org/doc/download.html
To run this tutorial, you'll need to either download the pre-trained model from https://www.dropbox.com/s/rguytuztemctkf8/pretrained_model.th.gz and preprocess the consortium data, or just substitute your own files here:
End of explanation
import subprocess
cmd = 'basset_motifs.py -s 1000 -t -o motifs_out %s %s' % (model_file, seqs_file)
subprocess.call(cmd, shell=True)
Explanation: First, we'll run basset_motifs.py, which will extract a bunch of basic information about the first layer filters. The script takes an HDF file (such as any preprocessed with preprocess_features.py) and samples sequences from the test set. It sends those sequences through the neural network and examines its hidden unit values to describe what they're doing.
-m specifies the Tomtom motif database: CIS-BP Homo sapiens database by default.
-s specifies the number of sequences to sample. 1000 is fast and sufficient.
-t asks the script to trim uninformative positions off the filter ends.
End of explanation
# actual file is motifs_out/filter9_heat.pdf
from IPython.display import Image
Image(filename='motifs_eg/filter9_heat.png')
Explanation: Now there's plenty of information output in motifs_out. My favorite way to get started is to open the HTML file output by Tomtom's comparison of the motifs to a database. It displays all of the motifs and their database matches in a neat table.
Before we take a look though, let me describe where these position weight matrices came from. Inside the neural network, the filters are represented by real-valued matrices. Here's one:
End of explanation
# actual file is motifs_out/filter9_logo.eps
Image(filename='motifs_eg/filter9_logo.png')
Explanation: Although it's a matrix of values, this doesn't quite match up with the conventional notion of a position weight matrix that we typically use to represent sequence motifs in genome biology. To make that, basset_motifs.py pulls out the underlying sequences that activate the filters in the test sequences and passes that to weblogo.
End of explanation
!sort -k6 -gr motifs_out/table.txt | head -n20
Explanation: The Tomtom output will annotate the filters, but it won't tell you how the model is using them. So alongside it, let's print out a table of the filters with the greatest output standard deviation over this test set. Variance correlates strongly with how influential the filter is for making predictions.
The columns here are:
1. Index
2. Optimal sequence
3. Tomtom annotation
4. Information content
5. Activation mean
6. Activation standard deviation
End of explanation
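If you prefer to explore the table in Python rather than with sort, here's a rough sketch; it assumes motifs_out/table.txt is whitespace-delimited with no header row and the six columns listed above (the column names themselves are made up):
# Load the filter table with pandas and rank filters by activation std.
import pandas as pd
cols = ['filter', 'optimal_seq', 'annotation', 'info_content', 'act_mean', 'act_std']
filters_df = pd.read_csv('motifs_out/table.txt', sep=r'\s+', header=None, names=cols)
print(filters_df.sort_values('act_std', ascending=False).head(20))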
!open motifs_out/tomtom/tomtom.html
Explanation: As I discuss in the paper, unannotated low-complexity filters tend to rise to the top here because low-order sequence complexity influences accessibility.
The Tomtom output HTML is here:
End of explanation
cmd = 'basset_motifs_infl.py'
cmd += ' -m motifs_out/table.txt'
cmd += ' -s 2000 -b 500'
cmd += ' -o infl_out'
cmd += ' --subset motifs_eg/primary_cells.txt'
cmd += ' -t motifs_eg/cell_activity.txt'
cmd += ' --width 7 --height 40 --font 0.5'
cmd += ' %s %s' % (model_file, seqs_file)
subprocess.call(cmd, shell=True)
Explanation: The other primary tool that we have to understand the filters is to remove the filter from the model and assess the impact. Rather than truly remove it and re-train, we can just nullify it within the model by setting all output from the filter to its mean. This way the model around it isn't drastically affected, but there's no information flowing through.
This analysis requires considerably more compute time, so I separated it into a different script. To give it a sufficient number of sequences to obtain a good estimate of influence, I typically run it overnight. If your computer is using too much memory, decrease the batch size. I'm going to run here with 1,000 sequences, but I used 20,000 for the paper.
To get really useful output, the script needs a few additional pieces of information:
* -m specifies the table created by basset_motifs.py above.
* -s samples 2000 sequences
* -b sets the batch_size to 500
* -o is the output directory
* --subset limits the cells displayed to those listed in the file.
* -t specifies a table where the second column is the target labels.
* --width, --height, --font adjust the heatmap
End of explanation |
9,051 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-2', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: EC-EARTH-CONSORTIUM
Source ID: SANDBOX-2
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:59
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmospheric chemistry grid match the atmosphere grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
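Purely as a hedged illustration (not part of the official template), a completed cell for a STRING property such as 13.1 might look as follows; the overview text is a made-up placeholder, not real model documentation:
# Illustrative sketch only -- replace the placeholder text with the real model overview.
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
DOC.set_value("Gas phase chemistry covering tropospheric and stratospheric Ox/HOx/NOy cycles.")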
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
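As a hedged sketch of a filled-in 0.N ENUM cell: values must come from the Valid Choices list above, and the assumption that a multi-valued property is filled by calling DOC.set_value once per selected choice should be checked against the pyesdoc conventions.
# Illustrative sketch only.
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
DOC.set_value("HOx")   # assumed: one call per selected choice for a 0.N property
DOC.set_value("NOy")
DOC.set_value("Ox")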
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
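As a hedged illustration of a BOOLEAN property, the cell above would simply be completed with one of the two valid choices:
# Illustrative sketch only.
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
DOC.set_value(True)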
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation |
9,052 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoding categorical features
In machine learning, features are often not numerical but categorical. For example, a person could have features such as ["male", "female"], ["from Europe", "from US", "from Asia"], or ["uses Firefox", "uses Chrome", "uses Safari", "uses Internet Explorer"]. Such features can be encoded efficiently as integers, e.g. ["male", "from US", "uses Internet Explorer"] could be expressed as [0, 1, 3] and ["female", "from Asia", "uses Chrome"] as [1, 2, 1].
This integer representation cannot be used directly with scikit-learn estimators, because with such continuous input an estimator would assume the categories are ordered, while in fact they are unordered.
One way to convert categorical features into something scikit-learn models can use is the one-of-K or one-hot encoding, implemented in OneHotEncoder. This class turns a feature with m possible values into m binary-valued features, one per category value.
The relevant interfaces in sklearn are
Step1: Encoding categorical labels
For int types
Step2: For str types
Step3: Binarized features
For a single label per sample
Step4: For multiple labels per sample | Python Code:
from sklearn import preprocessing
enc = preprocessing.OneHotEncoder()
enc.fit([[0, 0, 3], [1, 1, 0], [0, 2, 1], [1, 0, 2]])
enc.transform([[0, 1, 3]]).toarray()
Explanation: Encoding categorical features
In machine learning, features are often not numerical but categorical. For example, a person could have features such as ["male", "female"], ["from Europe", "from US", "from Asia"], or ["uses Firefox", "uses Chrome", "uses Safari", "uses Internet Explorer"]. Such features can be encoded efficiently as integers, e.g. ["male", "from US", "uses Internet Explorer"] could be expressed as [0, 1, 3] and ["female", "from Asia", "uses Chrome"] as [1, 2, 1].
This integer representation cannot be used directly with scikit-learn estimators, because with such continuous input an estimator would assume the categories are ordered, while in fact they are unordered.
One way to convert categorical features into something scikit-learn models can use is the one-of-K or one-hot encoding, implemented in OneHotEncoder. This class turns a feature with m possible values into m binary-valued features, one per category value.
The relevant interfaces in sklearn are:
Interface|Description
---|---
preprocessing.OneHotEncoder([n_values, …])|One-hot encoding; fit/transform expect integer inputs, and the output is a sparse matrix by default
preprocessing.LabelBinarizer([neg_label, …])|Converts categorical labels into a binarized (one-vs-all) matrix; input may be ints or strings
preprocessing.MultiLabelBinarizer([classes, …])|Converts collections of labels (multi-label data) into a binary indicator matrix; input is sets of ints or strings
preprocessing.LabelEncoder|Encodes categorical labels as integers; input may be ints or strings
One-hot encoding
End of explanation
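A small illustrative sketch (not part of the original notebook): because this older OneHotEncoder API expects integer inputs, a string-valued feature can first be mapped to integer codes with the LabelEncoder shown below and then one-hot encoded; newer scikit-learn versions can one-hot encode strings directly.
# Illustrative sketch: string labels -> integer codes -> one-hot matrix
from sklearn import preprocessing
cities = ["paris", "tokyo", "paris", "amsterdam"]
le = preprocessing.LabelEncoder()
codes = le.fit_transform(cities)                  # e.g. array([1, 2, 1, 0])
enc = preprocessing.OneHotEncoder()
onehot = enc.fit_transform(codes.reshape(-1, 1))  # one integer column in, three indicator columns out
print(onehot.toarray())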
le = preprocessing.LabelEncoder()
le.fit([1, 2, 2, 6])
le.classes_
le.transform([1, 1, 2, 6])  # encode
le.inverse_transform([0, 0, 1, 2])  # decode
Explanation: Encoding categorical labels
For int types
End of explanation
le = preprocessing.LabelEncoder()
le.fit(["paris", "paris", "tokyo", "amsterdam"])
list(le.classes_)
le.transform(["tokyo", "tokyo", "paris"])
list(le.inverse_transform([2, 2, 1]))
Explanation: For str types
End of explanation
lb = preprocessing.LabelBinarizer()
lb.fit([1, 2, 6, 4, 2])
lb.classes_
lb.transform([1, 6])
Explanation: Binarized features
For a single label per sample
End of explanation
lb = preprocessing.MultiLabelBinarizer()
lb.fit_transform([(1, 2), (3,)])
lb.classes_
Explanation: For multiple labels per sample
End of explanation |
9,053 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bean Stalk Series
Step1: Now lets import the tetravolume.py module, which in turn has dependencies, to get these volumes directly, based on edge lengths. I'll use the edges given in Fig. 986.411 of <i>Synergetics</i>, spoking out from the point C at the center of any RT diamond, and/or values computed by David Koski.
First, lets get a color coded version of the E module...
<br />
<br />
<div style="text-align
Step2: Lets start with a Pentagonal Dodecahedron and build it from Es + e3s.
Step3: RT3, on the other hand, has a volume we may express as
Step4: Recall RT3 is the Rhombic Triacontahedron we get by intersecting the two Platonic duals
Step5: As you can see, the relationship holds, though floating point numbers add some noise.
In addition to edge lengths, we have a succinct way to express the angles of this LCD triangle in terms of ø, thanks to David Koski. | Python Code:
import gmpy2
from gmpy2 import sqrt as rt2
from gmpy2 import mpfr
gmpy2.get_context().precision=200
root2 = rt2(mpfr(2))
root3 = rt2(mpfr(3))
root5 = rt2(mpfr(5))
ø = (root5 + 1)/2
ø_down = ø ** -1
ø_up = ø
E_vol = (15 * root2 * ø_down ** 3)/120 # a little more than 1/24, volume of T module
print(E_vol)
Explanation: Bean Stalk Series: Dissecting the E & E3
by D. Koski & K. Urner, May 2018 (version 1.2), last modified Feb 13, 2019
<br />
<br />
<div style="text-align: center">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/37897977156" title="E mod (right tetrahedron) with submodules: Fum, Fo, Fi, Fe going left to right."><img src="https://farm5.staticflickr.com/4444/37897977156_630bb41944_z.jpg" width="640" height="573" alt="E mod (right tetrahedron) with submodules: Fum, Fo, Fi, Fe going left to right."></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">
<b>Figure 1: E3 module dissected into Fum, Fo, Fi and Fe</b>
</div>
</div>
What we see here is another vZome construction by David Koski, showing an E3 module dissected into four sub-modules, according to how the great circles of the 120 LCD Triangles cross the RT as chords. Compare with Figure 986.561 in Synergetics.
<div style="text-align: center">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/27606174059/" title="Excerpt from Fig. 986.561, Synergetics 2"><img src="https://farm5.staticflickr.com/4589/27606174059_66f9f02fe8.jpg" width="474" height="390" alt="Excerpt from Fig. 986.561, Synergetics 2"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
</div>
What are their volumes?
Let's start with E itself.
End of explanation
# Edges needed for Fum and Emod
e0 = Black_Yellow = root3 * ø_down
e1 = Black_Blue = mpfr(1) # radius of RT = 1 (same as unit-radius sphere)
e2 = Black_Orange = 1/(rt2(ø**2+1)/2)
e3 = Yellow_Blue = (3 - root5)/2
e4 = Blue_Orange = (ø**-1)*(1/rt2(ø**2+1))
e5 = Orange_Yellow = rt2(Yellow_Blue**2 - Blue_Orange**2)
e6 = Black_Red = rt2((5 - root5)/2)
e7 = Blue_Red = 1/ø
e8 = Red_Yellow = rt2(5 - 2 * root5)
#print(e3 ** 2 + e7 ** 2)
#print(e8 ** 2)
#assert e3 ** 2 + e7 ** 2 == e8 ** 2 # check
#assert e4 ** 2 + e5 ** 2 == e3 ** 2 # check
# not needed for this computation
e9 = Black_Green = 20/(5 * root2 * ø**2) # Sfactor
e10 = Purple_Green = ø ** -4
for e in range(11):
val = "e" + str(e)
length = eval(val)
print("Edge {:3} = {:40.37}".format(val, length))
import tetravolume as tv # has to be in your path, stored on Github with this JN
# D = 1 in this module, so the final volume needs to be divided by 8 to match R=1 (D=2)
# see Fig. 986.411A in Synergetics
Fum_vol = tv.Tetrahedron(e0,e1,e2,e3,e4,e5).ivm_volume()/8
E_vol = tv.Tetrahedron(e1,e0,e6,e3,e8,e7).ivm_volume()/8
print("Fum volume (in tetravolumes): {:40.38}".format( Fum_vol ))
print("E volume (in tetravolumes) : {:40.38}".format( E_vol ))
Fe = (ø**-7) * (rt2(2)/8)
Fi = (ø**-6) * (rt2(2)/8)
Fo = ((5-rt2(5))/5) * (ø**-4) * (rt2(2)/8)
Fum = (rt2(5)/5) * (ø**-4)*(rt2(2)/8)
Fe_Fi = (ø**-5) * (rt2(2)/8)
Fo_Fum = (ø**-4) * (rt2(2)/8)
print("Fe: {:40.38}".format(Fe))
print("Fi: {:40.38}".format(Fi))
print("Fo: {:40.38}".format(Fo))
print("Fum: {:40.38}".format(Fum))
print("E_vol: {:40.38}".format((Fe_Fi) + (Fo_Fum)))
print("E_vol: {:40.38}".format((ø**-3)*(rt2(2)/8)))
Explanation: Now lets import the tetravolume.py module, which in turn has dependencies, to get these volumes directly, based on edge lengths. I'll use the edges given in Fig. 986.411 of <i>Synergetics</i>, spoking out from the point C at the center of any RT diamond, and/or values computed by David Koski.
First, let's get a color-coded version of the E module...
<br />
<br />
<div style="text-align: center">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/23979124977/in/dateposted-public/" title="LCD Triangles on E mod RT surface"><img src="https://farm5.staticflickr.com/4573/23979124977_506932443c_z.jpg" width="640" height="399" alt="LCD Triangles on E mod RT surface"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div align="center">
<b>Figure 2: Dissected E-mod with color-coded vertexes (Koski with vZome)</b>
</div>
</div>
<br />
The black hub is at the center of the RT, as shown here...
<br />
<div style="text-align: center">
<a data-flickr-embed="true" href="https://www.flickr.com/photos/kirbyurner/24971714468/in/dateposted-public/" title="E module with origin"><img src="https://farm5.staticflickr.com/4516/24971714468_46e14ce4b5_z.jpg" width="640" height="399" alt="E module with origin"></a><script async src="//embedr.flickr.com/assets/client-code.js" charset="utf-8"></script>
<div style="text-align: center">
<b>Figure 3: RT center is the black hub (Koski with vZome)</b>
</div>
</div>
<br />
The edges of the Fum module, tetrahedron Black-Orange-Yellow-Blue will be... (note R=1, D=2):
End of explanation
PD = 3 * root2 * (ø ** 2 + 1)
print(PD)
E = e = E_vol # shorthand (E3 = E * ø_up ** 3, e3 = E * ø_down ** 3, E = e)
e3 = e * ø_down ** 3
PD = 348 * E + 84 * e3
print(PD)
Explanation: Let's start with a Pentagonal Dodecahedron and build it from Es + e3s.
End of explanation
RT3 = 480 * E + 120 * e3 # e3 is e * ø_down ** 3 (e = E)
print(RT3)
Explanation: RT3, on the other hand, has a volume we may express as:
End of explanation
E3 = E_vol * ø_up ** 3
Fum3 = Fum_vol * ø_up ** 3
print(E3)
print(Fum3)
print(RT3 - PD)
print(120 * Fum3)
Explanation: Recall RT3 is the Rhombic Triacontahedron we get by intersecting the two Platonic duals: Icosahedron (Icosa) and Pentagonal Dodecahedron (PD), with the former having edges = 2R and volume ~18.51 (5 * rt2(2) * ø ** 2).
It turns out that if we shave a Fum3 off an E3, and multiply by 120, we get the PD's volume.
Put another way: RT3 - PD leaves a volume of 120 Fum3 volumes. In other words 120 * (E3 - Fum3) = PD.
End of explanation
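A quick numerical check of the claim above, reusing names already defined in this notebook (E3, Fum3 and PD); any residue is floating point noise at the chosen precision.
# Sanity check: 120 * (E3 - Fum3) should reproduce the PD volume
check = 120 * (E3 - Fum3)
print("120 * (E3 - Fum3):", check)
print("PD              :", PD)
print("difference      :", check - PD)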
from math import atan, sqrt as rt2, degrees
Ø = (1 + rt2(5))/2 # back to floating point
print(degrees(atan(Ø**-2)/2)) # 10.812316º
print(degrees(atan(Ø**-3))) # 13.282525º
print(degrees(atan(Ø**-2))) # 20.905157º
print(degrees(atan(Ø**-1))) # 31.717474º
print(degrees(atan(2*Ø**-2))) # 37.377368º
print(atan(Ø ** -1) + atan(Ø ** -3))
print(atan(1)) # arctan 1 = 45º
print(2 * atan(Ø**-1))
print(atan(2)) # 63.434948º
print(degrees(atan(2))) # 63.434948º
print( atan(Ø**-1) + 3 * atan(Ø**-3) )
print(atan(3)) # 71.565051º
print(degrees(atan(3))) # 71.565051º
Explanation: As you can see, the relationship holds, though floating point numbers add some noise.
In addition to edge lengths, we have a succinct way to express the angles of this LCD triangle in terms of ø, thanks to David Koski.
End of explanation |
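For reference, the identities that the cell above verifies numerically can be written compactly (a summary of the printed checks, not a new result):
$\arctan(\phi^{-1}) + \arctan(\phi^{-3}) = \arctan(1) = 45^\circ$
$2\,\arctan(\phi^{-1}) = \arctan(2)$
$\arctan(\phi^{-1}) + 3\,\arctan(\phi^{-3}) = \arctan(3)$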
9,054 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: <figure>
<IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
IHE Python course, 2017
CHIRPS data for precipitation (worldwide between 50S and 50N latitude)
Never again find yourself without appropriate precipitation data
This is what we've learned from the presentation by Tim Hessels on March 14.
To put this in practice, we'll download precipitation data for a groundwater model in Morocco, in the Tafilat area near Erfoud (find it in Google Maps).
Step2: CHIRPS (Climate Hazards group Infrared Precipitation with Stations)
Download the data files for the desired periods for the whole of Africa from CHIRPS. You can do this with FileZilla, a free app for this purpose.
For access to the CHIRPS data see
http://chg.ucsb.edu/data/chirps/
Step3: gdal (working with tiff files among others, GIS)
import gdal and check if the file is present by opening it.
Step4: Get some basic information from the tiff file
OK, now with g the successfully opened CHIRPS file, get some basic information from that file.
Step5: This projection says that it's WGS1984 (the same as Google Earth and Google Maps). Therefore it is in longitude (x) and latitude (y) coordinates. This allows us to immediately compute the WGS coordinates (lat/lon) from it, for instance for each pixel/cell center. It's also straightforward to compute the bounding box of this array and plot it in QGIS, for instance
Step6: Generate a shapefile with a polyline that represents the model boundary
The contour coordinates of the Erfoud/Tafilalet groundwater model happen to be in the file ErfoudModelContour.kml. Kml files come from Google Earth and are in WGS84 coordinates. It was obtained by digitizing the line directly in Google Earth.
We extract the coordinates from that KML file and put them in a list of lists, the form needed to inject the coordinates into the shapefile.
Extraction can be done in several ways, for instance with one of the XML parsers that are available on the internet. However, if you look at this file, it's clear that we may do this in a simple way. Read the file line by line until we find the word "coordinates". Then read the next line, which contains all the coordinates. Then clean that line from tabs, put a comma between each triple of coordinate values, and turn it into a list of lists with each list holding the x, y and z values of one vertex of the model boundary
Step7: Generate the shapefile holding 3 polygons: a) the bounding box around the data in the tiff file, b) the bounding box around the model contour, and c) the model contour
Step8: Show shapefile in QGIS
Fire up QGIS and load the shape file. Set its CRS to WGS84 (same coordinates as GoogleMaps, most general LatLon)
Here are the pictures taken of the QGIS screen after the shapefile was loaded and the label under properties was set to transparent with a solid contour line.
To get the Google Maps image, look for it under Web in the main menu.
The first image is zoomed out, so that the location of the model can be seen in the south east of this image. It's in Morocco.
<figure>
<IMG SRC="./EfoudModelContour2.png" WIDTH=750 ALIGN="center">
</figure>
The more detailed image shows the contour of the model and its bounding box. It proves that it works.
<figure>
<IMG SRC="./EfoudModelContour1.png" WIDTH=750 ALIGN="center">
</figure>
The next step is to select the appropriate precipitation data from the CHIRPS file.
Get the precipitation data from the CHIRPS tiff file
The actual data are stored in rasterbands. We saw from the size above, that this file has only one rasterband. Rasterband information is obtained one band at a time. So here we pass band number 1.
Step10: Select a subarea equal to the bbox of the model contour.
Step11: Read the data again, but now only the part that covers the model in Morocco
Step12: Just for curiosity, show the size of the area covered and the size resolution of the precipitation data. | Python Code:
import numpy as np
from pprint import pprint
def prar(A, ncol=8, maxsize=1000):
prints 2D arrays in a Matlab-like way (more readable)
if A.size>1000: # don't try to print a million values, or your pc will hang.
print(A)
return
n = A.shape[1]
# print columns in formatted chunks that fit on one line
for i, Asub in enumerate(np.split(A, range(ncol, n, ncol), axis=1)):
if Asub.size == 0: Asub=A
print("columns[{}:{}]".format(i * ncol, i * ncol +Asub.shape[1]))
for L in Asub:
print((" {:10.5g}" * len(L)).format(*L))
print()
Explanation: <figure>
<IMG SRC="../../logo/logo.png" WIDTH=250 ALIGN="right">
</figure>
IHE Python course, 2017
CHIRPS data for precipitation (worldwide between 50S and 50N latitude)
Never again find yourself without appropriate precipitation data
This is what we've learned from the presentation by Tim Hessels on March 14.
To put this in practice, we'll download precipitation data for a groundwater model in Morocco, in the Tafilat area near Erfoud (find it in Google Maps).
End of explanation
import glob
chirps_files = glob.glob('../**/*/*.tif')
pprint(chirps_files)
fname = chirps_files[0]
Explanation: CHIRPS (Climate Hazards group Infrared Precipitation with Stations)
Download the data files for the desired periods for the whole of Africa from CHIRPS. You can do this with FileZilla, a free app for this purpose.
For access to the CHIRPS data see
http://chg.ucsb.edu/data/chirps/
Next to tiff files one can find png images on the site that can be directly viewed in your browser or imported into any application, without any processing. But of course a picture does not have the original data.
glob (unix-like file handling for python)
Assuming that you have downloaded some files, use glob to get a list of them on your computer.
End of explanation
import gdal
# is the file present? gdal.Open returns None (instead of raising) when it cannot open the file
g = gdal.Open(fname)
if g is None:
    raise FileNotFoundError("Can't open file <{}>".format(fname))
Explanation: gdal (working with tiff files among others, GIS)
import gdal and check if the file is present by opening it.
End of explanation
print("\nBasic information on file <{}>\n".format(fname))
print("Driver: ", g.GetDriver().ShortName, '/', g.GetDriver().LongName)
print("Size : ", g.RasterXSize, 'x', g.RasterYSize, 'x', g.RasterCount)
print("Projection :\n", g.GetProjection())
print()
print("\nGeotransform information:\n")
gt = g.GetGeoTransform()
print("Geotransform :", gt)
# assign the individual fields to more recognizable variables
xUL, dx, xRot, yUL, yRot, dy = gt
# get the size of the data and the number of bands in the tiff file (is 1)
Nx, Ny, Nband = g.RasterXSize, g.RasterYSize, g.RasterCount
# show what we've got:
print('Nx = {}\nNy = {}\nxUL = {}\nyUL = {}\ndx = {}\ndy = {} <--- Negative !'.format(Nx, Ny, xUL, yUL, dx, dy))
Explanation: Get some basic information from the tiff file
OK, now with g the successfully opened CHIRPS file, get some basic information from that file.
End of explanation
# Bounding box around the tiff data set
tbb = [xUL, yUL + Ny * dy, xUL + Nx * dx, yUL]
print("Boudning box of data in tiff file :", tbb)
# Generate coordinates for tiff pixel centers
xm = 0.5 * dx + np.linspace(xUL, xUL + Nx * dx, Nx)
ym = 0.5 * dy + np.linspace(yUL, yUL + Ny * dy, Ny)
Explanation: This projection says that it's WGS1984 (the same as Google Earth and Google Maps). Therefore it is in longitude (x) and latitude (y) coordinates. This allows us to immediately compute the WGS coordinates (lat/lon) from it, for instance for each pixel/cell center. It's also straightforward to compute the bounding box of this array and plot it in QGIS, for instance:
End of explanation
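As a hedged side sketch (not part of the original course material): with the geotransform unpacked above, the lon/lat of any pixel centre follows directly; the rotation terms are assumed to be zero, as is usual for this kind of file.
# Sketch: lon/lat of the centre of pixel (row, col), assuming xRot = yRot = 0
def pixel_center(row, col):
    lon = xUL + (col + 0.5) * dx
    lat = yUL + (row + 0.5) * dy   # dy is negative, so latitude decreases with row
    return lon, lat
print(pixel_center(0, 0))   # centre of the upper-left pixel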
with open('ErfoudModelContour.kml', 'r') as f:
for s in f: # read lines from this file
if s.find('coord') > 0: # word "coord" bound?
# Then the next line has all coordinates. Read it and clean up.
pnts_as_str = f.readline().replace(' ',',').replace('\t','').split(',')
# Use a comprehension to put these coordinates in a list, where list[i] has
# a sublist of the three x, y and z coordinates.
points = [ [float(p) for p in p3]
for p3 in [pnts_as_str[i:i+3]
for i in range(0, len(pnts_as_str), 3)] ]
break;
# The points
pnts = np.array(points)
# The bounding box
mbb = [np.min(pnts[:,0]), np.min(pnts[:,1]), np.max(pnts[:,0]), np.max(pnts[:,1])]
#pprint(points)
#print(mbb)
Explanation: Generate a shapefile with a polyline that represents the model boundary
The contour coordinates of the Erfoud/Tafilalet groundwater model happen to be in the file ErfoudModelContour.kml. Kml files come from Google Earth and are in WGS84 coordinates. It was obtained by digitizing the line directly in Google Earth.
We extract the coordinates from that KML file and put them in a list of lists, the form needed to inject the coordinates into the shapefile.
Extraction can be done in several ways, for instance with one of the XML parsers that are available on the internet. However, if you look at this file, it's clear that we may do this in a simple way. Read the file line by line until we find the word "coordinates". Then read the next line, which contains all the coordinates. Then clean that line from tabs, put a comma between each triple of coordinate values, and turn it into a list of lists with each list holding the x, y and z values of one vertex of the model boundary:
End of explanation
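As noted above, the coordinates could also be pulled out with a real XML parser instead of scanning lines. A hedged alternative sketch with the standard library follows; the KML namespace URI is the usual one for KML 2.2 and is an assumption about this particular file.
# Alternative sketch: parse the KML with xml.etree instead of line scanning
import xml.etree.ElementTree as ET
ns = {'kml': 'http://www.opengis.net/kml/2.2'}   # assumed namespace
tree = ET.parse('ErfoudModelContour.kml')
coord_text = tree.find('.//kml:coordinates', ns).text
points_alt = [[float(v) for v in triple.split(',')]
              for triple in coord_text.split()]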
import shapefile as shp
tb = lambda indices: [tbb[i] for i in indices] # convenience for selecting from tiff bounding box
mb = lambda indices: [mbb[i] for i in indices] # same for selecting from model bounding box
# open a shapefile writer object
w = shp.Writer(shapeType=shp.POLYGON)
# add the three polylines to w.shapes
# each shape has parts, each of which can contain a polyline. We have one polyline, i.e. one part
# in each shape. Therefore parts is a list of one item, which is a list of points of the polyline.
w.poly(parts=[points]) # only one part, therefore, wrap points in an extra pair of brackets.
w.poly(parts=[[ tb([0, 1]), tb([2, 1]), tb([2, 3]), tb([0, 3]), tb([0, 1])]]) # bbox of tiff file
w.poly(parts=[[ mb([0, 1]), mb([2, 1]), mb([2, 3]), mb([0, 3]), mb([0, 1])]]) # bbox of model
w.field("Id","C", 20) # Add one field
w.field("Id2", "N") # Add another field, just to see if it works and how
# Add three records to w.records (one for each shape)
w.record("model contour", 1) # each record has two values, a string and a number, see fields
w.record("model bbox", 2)
w.record("Tiff bbox", 3)
# save this to a new shapefile
w.save("ErfoudModelContour")
# Change False to True to see the coordinates and the records
if False:
print()
for i, sh in enumerate(w.shapes()):
pprint(sh.points)
print()
#w.shapes()[0].points # check if w knows about these points
for r in w.records:
print(r)
# To verify what's been saved read the saved file and show what's in it:
if False:
s = shp.Reader("ErfoudModelContour")
for sh in s.shapeRecords():
pprint(sh.shape.points)
print(sh.record)
Explanation: Generate the shapefile holding 3 polygons: a) the bounding box around the data in the tiff file, b) the bounding box around the model contour, and c) the model contour
End of explanation
A = g.GetRasterBand(1).ReadAsArray()
A[A < -9000] = 0. # replace no-data values by 0
print()
print("min precipitation in mm ", np.min(A))
print("max precipitation in mm ", np.max(A))
Explanation: Show shapefile in QGIS
Fire up QGIS and load the shape file. Set its CRS to WGS84 (same coordinates as GoogleMaps, most general LatLon)
Here are the pictures taken of the QGIS screen after the shapefile was loaded and the label under properties was set to transparent with a solid contour line.
To get the Google Maps image, look for it under Web in the main menu.
The first image is zoomed out, so that the location of the model can be seen in the south east of this image. It's in Morocco.
<figure>
<IMG SRC="./EfoudModelContour2.png" WIDTH=750 ALIGN="center">
</figure>
The more detailed image shows the contour of the model and its bounding box. It proves that it works.
<figure>
<IMG SRC="./EfoudModelContour1.png" WIDTH=750 ALIGN="center">
</figure>
The next step is to select the appropriate precipitation data from the CHIRPS file.
Get the precipitation data from the CHIRPS tiff file
The actual data are stored in rasterbands. We saw from the size above, that this file has only one rasterband. Rasterband information is obtained one band at a time. So here we pass band number 1.
End of explanation
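A hedged aside: the band object can also report its no-data marker and value range before the array is read, which is where the large negative values replaced in the cell above come from (exact metadata depends on the file).
# Sketch: inspect band metadata before reading the array
band = g.GetRasterBand(1)
print("NoData value:", band.GetNoDataValue())
print("Min/Max     :", band.ComputeRasterMinMax(True))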
# define a function to get the indices of the center points between the bounding box extents of the model
def between(x, a, b):
returns indices of points between a and b
I = np.argwhere(np.logical_and(min(a, b) < x, x < max(a, b)))
return [i[0] for i in I]
ix = between(xm, mbb[0], mbb[2])
iy = between(ym, mbb[1], mbb[3])
print(ix)
print(iy)
Explanation: Select a subarea equal to the bbox of the model contour.
End of explanation
A = g.GetRasterBand(1).ReadAsArray(xoff=int(ix[0]), yoff=int(iy[0]), win_xsize=len(ix), win_ysize=len(iy))
print("Preciptation on the Erfoud model area in Marocco from file\n{}:\n".format(fname))
prar(A)
Explanation: Read the data again, but now only the part that covers the model in Morocco:
End of explanation
# The extent of this area can be obtained from the latitude and longitude together with the radius of the earth.
R = 6371 # km
EWN = R * np.cos(np.pi/180 * mbb[1]) * np.pi/180. *(mbb[2] - mbb[0])
EWS = R * np.cos(np.pi/180 * mbb[3]) * np.pi/180. *(mbb[2] - mbb[0])
NS = R * np.pi/180 * (mbb[3] - mbb[1])
print("The size of the bounding box in km:")
print("EW along the north boundary : ",EWN)
print("EW along the south boundary : ",EWS)
print("NS : ",NS)
print("Size of each tile (the resolution) = {:.3f} x {:.3f} km: ".format(EWN/A.shape[1], NS/A.shape[0]))
Explanation: Just for curiosity, show the size of the area covered and the size resolution of the precipitation data.
End of explanation |
9,055 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Step1: Survival analysis
If we have an unbiased sample of complete lifetimes, we can compute the survival function from the CDF and the hazard function from the survival function.
Here's the distribution of pregnancy length in the NSFG dataset.
Step3: The survival function is just the complementary CDF.
Step4: Here's the CDF and SF.
Step5: And here's the hazard function.
Step6: Age at first marriage
We'll use the NSFG respondent file to estimate the hazard function and survival function for age at first marriage.
Step7: We have to clean up a few variables.
Step8: And the extract the age at first marriage for people who are married, and the age at time of interview for people who are not.
Step10: The following function uses Kaplan-Meier to estimate the hazard function.
Step11: Here is the hazard function and corresponding survival function.
Step14: Quantifying uncertainty
To see how much the results depend on random sampling, we'll use a resampling process again.
Step15: The following plot shows the survival function based on the raw data and a 90% CI based on resampling.
Step16: The SF based on the raw data falls outside the 90% CI because the CI is based on weighted resampling, and the raw data is not. You can confirm that by replacing ResampleRowsWeighted with ResampleRows in ResampleSurvival.
More data
To generate survival curves for each birth cohort, we need more data, which we can get by combining data from several NSFG cycles.
Step20: The following is the code from survival.py that generates SFs broken down by decade of birth.
Step21: Here are the results for the combined data.
Step23: We can generate predictions by assuming that the hazard function of each generation will be the same as for the previous generation.
Step24: And here's what that looks like.
Step25: Remaining lifetime
Distributions with different shapes yield different behavior for remaining lifetime as a function of age.
Step26: Here's the expected remaining duration of a pregnancy as a function of the number of weeks elapsed. After week 36, the process becomes "memoryless".
Step27: And here's the median remaining time until first marriage as a function of age.
Step33: Exercises
Exercise | Python Code:
from __future__ import print_function, division
%matplotlib inline
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
import numpy as np
import pandas as pd
import random
import thinkstats2
import thinkplot
Explanation: Examples and Exercises from Think Stats, 2nd Edition
http://thinkstats2.com
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
import nsfg
preg = nsfg.ReadFemPreg()
complete = preg.query('outcome in [1, 3, 4]').prglngth
cdf = thinkstats2.Cdf(complete, label='cdf')
Explanation: Survival analysis
If we have an unbiased sample of complete lifetimes, we can compute the survival function from the CDF and the hazard function from the survival function.
Here's the distribution of pregnancy length in the NSFG dataset.
End of explanation
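In the discrete-time convention used here, the two relationships mentioned above can be summarised as follows (a restatement for reference, not a new result):
$S(t) = 1 - \mathrm{CDF}(t)$, the probability that a duration exceeds $t$;
$\lambda(t) = \dfrac{S(t_{i-1}) - S(t_i)}{S(t_{i-1})}$ over consecutive observed times, i.e. the fraction of cases that survive up to a time and then end there.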
import survival
def MakeSurvivalFromCdf(cdf, label=''):
Makes a survival function based on a CDF.
cdf: Cdf
returns: SurvivalFunction
ts = cdf.xs
ss = 1 - cdf.ps
return survival.SurvivalFunction(ts, ss, label)
sf = MakeSurvivalFromCdf(cdf, label='survival')
print(cdf[13])
print(sf[13])
Explanation: The survival function is just the complementary CDF.
End of explanation
thinkplot.Plot(sf)
thinkplot.Cdf(cdf, alpha=0.2)
thinkplot.Config(loc='center left')
Explanation: Here's the CDF and SF.
End of explanation
hf = sf.MakeHazardFunction(label='hazard')
print(hf[39])
thinkplot.Plot(hf)
thinkplot.Config(ylim=[0, 0.75], loc='upper left')
Explanation: And here's the hazard function.
End of explanation
resp6 = nsfg.ReadFemResp()
Explanation: Age at first marriage
We'll use the NSFG respondent file to estimate the hazard function and survival function for age at first marriage.
End of explanation
resp6.cmmarrhx.replace([9997, 9998, 9999], np.nan, inplace=True)
resp6['agemarry'] = (resp6.cmmarrhx - resp6.cmbirth) / 12.0
resp6['age'] = (resp6.cmintvw - resp6.cmbirth) / 12.0
Explanation: We have to clean up a few variables.
End of explanation
complete = resp6[resp6.evrmarry==1].agemarry.dropna()
ongoing = resp6[resp6.evrmarry==0].age
Explanation: And then extract the age at first marriage for people who are married, and the age at time of interview for people who are not.
End of explanation
from collections import Counter
def EstimateHazardFunction(complete, ongoing, label='', verbose=False):
Estimates the hazard function by Kaplan-Meier.
http://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator
complete: list of complete lifetimes
ongoing: list of ongoing lifetimes
label: string
verbose: whether to display intermediate results
if np.sum(np.isnan(complete)):
raise ValueError("complete contains NaNs")
if np.sum(np.isnan(ongoing)):
raise ValueError("ongoing contains NaNs")
hist_complete = Counter(complete)
hist_ongoing = Counter(ongoing)
ts = list(hist_complete | hist_ongoing)
ts.sort()
at_risk = len(complete) + len(ongoing)
lams = pd.Series(index=ts)
for t in ts:
ended = hist_complete[t]
censored = hist_ongoing[t]
lams[t] = ended / at_risk
if verbose:
print(t, at_risk, ended, censored, lams[t])
at_risk -= ended + censored
return survival.HazardFunction(lams, label=label)
Explanation: The following function uses Kaplan-Meier to estimate the hazard function.
End of explanation
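The estimator implemented above has a very compact form: at each observed time the hazard is the number of completed lifetimes divided by the number still at risk, which is exactly the ended / at_risk line in the code, and the survival curve then follows as a product:
$\hat{\lambda}(t) = d_t / n_t$
$\hat{S}(t) = \prod_{t_i \le t} \left(1 - \hat{\lambda}(t_i)\right)$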
hf = EstimateHazardFunction(complete, ongoing)
thinkplot.Plot(hf)
thinkplot.Config(xlabel='Age (years)',
ylabel='Hazard')
sf = hf.MakeSurvival()
thinkplot.Plot(sf)
thinkplot.Config(xlabel='Age (years)',
ylabel='Prob unmarried',
ylim=[0, 1])
Explanation: Here is the hazard function and corresponding survival function.
End of explanation
def EstimateMarriageSurvival(resp):
Estimates the survival curve.
resp: DataFrame of respondents
returns: pair of HazardFunction, SurvivalFunction
# NOTE: Filling missing values would be better than dropping them.
complete = resp[resp.evrmarry == 1].agemarry.dropna()
ongoing = resp[resp.evrmarry == 0].age
hf = EstimateHazardFunction(complete, ongoing)
sf = hf.MakeSurvival()
return hf, sf
def ResampleSurvival(resp, iters=101):
Resamples respondents and estimates the survival function.
resp: DataFrame of respondents
iters: number of resamples
_, sf = EstimateMarriageSurvival(resp)
thinkplot.Plot(sf)
low, high = resp.agemarry.min(), resp.agemarry.max()
ts = np.arange(low, high, 1/12.0)
ss_seq = []
for _ in range(iters):
sample = thinkstats2.ResampleRowsWeighted(resp)
_, sf = EstimateMarriageSurvival(sample)
ss_seq.append(sf.Probs(ts))
low, high = thinkstats2.PercentileRows(ss_seq, [5, 95])
thinkplot.FillBetween(ts, low, high, color='gray', label='90% CI')
Explanation: Quantifying uncertainty
To see how much the results depend on random sampling, we'll use a resampling process again.
End of explanation
ResampleSurvival(resp6)
thinkplot.Config(xlabel='Age (years)',
ylabel='Prob unmarried',
xlim=[12, 46],
ylim=[0, 1],
loc='upper right')
Explanation: The following plot shows the survival function based on the raw data and a 90% CI based on resampling.
End of explanation
resp5 = survival.ReadFemResp1995()
resp6 = survival.ReadFemResp2002()
resp7 = survival.ReadFemResp2010()
resps = [resp5, resp6, resp7]
Explanation: The SF based on the raw data falls outside the 90% CI because the CI is based on weighted resampling, and the raw data is not. You can confirm that by replacing ResampleRowsWeighted with ResampleRows in ResampleSurvival.
More data
To generate survival curves for each birth cohort, we need more data, which we can get by combining data from several NSFG cycles.
End of explanation
def AddLabelsByDecade(groups, **options):
Draws fake points in order to add labels to the legend.
groups: GroupBy object
thinkplot.PrePlot(len(groups))
for name, _ in groups:
label = '%d0s' % name
thinkplot.Plot([15], [1], label=label, **options)
def EstimateMarriageSurvivalByDecade(groups, **options):
Groups respondents by decade and plots survival curves.
groups: GroupBy object
thinkplot.PrePlot(len(groups))
for _, group in groups:
_, sf = EstimateMarriageSurvival(group)
thinkplot.Plot(sf, **options)
def PlotResampledByDecade(resps, iters=11, predict_flag=False, omit=None):
Plots survival curves for resampled data.
resps: list of DataFrames
iters: number of resamples to plot
predict_flag: whether to also plot predictions
for i in range(iters):
samples = [thinkstats2.ResampleRowsWeighted(resp)
for resp in resps]
sample = pd.concat(samples, ignore_index=True)
groups = sample.groupby('decade')
if omit:
groups = [(name, group) for name, group in groups
if name not in omit]
# TODO: refactor this to collect resampled estimates and
# plot shaded areas
if i == 0:
AddLabelsByDecade(groups, alpha=0.7)
if predict_flag:
PlotPredictionsByDecade(groups, alpha=0.1)
EstimateMarriageSurvivalByDecade(groups, alpha=0.1)
else:
EstimateMarriageSurvivalByDecade(groups, alpha=0.2)
Explanation: The following is the code from survival.py that generates SFs broken down by decade of birth.
End of explanation
PlotResampledByDecade(resps)
thinkplot.Config(xlabel='Age (years)',
ylabel='Prob unmarried',
xlim=[13, 45],
ylim=[0, 1])
Explanation: Here are the results for the combined data.
End of explanation
def PlotPredictionsByDecade(groups, **options):
Groups respondents by decade and plots survival curves.
groups: GroupBy object
hfs = []
for _, group in groups:
hf, sf = EstimateMarriageSurvival(group)
hfs.append(hf)
thinkplot.PrePlot(len(hfs))
for i, hf in enumerate(hfs):
if i > 0:
hf.Extend(hfs[i-1])
sf = hf.MakeSurvival()
thinkplot.Plot(sf, **options)
Explanation: We can generate predictions by assuming that the hazard function of each generation will be the same as for the previous generation.
End of explanation
PlotResampledByDecade(resps, predict_flag=True)
thinkplot.Config(xlabel='Age (years)',
ylabel='Prob unmarried',
xlim=[13, 45],
ylim=[0, 1])
Explanation: And here's what that looks like.
End of explanation
preg = nsfg.ReadFemPreg()
complete = preg.query('outcome in [1, 3, 4]').prglngth
print('Number of complete pregnancies', len(complete))
ongoing = preg[preg.outcome == 6].prglngth
print('Number of ongoing pregnancies', len(ongoing))
hf = EstimateHazardFunction(complete, ongoing)
sf1 = hf.MakeSurvival()
Explanation: Remaining lifetime
Distributions with different shapes yield different behavior for remaining lifetime as a function of age.
End of explanation
rem_life1 = sf1.RemainingLifetime()
thinkplot.Plot(rem_life1)
thinkplot.Config(title='Remaining pregnancy length',
xlabel='Weeks',
ylabel='Mean remaining weeks')
Explanation: Here's the expected remaining duration of a pregnancy as a function of the number of weeks elapsed. After week 36, the process becomes "memoryless".
End of explanation
hf, sf2 = EstimateMarriageSurvival(resp6)
func = lambda pmf: pmf.Percentile(50)
rem_life2 = sf2.RemainingLifetime(filler=np.inf, func=func)
thinkplot.Plot(rem_life2)
thinkplot.Config(title='Years until first marriage',
ylim=[0, 15],
xlim=[11, 31],
xlabel='Age (years)',
ylabel='Median remaining years')
Explanation: And here's the median remaining time until first marriage as a function of age.
End of explanation
def CleanData(resp):
Cleans respondent data.
resp: DataFrame
resp.cmdivorcx.replace([9998, 9999], np.nan, inplace=True)
resp['notdivorced'] = resp.cmdivorcx.isnull().astype(int)
resp['duration'] = (resp.cmdivorcx - resp.cmmarrhx) / 12.0
resp['durationsofar'] = (resp.cmintvw - resp.cmmarrhx) / 12.0
month0 = pd.to_datetime('1899-12-15')
dates = [month0 + pd.DateOffset(months=cm)
for cm in resp.cmbirth]
resp['decade'] = (pd.DatetimeIndex(dates).year - 1900) // 10
CleanData(resp6)
married6 = resp6[resp6.evrmarry==1]
CleanData(resp7)
married7 = resp7[resp7.evrmarry==1]
# Solution
def ResampleDivorceCurve(resps):
Plots divorce curves based on resampled data.
resps: list of respondent DataFrames
for _ in range(11):
samples = [thinkstats2.ResampleRowsWeighted(resp)
for resp in resps]
sample = pd.concat(samples, ignore_index=True)
PlotDivorceCurveByDecade(sample, color='#225EA8', alpha=0.1)
thinkplot.Show(xlabel='years',
axis=[0, 28, 0, 1])
# Solution
def ResampleDivorceCurveByDecade(resps):
Plots divorce curves for each birth cohort.
resps: list of respondent DataFrames
for i in range(41):
samples = [thinkstats2.ResampleRowsWeighted(resp)
for resp in resps]
sample = pd.concat(samples, ignore_index=True)
groups = sample.groupby('decade')
if i == 0:
survival.AddLabelsByDecade(groups, alpha=0.7)
EstimateSurvivalByDecade(groups, alpha=0.1)
thinkplot.Config(xlabel='Years',
ylabel='Fraction undivorced',
axis=[0, 28, 0, 1])
# Solution
def EstimateSurvivalByDecade(groups, **options):
Groups respondents by decade and plots survival curves.
groups: GroupBy object
thinkplot.PrePlot(len(groups))
for name, group in groups:
_, sf = EstimateSurvival(group)
thinkplot.Plot(sf, **options)
# Solution
def EstimateSurvival(resp):
Estimates the survival curve.
resp: DataFrame of respondents
returns: pair of HazardFunction, SurvivalFunction
complete = resp[resp.notdivorced == 0].duration.dropna()
ongoing = resp[resp.notdivorced == 1].durationsofar.dropna()
hf = survival.EstimateHazardFunction(complete, ongoing)
sf = hf.MakeSurvival()
return hf, sf
# Solution
ResampleDivorceCurveByDecade([married6, married7])
Explanation: Exercises
Exercise: In NSFG Cycles 6 and 7, the variable cmdivorcx contains the date of divorce for the respondent’s first marriage, if applicable, encoded in century-months.
Compute the duration of marriages that have ended in divorce, and the duration, so far, of marriages that are ongoing. Estimate the hazard and survival curve for the duration of marriage.
Use resampling to take into account sampling weights, and plot data from several resamples to visualize sampling error.
Consider dividing the respondents into groups by decade of birth, and possibly by age at first marriage.
End of explanation |
9,056 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook we'll look at interfacing between the composability and ability to generate complex visualizations that HoloViews provides, the power of pandas library dataframes for manipulating tabular data, and the great looking statistical plots and analyses provided by the Seaborn library.
We also explore how a pandas DFrame can be wrapped in a general purpose Element type, which can either be used to convert the data into other standard Element types or be visualized directly using a wide array of Seaborn-based plotting options, including
Step1: We can now select static and animation backends
Step2: Visualizing Distributions of Data <a id='Histogram'></a>
If import seaborn succeeds, HoloViews will provide a number of additional Element types, including Distribution, Bivariate, TimeSeries, Regression, and DFrame (a Seaborn-visualizable version of the DFrame Element class provided when only pandas is available).
We'll start by generating a number of Distribution Elements containing normal distributions with different means and standard deviations and overlaying them. Using the %%opts magic you can specify specific plot and style options as usual; here we deactivate the default histogram and shade the kernel density estimate
Step3: Thanks to Seaborn you can choose to plot your distribution as histograms, kernel density estimates, or rug plots
Step4: We can also visualize the same data with Bivariate distributions
Step5: This plot type also has the option of enabling a joint plot with marginal distribution along each axis, and the kind option lets you control whether to visualize the distribution as a scatter, reg, resid, kde or hex plot
Step6: Bivariate plots also support overlaying and animations, so let's generate some two dimensional normally distributed data with varying mean and standard deviation.
Working with TimeSeries data
Next let's take a look at the TimeSeries View type, which allows you to visualize statistical time-series data. TimeSeries data can take the form of a number of observations of some dependent variable at multiple timepoints. By controlling the plot and style option the data can be visualized in a number of ways, including confidence intervals, error bars, traces or scatter points.
Let's begin by defining a function to generate sine wave time courses with varying phase and noise levels.
Step7: Now we can create HoloMaps of sine and cosine curves with varying levels of observational and independent error.
Step8: First let's visualize the sine stack with a confidence interval
Step9: And the cosine stack with error bars
Step10: Since the %%opts cell magic has applied the style to each object individually, we can now overlay the two with different visualization styles in the same plot
Step11: Let's apply the databounds across the HoloMap again and visualize all the observations as unit points
Step12: Working with pandas DataFrames
In order to make this a little more interesting, we can use some of the real-world datasets provided with the Seaborn library. The holoviews DFrame object can be used to wrap the Seaborn-generated pandas dataframes like this
Step13: By default the DFrame simply inherits the column names of the data frames and converts them into Dimensions. This works very well as a default, but if you wish to override it, you can either supply an explicit list of key dimensions to the DFrame object or a dimensions dictionary, which maps from the column name to the appropriate Dimension object. In this case, we define a Month Dimension, which defines the ordering of months
Step14: Flight passenger data
Now we can easily use the conversion methods on the DFrame object to create HoloViews Elements, e.g. a Seaborn-based TimeSeries Element and a HoloViews standard HeatMap
Step15: Tipping data <a id='Regression'/>
A simple regression can easily be visualized using the Regression Element type. However, here we'll also split out smoker and sex as Dimensions, overlaying the former and laying out the latter, so that we can compare tipping between smokers and non-smokers, separately for males and females.
Step16: When you're dealing with higher dimensional data you can also work with pandas dataframes directly by displaying the DFrame Element directly. This allows you to perform all the standard HoloViews operations on more complex Seaborn and pandas plot types, as explained in the following sections.
Iris Data <a id='Box'></a>
Let's visualize the relationship between sepal length and width in the Iris flower dataset. Here we can make use of some of the inbuilt Seaborn plot types, a pairplot which can plot each variable in a dataset against each other variable. We can customize this plot further by passing arguments via the style options, to define what plot types the pairplot will use and define the dimension to which we will apply the hue option.
Step17: When working with a DFrame object directly, you can select particular columns of your DFrame to visualize by supplying x and y parameters corresponding to the Dimensions or columns you want to visualize. Here we'll visualize the sepal_width and sepal_length by species as a box plot and violin plot, respectively.
Step18: Titanic passenger data <a id='Correlation'></a>
The Titanic passenger data is a truly large dataset, so we can make use of some of the more advanced features of Seaborn and pandas. Above we saw the usage of a pairplot, which allows you to quickly compare each variable in your dataset. HoloViews also supports Seaborn-based FacetGrids. The FacetGrid specification is simply passed via the style options, where the map keyword should be supplied as a tuple of the plotting function to use and the Dimensions to place on the x axis and y axis. You may also specify the Dimensions to lay out along the rows and columns of the plot, and the hue groups
Step19: FacetGrids support most Seaborn and matplotlib plot types
Step20: Finally, we can summarize our data using a correlation plot and split out Dimensions using the .holomap method, which groups by the specified dimension, giving you a frame for each value along that Dimension. Here we group by the survived Dimension (with 1 if the passenger survived and 0 otherwise), which thus provides a widget to allow us to compare those two values. | Python Code:
import itertools
import numpy as np
import pandas as pd
import seaborn as sb
import holoviews as hv
np.random.seed(9221999)
Explanation: In this notebook we'll look at interfacing between the composability and ability to generate complex visualizations that HoloViews provides, the power of pandas library dataframes for manipulating tabular data, and the great looking statistical plots and analyses provided by the Seaborn library.
We also explore how a pandas DFrame can be wrapped in a general purpose Element type, which can either be used to convert the data into other standard Element types or be visualized directly using a wide array of Seaborn-based plotting options, including:
regression plots
correlation plots
box plots
autocorrelation plots
scatter matrices
histograms
scatter or line plots
This tutorial assumes you're already familiar with some of the core concepts of HoloViews, which are explained in the other Tutorials.
This tutorial requires NumPy, Pandas, and Seaborn to be installed and imported:
End of explanation
%reload_ext holoviews.ipython
%output holomap='widgets' fig='svg'
Explanation: We can now select static and animation backends:
End of explanation
%%opts Distribution (hist=False kde_kws=dict(shade=True))
d1 = 25 * np.random.randn(500) + 450
d2 = 45 * np.random.randn(500) + 540
d3 = 55 * np.random.randn(500) + 590
hv.Distribution(d1, label='Blue') *\
hv.Distribution(d2, label='Red') *\
hv.Distribution(d3, label='Yellow')
Explanation: Visualizing Distributions of Data <a id='Histogram'></a>
If import seaborn succeeds, HoloViews will provide a number of additional Element types, including Distribution, Bivariate, TimeSeries, Regression, and DFrame (a Seaborn-visualizable version of the DFrame Element class provided when only pandas is available).
We'll start by generating a number of Distribution Elements containing normal distributions with different means and standard deviations and overlaying them. Using the %%opts magic you can specify specific plot and style options as usual; here we deactivate the default histogram and shade the kernel density estimate:
End of explanation
%%opts Distribution (rug=True kde_kws={'color':'indianred','linestyle':'--'})
hv.Distribution(np.random.randn(10), kdims=['Activity'])
Explanation: Thanks to Seaborn you can choose to plot your distribution as histograms, kernel density estimates, or rug plots:
End of explanation
%%opts Bivariate.A (shade=True cmap='Blues') Bivariate.B (shade=True cmap='Reds') Bivariate.C (shade=True cmap='Greens')
hv.Bivariate(np.array([d1, d2]).T, group='A') +\
hv.Bivariate(np.array([d1, d3]).T, group='B') +\
hv.Bivariate(np.array([d2, d3]).T, group='C')
Explanation: We can also visualize the same data with Bivariate distributions:
End of explanation
%%opts Bivariate [joint=True] (kind='kde' cmap='Blues')
hv.Bivariate(np.array([d1, d2]).T, group='A')
Explanation: This plot type also has the option of enabling a joint plot with marginal distribution along each axis, and the kind option lets you control whether to visualize the distribution as a scatter, reg, resid, kde or hex plot:
End of explanation
def sine_wave(n_x, obs_err_sd=1.5, tp_err_sd=.3, phase=0):
x = np.linspace(0+phase, (n_x - 1) / 2+phase, n_x)
y = np.sin(x) + np.random.normal(0, obs_err_sd) + np.random.normal(0, tp_err_sd, n_x)
return y
Explanation: Bivariate plots also support overlaying and animations, so let's generate some two dimensional normally distributed data with varying mean and standard deviation.
Working with TimeSeries data
Next let's take a look at the TimeSeries View type, which allows you to visualize statistical time-series data. TimeSeries data can take the form of a number of observations of some dependent variable at multiple timepoints. By controlling the plot and style option the data can be visualized in a number of ways, including confidence intervals, error bars, traces or scatter points.
Let's begin by defining a function to generate sine wave time courses with varying phase and noise levels.
End of explanation
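As a quick sanity check of the generator defined above (a sketch using plain NumPy only, no HoloViews machinery assumed), a single call returns one noisy trace of the requested length, and the phase argument is what will turn it into a "cosine" trace below:
sine_trace = sine_wave(31)                # one noisy sine trace with the default error levels
cosine_like = sine_wave(31, phase=np.pi)  # the phase shift used for the cosine stack below
print(sine_trace.shape, cosine_like.shape)  # both are length-31 arrays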
sine_stack = hv.HoloMap(kdims=['Observation error','Random error'])
cos_stack = hv.HoloMap(kdims=['Observation error', 'Random error'])
for oe, te in itertools.product(np.linspace(0.5,2,4), np.linspace(0.5,2,4)):
sines = np.array([sine_wave(31, oe, te) for _ in range(20)])
sine_stack[(oe, te)] = hv.TimeSeries(sines, label='Sine', group='Activity',
kdims=['Time', 'Observation'])
cosines = np.array([sine_wave(31, oe, te, phase=np.pi) for _ in range(20)])
cos_stack[(oe, te)] = hv.TimeSeries(cosines, group='Activity',label='Cosine',
kdims=['Time', 'Observation'])
Explanation: Now we can create HoloMaps of sine and cosine curves with varying levels of observational and independent error.
End of explanation
%%opts TimeSeries [apply_databounds=True] (ci=95 color='indianred')
sine_stack
Explanation: First let's visualize the sine stack with a confidence interval:
End of explanation
%%opts TimeSeries (err_style='ci_bars')
cos_stack.last
Explanation: And the cosine stack with error bars:
End of explanation
cos_stack.last * sine_stack.last
Explanation: Since the %%opts cell magic has applied the style to each object individually, we can now overlay the two with different visualization styles in the same plot:
End of explanation
%%opts TimeSeries (err_style='unit_points')
sine_stack * cos_stack
Explanation: Let's apply the databounds across the HoloMap again and visualize all the observations as unit points:
End of explanation
iris = hv.DFrame(sb.load_dataset("iris"))
tips = hv.DFrame(sb.load_dataset("tips"))
titanic = hv.DFrame(sb.load_dataset("titanic"))
Explanation: Working with pandas DataFrames
In order to make this a little more interesting, we can use some of the real-world datasets provided with the Seaborn library. The holoviews DFrame object can be used to wrap the Seaborn-generated pandas dataframes like this:
End of explanation
flights_data = sb.load_dataset('flights')
dimensions = {'month': hv.Dimension('Month', values=list(flights_data.month[0:12])),
'passengers': hv.Dimension('Passengers', type=int),
'year': hv.Dimension('Year', type=int)}
flights = hv.DFrame(flights_data, dimensions=dimensions)
%output fig='png' dpi=100 size=150
Explanation: By default the DFrame simply inherits the column names of the data frames and converts them into Dimensions. This works very well as a default, but if you wish to override it, you can either supply an explicit list of key dimensions to the DFrame object or a dimensions dictionary, which maps from the column name to the appropriate Dimension object. In this case, we define a Month Dimension, which defines the ordering of months:
End of explanation
%%opts TimeSeries (err_style='unit_traces' err_palette='husl') HeatMap [xrotation=30 aspect=2]
flights.timeseries(['Year', 'Month'], 'Passengers', label='Airline', group='Passengers') +\
flights.heatmap(['Year', 'Month'], 'Passengers', label='Airline', group='Passengers')
Explanation: Flight passenger data
Now we can easily use the conversion methods on the DFrame object to create HoloViews Elements, e.g. a Seaborn-based TimeSeries Element and a HoloViews standard HeatMap:
End of explanation
%%opts Regression [apply_databounds=True]
tips.regression('total_bill', 'tip', mdims=['smoker','sex'],
extents=(0, 0, 50, 10), reduce_fn=np.mean).overlay('smoker').layout('sex')
Explanation: Tipping data <a id='Regression'/>
A simple regression can easily be visualized using the Regression Element type. However, here we'll also split out smoker and sex as Dimensions, overlaying the former and laying out the latter, so that we can compare tipping between smokers and non-smokers, separately for males and females.
End of explanation
%%opts DFrame (diag_kind='kde' kind='reg' hue='species')
iris.clone(label="Iris Data", plot_type='pairplot')
Explanation: When you're dealing with higher dimensional data you can also work with pandas dataframes directly by displaying the DFrame Element directly. This allows you to perform all the standard HoloViews operations on more complex Seaborn and pandas plot types, as explained in the following sections.
Iris Data <a id='Box'></a>
Let's visualize the relationship between sepal length and width in the Iris flower dataset. Here we can make use of some of the inbuilt Seaborn plot types, a pairplot which can plot each variable in a dataset against each other variable. We can customize this plot further by passing arguments via the style options, to define what plot types the pairplot will use and define the dimension to which we will apply the hue option.
End of explanation
%%opts DFrame [show_grid=False]
iris.clone(x='species', y='sepal_width', plot_type='boxplot') + iris.clone(x='species', y='sepal_length', plot_type='violinplot')
Explanation: When working with a DFrame object directly, you can select particular columns of your DFrame to visualize by supplying x and y parameters corresponding to the Dimensions or columns you want to visualize. Here we'll visualize the sepal_width and sepal_length by species as a box plot and violin plot, respectively.
End of explanation
%%opts DFrame (map=('barplot', 'alive', 'age') col='class' row='sex' hue='pclass' aspect=1.0)
titanic.clone(plot_type='facetgrid')
Explanation: Titanic passenger data <a id='Correlation'></a>
The Titanic passenger data is a truly large dataset, so we can make use of some of the more advanced features of Seaborn and pandas. Above we saw the usage of a pairplot, which allows you to quickly compare each variable in your dataset. HoloViews also supports Seaborn-based FacetGrids. The FacetGrid specification is simply passed via the style options, where the map keyword should be supplied as a tuple of the plotting function to use and the Dimensions to place on the x axis and y axis. You may also specify the Dimensions to lay out along the rows and columns of the plot, and the hue groups:
End of explanation
%%opts DFrame (map=('regplot', 'age', 'fare') col='class' hue='class')
titanic.clone(plot_type='facetgrid')
Explanation: FacetGrids support most Seaborn and matplotlib plot types:
End of explanation
%%output holomap='widgets' size=200
titanic.clone(titanic.data.dropna(), plot_type='corrplot').holomap(['survived'])
Explanation: Finally, we can summarize our data using a correlation plot and split out Dimensions using the .holomap method, which groups by the specified dimension, giving you a frame for each value along that Dimension. Here we group by the survived Dimension (with 1 if the passenger survived and 0 otherwise), which thus provides a widget to allow us to compare those two values.
End of explanation |
9,057 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
For high dpi displays.
Step1: 0. General note
This example compares pressure calculated from pytheos and original publication for the gold scale by Dorogokupets 2015.
1. Global setup
Step2: 3. Compare
Step3: Table is not given for this publication. | Python Code:
%config InlineBackend.figure_format = 'retina'
Explanation: For high dpi displays.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
from uncertainties import unumpy as unp
import pytheos as eos
Explanation: 0. General note
This example compares pressure calculated from pytheos and original publication for the gold scale by Dorogokupets 2015.
1. Global setup
End of explanation
eta = np.linspace(1., 0.65, 8)
print(eta)
dorogokupets2015_au = eos.gold.Dorogokupets2015()
help(dorogokupets2015_au)
dorogokupets2015_au.print_equations()
dorogokupets2015_au.print_equations()
dorogokupets2015_au.print_parameters()
v0 = 67.84742110765599
dorogokupets2015_au.three_r
v = v0 * (eta)
temp = 2500.
p = dorogokupets2015_au.cal_p(v, temp * np.ones_like(v))
print('for T = ', temp)
for eta_i, p_i in zip(eta, p):
print("{0: .3f} {1: .2f}".format(eta_i, p_i))
Explanation: 3. Compare
End of explanation
v = dorogokupets2015_au.cal_v(p, temp * np.ones_like(p), min_strain=0.6)
print((v/v0))
Explanation: Table is not given for this publication.
End of explanation |
9,058 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Code Testing and CI
Version 0.1
The notebook contains problems about code testing and continuous integration.
E Tollerud (STScI)
Problem 1
Step1: 1b
Step2: 1d
Step3: 1e
Step4: 1f
Step5: 1g
Step6: This should yield a report, which you can use to decide if you need to add more tests to acheive complete coverage. Check out the command line arguments to see if you can get a more detailed line-by-line report.
Problem 2
Step7: 2b
This test has an intentional bug... but depending how you right the test you might not catch it... Use unit tests to find it! (and then fix it...)
Step9: 2c
There are (at least) two significant bugs in this code (one fairly apparent, one much more subtle). Try to catch them both, and write a regression test that covers those cases once you've found them.
One note about this function
Step11: 2d
Hint
Step12: Problem 3
Step13: 3b
Step14: Be sure to commit and push this to github before proceeding | Python Code:
!conda install pytest pytest-cov
Explanation: Code Testing and CI
Version 0.1
The notebook contains problems about code testing and continuous integration.
E Tollerud (STScI)
Problem 1: Set up py.test in your repo
In this problem we'll aim to get the py.test testing framework up and running in the code repository you set up in the last set of problems. We can then use it to collect and run tests of the code.
1a: Ensure py.test is installed
Of course py.test must actually be installed before you can use it. The commands below should work for the Anaconda Python Distribution, but if you have some other Python installation you'll want to install pytest (and its coverage plugin) as directed in the install instructions for py.test.
End of explanation
!mkdir #complete
!touch #complete
%%file <yourpackage>/tests/test_something.py
def test_something_func():
assert #complete
Explanation: 1b: Ensure your repo has code suitable for unit tests
Depending on what your code actually does, you might need to modify it to actually perform something testable. For example, if all it does is print something, you might find it difficult to write an effective unit test. Try adding a function that actually performs some operation and returns something different depending on various inputs. That tends to be the easiest function to unit-test: one with a clear "right" answer in certain situations.
Also be sure you have cded to the root of the repo for pytest to operate correctly.
1c: Add a test file with a test function
The test must be part of the package and follow the convention that the file and the function begin with test to get picked up by the test collection machinery. Inside the test function, you'll need some code that fails if the test condition fails. The easiest way to do this is with an assert statement, which raises an error if its first argument is False.
Hint: remember that to be a valid python package, a directory must have an __init__.py
End of explanation
from <yourpackage>.tests import test_something
test_something.test_something_func()
Explanation: 1d: Run the test directly
While this is not how you'd ordinarily run the tests, it's instructive to first try to execute the test directly, without using any fancy test framework. If your test function just runs, all is good. If you get an exception, the test failed (which in this case might be good).
Hint: you may need to use reload or just re-start your notebook kernel to get the cell below to recognize the changes.
End of explanation
!py.test
Explanation: 1e: Run the tests with py.test
Once you have an example test, you can try invoking py.test, which is how you should run the tests in the future. This should yield a report that shows a dot for each test. If all you see are dots, the tests ran successfully. But if there's a failure, you'll see the error, and the traceback showing where the error happened.
End of explanation
!py.test
Explanation: 1f: Make the test fail (or succeed...)
If your test failed when you ran it, you should now try to fix the test (or the code...) to make it work; if it succeeded, modify the test so that it fails (or vice versa). Then try running py.test again:
End of explanation
!py.test --cov=<yourproject> tests/ #complete
Explanation: 1g: Check coverage
The coverage plugin we installed will let you check which lines of your code are actually run by the testing suite.
End of explanation
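The command above only reports per-file coverage percentages; pytest-cov can also list exactly which line numbers your tests never execute. A sketch of that invocation (keep the same placeholder for your project name):
!py.test --cov=<yourproject> --cov-report=term-missing tests/ #complete
The "Missing" column it prints is the list of statements no test reached, which is a good guide for what to test next.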
#%%file <yourproject>/<filename>.py #complete, or just use your editor
# `math` here is for *scalar* math... normally you'd use numpy but this makes it a bit simpler to debug
import math
inf = float('inf') # this is a quick-and-easy way to get the "infinity" value
def function_a(angle=180):
anglerad = math.radians(angle)
return math.sin(anglerad/2)/math.sin(anglerad)
Explanation: This should yield a report, which you can use to decide if you need to add more tests to achieve complete coverage. Check out the command line arguments to see if you can get a more detailed line-by-line report.
Problem 2: Implement some unit tests
The sub-problems below each contain different unit testing complications. Place the code from the snippets in your repository (either using an editor or the %%file trick), and write tests to ensure the correctness of the functions. Try to achieve 100% coverage for all of them (especially to catch some hidden bugs!).
Also, note that some of these examples are not really practical - that is, you wouldn't want to do this in real code because there's better ways to do it. But because of that, they are good examples of where something can go subtly wrong... and therefore where you want to make tests!
2a
When you have a function with a default, it's wise to test both the with-default call (function_a()), and the call where you give a value (e.g. function_a(90))
Hint: Beware of numbers that come close to 0... write your tests to accommodate floating-point errors!
End of explanation
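For example, a pair of tests along these lines covers both call signatures (a sketch only -- the commented import path is a placeholder for wherever you saved function_a, and the expected values are assumptions you should check yourself):
import math
import pytest
# from <yourpackage>.<filename> import function_a  # adjust to your actual layout

def test_function_a_with_value():
    # sin(45 deg) / sin(90 deg) = sqrt(2)/2, so this value is easy to pin down
    assert function_a(90) == pytest.approx(math.sqrt(2) / 2)

def test_function_a_default():
    # with the default angle of 180 deg, sin(angle) is only *approximately* zero
    # in floating point -- decide what you actually expect before asserting it
    result = function_a()
    assert result > 0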
#%%file <yourproject>/<filename>.py #complete, or just use your editor
def function_b(value):
if value < 0:
return value - 1
else:
value2 = subfunction_b(value + 1)
return value + value2
def subfunction_b(inp):
vals_to_accum = []
for i in range(10):
vals_to_accum.append(inp ** (i/10))
if vals_to_accum[-1] > 2:
vals.append(100)
# really you would use numpy to do this kind of number-crunching... but we're doing this for the sake of example right now
return sum(vals_to_accum)
Explanation: 2b
This test has an intentional bug... but depending on how you write the test you might not catch it... Use unit tests to find it! (and then fix it...)
End of explanation
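To catch it, make sure your tests exercise both branches of function_b; something like the sketch below (the input values are arbitrary choices) is enough to surface the problem through coverage:
def test_function_b_negative_branch():
    # value < 0 returns before subfunction_b is ever called
    assert function_b(-2) == -3

def test_function_b_positive_branch():
    # simply running the else branch (and therefore subfunction_b) is what
    # flushes out the hidden problem -- the exact return value is up to your fix
    result = function_b(3)
    assert result > 3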
#%%file <yourproject>/<filename>.py #complete, or just use your editor
import math
# know that to not have to worry about this, you should just use `astropy.coordinates`.
def angle_to_sexigesimal(angle_in_degrees, decimals=3):
Convert the given angle to a sexigesimal string of hours of RA.
Parameters
----------
angle_in_degrees : float
A scalar angle, expressed in degrees
Returns
-------
hms_str : str
The sexigesimal string giving the hours, minutes, and seconds of RA for the given `angle_in_degrees`
if math.floor(decimals) != decimals:
raise ValueError('decimals should be an integer!')
hours_num = angle_in_degrees*24/180
hours = math.floor(hours_num)
min_num = (hours_num - hours)*60
minutes = math.floor(min_num)
seconds = (min_num - minutes)*60
format_string = '{}:{}:{:.' + str(decimals) + 'f}'
return format_string.format(hours, minutes, seconds)
Explanation: 2c
There are (at least) two significant bugs in this code (one fairly apparent, one much more subtle). Try to catch them both, and write a regression test that covers those cases once you've found them.
One note about this function: in real code you're probably better off just using the Angle object from astropy.coordinates. But this example demonstrates one of the reasons why that was created, as it's very easy to write a buggy version of this code.
Hint: you might find it useful to use astropy.coordinates.Angle to create test cases...
End of explanation
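A test built around a value you can work out by hand is a good way to start hunting (a sketch; the hinted astropy.coordinates.Angle object is another good source of reference values if you have astropy installed):
def test_angle_to_sexigesimal_45_degrees():
    # 45 degrees of RA is exactly 3 hours, 0 minutes, 0 seconds
    h, m, s = [float(part) for part in angle_to_sexigesimal(45.).split(':')]
    assert (h, m) == (3, 0)
    assert abs(s) < 1e-6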
#%%file <yourproject>/<filename>.py #complete, or just use your editor
import numpy as np
def function_d(array1=np.arange(10)*2, array2=np.arange(10), operation='-'):
Makes a matrix where the [i,j]th element is array1[i] <operation> array2[j]
if operation == '+':
return array1[:, np.newaxis] + array2
elif operation == '-':
return array1[:, np.newaxis] - array2
elif operation == '*':
return array1[:, np.newaxis] * array2
elif operation == '/':
return array1[:, np.newaxis] / array2
else:
raise ValueError('Unrecognized operation "{}"'.format(operation))
Explanation: 2d
Hint: numpy has some useful functions in numpy.testing for comparing arrays.
End of explanation
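numpy.testing.assert_array_equal compares whole matrices at once, so a small hand-checked case is a natural starting point (a sketch -- extend it to the other operations and to the edge cases you suspect):
import numpy as np
from numpy.testing import assert_array_equal

def test_function_d_small_addition():
    a = np.array([1, 2])
    b = np.array([10, 20])
    expected = np.array([[11, 21],
                         [12, 22]])
    assert_array_equal(function_d(a, b, operation='+'), expected)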
!py.test
Explanation: Problem 3: Set up travis to run your tests whenever a change is made
Now that you have a testing suite set up, you can try to turn on a continuous integration service to constantly check that any update you might send doesn't create a bug. We will use the Travis-CI service for this purpose, as it has one of the lowest barriers to entry from GitHub.
3a: Ensure the test suite is passing locally
Seems obvious, but it's easy to forget to check this and only later realize that all the trouble you thought you had setting up the CI service was because the tests were actually broken...
End of explanation
%%file .travis.yml
language: python
python:
- "3.6"
# command to install dependencies
#install: "pip install numpy" #uncomment this if your code depends on numpy or similar
# command to run tests
script: pytest
Explanation: 3b: Set up an account on travis
This turns out to be quite convenient. If you go to the Travis web site, you'll see a "Sign in with GitHub" button. You'll need to authorize Travis, but once you've done so it will automatically log you in and know which repositories are yours.
3c: Create a minimal .travis.yml file.
Before we can activate travis on our repo, we need to tell travis a variety of metadata about what's in the repository and how to run it. The template below should be sufficient for the simplest needs.
End of explanation
!git #complete
Explanation: Be sure to commit and push this to github before proceeding:
End of explanation |
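One possible completion of the git cell above (the commit message and the master branch name are just assumptions -- adapt them to your repository):
!git add .travis.yml
!git commit -m "Add minimal Travis CI configuration"
!git push origin master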
9,059 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute ICA on MEG data and remove artifacts
ICA is fit to MEG raw data.
The sources matching the ECG and EOG are automatically found and displayed.
Subsequently, artifact detection and rejection quality are assessed.
Step1: Setup paths and prepare raw data
Step2: 1) Fit ICA model using the FastICA algorithm
Step3: 2) identify bad components by analyzing latent sources.
Step4: 3) Assess component selection and unmixing quality | Python Code:
# Authors: Denis Engemann <[email protected]>
# Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.preprocessing import ICA
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
from mne.datasets import sample
Explanation: Compute ICA on MEG data and remove artifacts
ICA is fit to MEG raw data.
The sources matching the ECG and EOG are automatically found and displayed.
Subsequently, artifact detection and rejection quality are assessed.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True, add_eeg_ref=False)
raw.filter(1, 45, n_jobs=1, l_trans_bandwidth=0.5, h_trans_bandwidth=0.5,
filter_length='10s', phase='zero-double')
Explanation: Setup paths and prepare raw data
End of explanation
# Other available choices are `infomax` or `extended-infomax`
# We pass a float value between 0 and 1 to select n_components based on the
# percentage of variance explained by the PCA components.
ica = ICA(n_components=0.95, method='fastica')
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
ica.fit(raw, picks=picks, decim=3, reject=dict(mag=4e-12, grad=4000e-13))
# maximum number of components to reject
n_max_ecg, n_max_eog = 3, 1 # here we don't expect horizontal EOG components
Explanation: 1) Fit ICA model using the FastICA algorithm
End of explanation
title = 'Sources related to %s artifacts (red)'
# generate ECG epochs use detection via phase statistics
ecg_epochs = create_ecg_epochs(raw, tmin=-.5, tmax=.5, picks=picks)
ecg_inds, scores = ica.find_bads_ecg(ecg_epochs, method='ctps')
ica.plot_scores(scores, exclude=ecg_inds, title=title % 'ecg', labels='ecg')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=ecg_inds, title=title % 'ecg')
ica.plot_components(ecg_inds, title=title % 'ecg', colorbar=True)
ecg_inds = ecg_inds[:n_max_ecg]
ica.exclude += ecg_inds
# detect EOG by correlation
eog_inds, scores = ica.find_bads_eog(raw)
ica.plot_scores(scores, exclude=eog_inds, title=title % 'eog', labels='eog')
show_picks = np.abs(scores).argsort()[::-1][:5]
ica.plot_sources(raw, show_picks, exclude=eog_inds, title=title % 'eog')
ica.plot_components(eog_inds, title=title % 'eog', colorbar=True)
eog_inds = eog_inds[:n_max_eog]
ica.exclude += eog_inds
Explanation: 2) identify bad components by analyzing latent sources.
End of explanation
# estimate average artifact
ecg_evoked = ecg_epochs.average()
ica.plot_sources(ecg_evoked, exclude=ecg_inds) # plot ECG sources + selection
ica.plot_overlay(ecg_evoked, exclude=ecg_inds) # plot ECG cleaning
eog_evoked = create_eog_epochs(raw, tmin=-.5, tmax=.5, picks=picks).average()
ica.plot_sources(eog_evoked, exclude=eog_inds) # plot EOG sources + selection
ica.plot_overlay(eog_evoked, exclude=eog_inds) # plot EOG cleaning
# check the amplitudes do not change
ica.plot_overlay(raw) # EOG artifacts remain
# To save an ICA solution you can say:
# ica.save('my_ica.fif')
# You can later load the solution by saying:
# from mne.preprocessing import read_ica
# read_ica('my_ica.fif')
# Apply the solution to Raw, Epochs or Evoked like this:
# ica.apply(epochs)
Explanation: 3) Assess component selection and unmixing quality
End of explanation |
9,060 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classifying Default of Credit Card Clients
<hr>
The dataset can be downloaded from
Step1: Feature importances with forests of trees
This examples shows the use of forests of trees to evaluate the importance of features on an artificial classification task. The red bars are the feature importances of the forest, along with their inter-trees variability.
Step2: Decision Tree accuracy and time elapsed caculation
Step3: cross validation for DT
Step4: Tuning our hyperparameters using GridSearch
Step5: Random Forest accuracy and time elapsed caculation
Step6: cross validation for RF
Step7: Receiver Operating Characteristic (ROC) curve
Step8: Tuning Models using GridSearch
Step9: OOB Errors for Random Forests
Step10: Naive Bayes accuracy and time elapsed caculation
Step11: cross-validation for NB
Step12: KNN accuracy and time elapsed caculation
Step13: cross validation for KNN
Step19: Ensemble Learning
Step20: Combining different algorithms for classification with majority vote
Step21: You may be wondering why we trained the logistic regression and k-nearest neighbors classifier as part of a pipeline. The reason behind it is that, both the
logistic regression and k-nearest neighbors algorithms (using the Euclidean distance metric) are not scale-invariant in contrast with decision trees.
Now let's move on to the more exciting part and combine the individual classifiers for majority rule voting in our MajorityVoteClassifier
Step22: Evaluating and tuning the ensemble classifier
Step23: Plot class probabilities calculated by the VotingClassifier
Step24: Bagging -- Building an ensemble of classifiers from bootstrap samples
Bagging is an ensemble learning technique that is closely related to the MajorityVoteClassifier,however, instead of using the same training set to fit the individual classifiers in the ensemble, we draw bootstrap samples (random samples with replacement) from the initial training set, which is why bagging is also known as bootstrap aggregating.
Step25: Leveraging weak learners via adaptive boosting
Step26: Two-class AdaBoost
Step27: Discrete versus Real AdaBoost
Step28: Feature transformations with ensembles of trees
Step31: PCA Decomposition
Step32: Optimization
Concatenating multiple feature extraction methods
Step33: Comparison of Calibration of Classifiers
Step35: Probability Calibration curves
Step36: Plot classification probability
Step37: Recursive feature elimination with cross-validation | Python Code:
import os
from sklearn.tree import DecisionTreeClassifier, export_graphviz
import pandas as pd
import numpy as np
from sklearn.cross_validation import train_test_split
from sklearn import cross_validation, metrics
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from time import time
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score , classification_report
from sklearn.grid_search import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.metrics import precision_score, recall_score, accuracy_score, classification_report
from sklearn.preprocessing import StandardScaler
from sklearn.preprocessing import LabelEncoder
# read .csv from provided dataset
xls_filename="default of credit card clients.xls"
# df=pd.read_csv(csv_filename,index_col=0)
df=pd.read_excel(xls_filename, skiprows=1)
df.head()
df.columns
features=list(df.columns[1:-1])
X=df[features]
y = df['default payment next month']
# split dataset to 60% training and 40% testing
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.4, random_state=0)
print X_train.shape, y_train.shape
Explanation: Classifying Default of Credit Card Clients
<hr>
The dataset can be downloaded from : https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients
This dataset contains information on default payments, demographic factors, credit data, history of payment, and bill statements of credit card clients in Taiwan from April 2005 to September 2005.
Data Set Information:
The research that provided this dataset studied customers' default payments in Taiwan and compared the predictive accuracy of the probability of default among six data mining methods. From the perspective of risk management, an accurate estimate of the probability of default is more valuable than the binary classification into credible or not credible clients.
Attribute Information:
The dataset contains a binary variable, default payment (Yes = 1, No = 0), as the response variable. This study reviewed the literature and used the following 23 variables as explanatory variables:
<pre>
X1: Amount of the given credit (NT dollar): it includes both the individual consumer credit and his/her family (supplementary) credit.
X2: Gender (1 = male; 2 = female).
X3: Education (1 = graduate school; 2 = university; 3 = high school; 4 = others).
X4: Marital status (1 = married; 2 = single; 3 = others).
X5: Age (year).
X6 - X11: History of past payment. We tracked the past monthly payment records (from April to September, 2005) as follows: X6 = the repayment status in September, 2005; X7 = the repayment status in August, 2005; . . .;X11 = the repayment status in April, 2005. The measurement scale for the repayment status is: -1 = pay duly; 1 = payment delay for one month; 2 = payment delay for two months; . . .; 8 = payment delay for eight months; 9 = payment delay for nine months and above.
X12-X17: Amount of bill statement (NT dollar). X12 = amount of bill statement in September, 2005; X13 = amount of bill statement in August, 2005; . . .; X17 = amount of bill statement in April, 2005.
X18-X23: Amount of previous payment (NT dollar). X18 = amount paid in September, 2005; X19 = amount paid in August, 2005; . . .;X23 = amount paid in April, 2005.
</pre>
Optimization
Ensemble Learning
End of explanation
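Since plain accuracy numbers are reported throughout this notebook, it is worth checking how imbalanced the target is first (a quick sketch using the dataframe loaded above); roughly a fifth of clients default, so a classifier that always predicts "no default" already gets close to 80% accuracy -- keep that baseline in mind below.
print(df['default payment next month'].value_counts())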
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
# Build a forest of randomized trees and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250,
random_state=0)
forest.fit(X, y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure(num=None, figsize=(14, 10), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
importances[indices[:5]]
for f in range(5):
print("%d. feature %d - %s (%f)" % (f + 1, indices[f], features[indices[f]] ,importances[indices[f]]))
best_features = []
for i in indices[:5]:
best_features.append(features[i])
# Plot the top 5 feature importances of the forest
plt.figure(num=None, figsize=(8, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(5), importances[indices][:5],
color="r", yerr=std[indices][:5], align="center")
plt.xticks(range(5), best_features)
plt.xlim([-1, 5])
plt.show()
Explanation: Feature importances with forests of trees
This example shows the use of forests of trees to evaluate the importance of features for the credit card default task. The red bars are the feature importances of the forest, along with their inter-tree variability.
End of explanation
t0=time()
print "DecisionTree"
#dt = DecisionTreeClassifier(min_samples_split=1,random_state=99)
dt = DecisionTreeClassifier(min_samples_split=20,max_depth=5,random_state=99)
clf_dt=dt.fit(X_train,y_train)
print "Acurracy: ", clf_dt.score(X_test,y_test)
t1=time()
print "time elapsed: ", t1-t0
Explanation: Decision Tree accuracy and time elapsed calculation
End of explanation
tt0=time()
print "cross result========"
scores = cross_validation.cross_val_score(dt, X,y, cv=5)
print scores
print scores.mean()
tt1=time()
print "time elapsed: ", tt1-tt0
print "\n"
Explanation: cross validation for DT
End of explanation
from sklearn.metrics import classification_report
pipeline = Pipeline([
('clf', DecisionTreeClassifier(criterion='entropy'))
])
parameters = {
'clf__max_depth': (5, 25 , 50),
'clf__min_samples_split': (1, 5, 10),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline, parameters, n_jobs=-1, verbose=1, scoring='f1')
grid_search.fit(X_train, y_train)
print 'Best score: %0.3f' % grid_search.best_score_
print 'Best parameters set:'
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print '\t%s: %r' % (param_name, best_parameters[param_name])
predictions = grid_search.predict(X_test)
print classification_report(y_test, predictions)
Explanation: Tuning our hyperparameters using GridSearch
End of explanation
t2=time()
print "RandomForest"
rf = RandomForestClassifier(n_estimators=100,n_jobs=-1)
clf_rf = rf.fit(X_train,y_train)
print "Acurracy: ", clf_rf.score(X_test,y_test)
t3=time()
print "time elapsed: ", t3-t2
Explanation: Random Forest accuracy and time elapsed calculation
End of explanation
tt2=time()
print "cross result========"
scores = cross_validation.cross_val_score(rf, X,y, cv=5)
print scores
print scores.mean()
tt3=time()
print "time elapsed: ", tt3-tt2
print "\n"
Explanation: cross validation for RF
End of explanation
roc_auc_score(y_test,rf.predict(X_test))
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
predictions = rf.predict_proba(X_test)
false_positive_rate, recall, thresholds = roc_curve(y_test, predictions[:, 1])
roc_auc = auc(false_positive_rate, recall)
plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, recall, 'b', label='AUC = %0.2f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out')
plt.show()
Explanation: Receiver Operating Characteristic (ROC) curve
End of explanation
pipeline2 = Pipeline([
('clf', RandomForestClassifier(criterion='entropy'))
])
parameters = {
'clf__n_estimators': (25, 50, 100),
'clf__max_depth': (5, 25 , 50),
'clf__min_samples_split': (1, 5, 10),
'clf__min_samples_leaf': (1, 2, 3)
}
grid_search = GridSearchCV(pipeline2, parameters, n_jobs=-1, verbose=1, scoring='accuracy', cv=3)
grid_search.fit(X_train, y_train)
print 'Best score: %0.3f' % grid_search.best_score_
print 'Best parameters set:'
best_parameters = grid_search.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print '\t%s: %r' % (param_name, best_parameters[param_name])
predictions = grid_search.predict(X_test)
print 'Accuracy:', accuracy_score(y_test, predictions)
print classification_report(y_test, predictions)
Explanation: Tuning Models using GridSearch
End of explanation
import matplotlib.pyplot as plt
from collections import OrderedDict
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
RANDOM_STATE = 123
# NOTE: Setting the `warm_start` construction parameter to `True` disables
# support for parallelised ensembles but is necessary for tracking the OOB
# error trajectory during training.
ensemble_clfs = [
("RandomForestClassifier, max_features='sqrt'",
RandomForestClassifier(warm_start=True, oob_score=True,
max_features="sqrt",
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features='log2'",
RandomForestClassifier(warm_start=True, max_features='log2',
oob_score=True,
random_state=RANDOM_STATE)),
("RandomForestClassifier, max_features=None",
RandomForestClassifier(warm_start=True, max_features=None,
oob_score=True,
random_state=RANDOM_STATE))
]
# Map a classifier name to a list of (<n_estimators>, <error rate>) pairs.
error_rate = OrderedDict((label, []) for label, _ in ensemble_clfs)
# Range of `n_estimators` values to explore.
min_estimators = 15
max_estimators = 175
for label, clf in ensemble_clfs:
for i in range(min_estimators, max_estimators + 1):
clf.set_params(n_estimators=i)
clf.fit(X, y)
# Record the OOB error for each `n_estimators=i` setting.
oob_error = 1 - clf.oob_score_
error_rate[label].append((i, oob_error))
# Generate the "OOB error rate" vs. "n_estimators" plot.
for label, clf_err in error_rate.items():
xs, ys = zip(*clf_err)
plt.plot(xs, ys, label=label)
plt.xlim(min_estimators, max_estimators)
plt.xlabel("n_estimators")
plt.ylabel("OOB error rate")
plt.legend(loc="upper right")
plt.show()
Explanation: OOB Errors for Random Forests
End of explanation
t4=time()
print "NaiveBayes"
nb = BernoulliNB()
clf_nb=nb.fit(X_train,y_train)
print "Acurracy: ", clf_nb.score(X_test,y_test)
t5=time()
print "time elapsed: ", t5-t4
Explanation: Naive Bayes accuracy and time elapsed calculation
End of explanation
tt4=time()
print "cross result========"
scores = cross_validation.cross_val_score(nb, X,y, cv=5)
print scores
print scores.mean()
tt5=time()
print "time elapsed: ", tt5-tt4
print "\n"
Explanation: cross-validation for NB
End of explanation
t6=time()
print "KNN"
# knn = KNeighborsClassifier(n_neighbors=3)
knn = KNeighborsClassifier()
clf_knn=knn.fit(X_train, y_train)
print "Acurracy: ", clf_knn.score(X_test,y_test)
t7=time()
print "time elapsed: ", t7-t6
Explanation: KNN accuracy and time elapsed calculation
End of explanation
tt6=time()
print "cross result========"
scores = cross_validation.cross_val_score(knn, X,y, cv=5)
print scores
print scores.mean()
tt7=time()
print "time elapsed: ", tt7-tt6
print "\n"
from sklearn.cross_validation import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn import grid_search
knn = KNeighborsClassifier()
parameters = {'n_neighbors': (10, 15, 25)}
grid = grid_search.GridSearchCV(knn, parameters, n_jobs=-1, verbose=1, scoring='accuracy')
grid.fit(X_train, y_train)
print 'Best score: %0.3f' % grid.best_score_
print 'Best parameters set:'
best_parameters = grid.best_estimator_.get_params()
for param_name in sorted(parameters.keys()):
print '\t%s: %r' % (param_name, best_parameters[param_name])
predictions = grid.predict(X_test)
print classification_report(y_test, predictions)
Explanation: cross validation for KNN
End of explanation
from sklearn.base import BaseEstimator
from sklearn.base import ClassifierMixin
from sklearn.preprocessing import LabelEncoder
from sklearn.externals import six
from sklearn.base import clone
from sklearn.pipeline import _name_estimators
import numpy as np
import operator
class MajorityVoteClassifier(BaseEstimator,
ClassifierMixin):
A majority vote ensemble classifier
Parameters
----------
classifiers : array-like, shape = [n_classifiers]
Different classifiers for the ensemble
vote : str, {'classlabel', 'probability'} (default='label')
If 'classlabel' the prediction is based on the argmax of
class labels. Else if 'probability', the argmax of
the sum of probabilities is used to predict the class label
(recommended for calibrated classifiers).
weights : array-like, shape = [n_classifiers], optional (default=None)
If a list of `int` or `float` values are provided, the classifiers
are weighted by importance; Uses uniform weights if `weights=None`.
def __init__(self, classifiers, vote='classlabel', weights=None):
self.classifiers = classifiers
self.named_classifiers = {key: value for key, value
in _name_estimators(classifiers)}
self.vote = vote
self.weights = weights
def fit(self, X, y):
Fit classifiers.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Matrix of training samples.
y : array-like, shape = [n_samples]
Vector of target class labels.
Returns
-------
self : object
if self.vote not in ('probability', 'classlabel'):
raise ValueError("vote must be 'probability' or 'classlabel'"
"; got (vote=%r)"
% self.vote)
if self.weights and len(self.weights) != len(self.classifiers):
raise ValueError('Number of classifiers and weights must be equal'
'; got %d weights, %d classifiers'
% (len(self.weights), len(self.classifiers)))
# Use LabelEncoder to ensure class labels start with 0, which
# is important for np.argmax call in self.predict
self.lablenc_ = LabelEncoder()
self.lablenc_.fit(y)
self.classes_ = self.lablenc_.classes_
self.classifiers_ = []
for clf in self.classifiers:
fitted_clf = clone(clf).fit(X, self.lablenc_.transform(y))
self.classifiers_.append(fitted_clf)
return self
def predict(self, X):
Predict class labels for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Matrix of training samples.
Returns
----------
maj_vote : array-like, shape = [n_samples]
Predicted class labels.
if self.vote == 'probability':
maj_vote = np.argmax(self.predict_proba(X), axis=1)
else: # 'classlabel' vote
# Collect results from clf.predict calls
predictions = np.asarray([clf.predict(X)
for clf in self.classifiers_]).T
maj_vote = np.apply_along_axis(
lambda x:
np.argmax(np.bincount(x,
weights=self.weights)),
axis=1,
arr=predictions)
maj_vote = self.lablenc_.inverse_transform(maj_vote)
return maj_vote
def predict_proba(self, X):
Predict class probabilities for X.
Parameters
----------
X : {array-like, sparse matrix}, shape = [n_samples, n_features]
Training vectors, where n_samples is the number of samples and
n_features is the number of features.
Returns
----------
avg_proba : array-like, shape = [n_samples, n_classes]
Weighted average probability for each class per sample.
probas = np.asarray([clf.predict_proba(X)
for clf in self.classifiers_])
avg_proba = np.average(probas, axis=0, weights=self.weights)
return avg_proba
def get_params(self, deep=True):
Get classifier parameter names for GridSearch
if not deep:
return super(MajorityVoteClassifier, self).get_params(deep=False)
else:
out = self.named_classifiers.copy()
for name, step in six.iteritems(self.named_classifiers):
for key, value in six.iteritems(step.get_params(deep=True)):
out['%s__%s' % (name, key)] = value
return out
Explanation: Ensemble Learning
End of explanation
from sklearn.cross_validation import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
import numpy as np
clf1 = LogisticRegression(penalty='l2',
C=0.001,
random_state=0)
clf2 = DecisionTreeClassifier(max_depth=5,
min_samples_leaf=5,
min_samples_split=1,
criterion='entropy',
random_state=0)
clf3 = KNeighborsClassifier(n_neighbors=1,
p=2,
metric='minkowski')
pipe1 = Pipeline([['sc', StandardScaler()],
['clf', clf1]])
pipe3 = Pipeline([['sc', StandardScaler()],
['clf', clf3]])
clf_labels = ['Logistic Regression', 'Decision Tree', 'KNN']
print('10-fold cross validation:\n')
for clf, label in zip([pipe1, clf2, pipe3], clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
Explanation: Combining different algorithms for classification with majority vote
End of explanation
# Majority Rule (hard) Voting
mv_clf = MajorityVoteClassifier(
classifiers=[pipe1, clf2,pipe3])
clf_labels = ['Logistic Regression', 'Decision Tree', 'K Nearest Neighbours', 'Majority Voting']
all_clf = [pipe1, clf2, pipe3, mv_clf]
for clf, label in zip(all_clf, clf_labels):
scores = cross_val_score(estimator=clf,
X=X_train,
y=y_train,
cv=10,
scoring='roc_auc')
print("ROC AUC: %0.2f (+/- %0.2f) [%s]"
% (scores.mean(), scores.std(), label))
Explanation: You may be wondering why we trained the logistic regression and k-nearest neighbors classifiers as part of a pipeline. The reason is that both the
logistic regression and k-nearest neighbors algorithms (using the Euclidean distance metric) are not scale-invariant, in contrast with decision trees.
Now let's move on to the more exciting part and combine the individual classifiers for majority rule voting in our MajorityVoteClassifier:
End of explanation
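If you want to see that sensitivity directly, a small side-by-side check like the sketch below (the model settings here are arbitrary choices, not the notebook's tuned parameters) compares each learner on the raw and on the standardized features; the decision tree should be essentially unaffected by scaling, while KNN generally is not:
# cross_val_score was already imported from sklearn.cross_validation above
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

scale_check = [
    ('KNN raw', KNeighborsClassifier()),
    ('KNN scaled', Pipeline([('sc', StandardScaler()), ('clf', KNeighborsClassifier())])),
    ('Tree raw', DecisionTreeClassifier(max_depth=5, random_state=0)),
    ('Tree scaled', Pipeline([('sc', StandardScaler()), ('clf', DecisionTreeClassifier(max_depth=5, random_state=0))]))]
for label, model in scale_check:
    scores = cross_val_score(estimator=model, X=X_train, y=y_train, cv=3, scoring='roc_auc')
    print("%s: %.3f" % (label, scores.mean()))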
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve
from sklearn.metrics import auc
colors = ['black', 'orange', 'blue', 'green']
linestyles = [':', '--', '-.', '-']
for clf, label, clr, ls in zip(all_clf, clf_labels, colors, linestyles):
# assuming the label of the positive class is 1
y_pred = clf.fit(X_train,
y_train).predict_proba(X_test)[:, 1]
fpr, tpr, thresholds = roc_curve(y_true=y_test,
y_score=y_pred)
roc_auc = auc(x=fpr, y=tpr)
plt.plot(fpr, tpr,
color=clr,
linestyle=ls,
label='%s (auc = %0.2f)' % (label, roc_auc))
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1],
linestyle='--',
color='gray',
linewidth=2)
plt.xlim([-0.1, 1.1])
plt.ylim([-0.1, 1.1])
plt.grid()
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tight_layout()
# plt.savefig('./figures/roc.png', dpi=300)
plt.show()
Explanation: Evaluating and tuning the ensemble classifier
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import VotingClassifier
clf1 = LogisticRegression(random_state=123)
clf2 = RandomForestClassifier(random_state=123)
clf3 = GaussianNB()
eclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('gnb', clf3)],
voting='soft',
weights=[1, 1, 5])
# predict class probabilities for all classifiers
probas = [c.fit(X, y).predict_proba(X) for c in (clf1, clf2, clf3, eclf)]
# get class probabilities for the first sample in the dataset
class1_1 = [pr[0, 0] for pr in probas]
class2_1 = [pr[0, 1] for pr in probas]
# plotting
N = 4 # number of groups
ind = np.arange(N) # group positions
width = 0.35 # bar width
fig, ax = plt.subplots()
# bars for classifier 1-3
p1 = ax.bar(ind, np.hstack(([class1_1[:-1], [0]])), width, color='green')
p2 = ax.bar(ind + width, np.hstack(([class2_1[:-1], [0]])), width, color='lightgreen')
# bars for VotingClassifier
p3 = ax.bar(ind, [0, 0, 0, class1_1[-1]], width, color='blue')
p4 = ax.bar(ind + width, [0, 0, 0, class2_1[-1]], width, color='steelblue')
# plot annotations
plt.axvline(2.8, color='k', linestyle='dashed')
ax.set_xticks(ind + width)
ax.set_xticklabels(['LogisticRegression\nweight 1',
'GaussianNB\nweight 1',
'RandomForestClassifier\nweight 5',
'VotingClassifier\n(average probabilities)'],
rotation=40,
ha='right')
plt.ylim([0, 1])
plt.title('Class probabilities for sample 1 by different classifiers')
plt.legend([p1[0], p2[0]], ['class 1', 'class 2'], loc='upper left')
plt.show()
Explanation: Plot class probabilities calculated by the VotingClassifier
End of explanation
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=None)
bag = BaggingClassifier(base_estimator=tree,
n_estimators=500,
max_samples=1.0,
max_features=1.0,
bootstrap=True,
bootstrap_features=False,
n_jobs=-1,
random_state=1)
from sklearn.metrics import accuracy_score
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f'
% (tree_train, tree_test))
bag = bag.fit(X_train, y_train)
y_train_pred = bag.predict(X_train)
y_test_pred = bag.predict(X_test)
bag_train = accuracy_score(y_train, y_train_pred)
bag_test = accuracy_score(y_test, y_test_pred)
print('Bagging train/test accuracies %.3f/%.3f'
% (bag_train, bag_test))
Explanation: Bagging -- Building an ensemble of classifiers from bootstrap samples
Bagging is an ensemble learning technique that is closely related to the MajorityVoteClassifier; however, instead of using the same training set to fit the individual classifiers in the ensemble, we draw bootstrap samples (random samples with replacement) from the initial training set, which is why bagging is also known as bootstrap aggregating.
End of explanation
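To make the resampling idea concrete, here is what a single bootstrap sample of the training set looks like (a sketch for illustration only -- BaggingClassifier does this internally for every ensemble member):
rng = np.random.RandomState(1)
boot_idx = rng.randint(0, len(y_train), size=len(y_train))  # draw row indices with replacement
X_boot = X_train.values[boot_idx]
y_boot = y_train.values[boot_idx]
# with replacement, only about 63% of the original rows appear in any one sample
print("unique rows drawn: %d of %d" % (len(np.unique(boot_idx)), len(boot_idx)))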
from sklearn.ensemble import AdaBoostClassifier
tree = DecisionTreeClassifier(criterion='entropy',
max_depth=1)
ada = AdaBoostClassifier(base_estimator=tree,
n_estimators=500,
learning_rate=0.1,
random_state=0)
tree = tree.fit(X_train, y_train)
y_train_pred = tree.predict(X_train)
y_test_pred = tree.predict(X_test)
tree_train = accuracy_score(y_train, y_train_pred)
tree_test = accuracy_score(y_test, y_test_pred)
print('Decision tree train/test accuracies %.3f/%.3f'
% (tree_train, tree_test))
ada = ada.fit(X_train, y_train)
y_train_pred = ada.predict(X_train)
y_test_pred = ada.predict(X_test)
ada_train = accuracy_score(y_train, y_train_pred)
ada_test = accuracy_score(y_test, y_test_pred)
print('AdaBoost train/test accuracies %.3f/%.3f'
% (ada_train, ada_test))
Explanation: Leveraging weak learners via adaptive boosting
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_gaussian_quantiles
# Create and fit an AdaBoosted decision tree
bdt = AdaBoostClassifier(DecisionTreeClassifier(max_depth=5),
algorithm="SAMME",
n_estimators=200)
bdt.fit(X, y)
plot_colors = "br"
plot_step = 0.02
class_names = "AB"
plt.figure(figsize=(10, 5))
# Plot the decision boundaries
plt.subplot(121)
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = bdt.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=plt.cm.Paired)
plt.axis("tight")
# Plot the training points
for i, n, c in zip(range(2), class_names, plot_colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1],
c=c, cmap=plt.cm.Paired,
label="Class %s" % n)
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='upper right')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Decision Boundary')
# Plot the two-class decision scores
twoclass_output = bdt.decision_function(X)
plot_range = (twoclass_output.min(), twoclass_output.max())
plt.subplot(122)
for i, n, c in zip(range(2), class_names, plot_colors):
plt.hist(twoclass_output[y == i],
bins=10,
range=plot_range,
facecolor=c,
label='Class %s' % n,
alpha=.5)
x1, x2, y1, y2 = plt.axis()
plt.axis((x1, x2, y1, y2 * 1.2))
plt.legend(loc='upper right')
plt.ylabel('Samples')
plt.xlabel('Score')
plt.title('Decision Scores')
plt.tight_layout()
plt.subplots_adjust(wspace=0.35)
plt.show()
Explanation: Two-class AdaBoost
End of explanation
import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import zero_one_loss
from sklearn.ensemble import AdaBoostClassifier
n_estimators = 400
# A learning rate of 1. may not be optimal for both SAMME and SAMME.R
learning_rate = 1.
dt_stump = DecisionTreeClassifier(max_depth=1, min_samples_leaf=1)
dt_stump.fit(X_train, y_train)
dt_stump_err = 1.0 - dt_stump.score(X_test, y_test)
dt = DecisionTreeClassifier(max_depth=9, min_samples_leaf=1)
dt.fit(X_train, y_train)
dt_err = 1.0 - dt.score(X_test, y_test)
ada_discrete = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME")
ada_discrete.fit(X_train, y_train)
ada_real = AdaBoostClassifier(
base_estimator=dt_stump,
learning_rate=learning_rate,
n_estimators=n_estimators,
algorithm="SAMME.R")
ada_real.fit(X_train, y_train)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot([1, n_estimators], [dt_stump_err] * 2, 'k-',
label='Decision Stump Error')
ax.plot([1, n_estimators], [dt_err] * 2, 'k--',
label='Decision Tree Error')
ada_discrete_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_test)):
ada_discrete_err[i] = zero_one_loss(y_pred, y_test)
ada_discrete_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_discrete.staged_predict(X_train)):
ada_discrete_err_train[i] = zero_one_loss(y_pred, y_train)
ada_real_err = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_test)):
ada_real_err[i] = zero_one_loss(y_pred, y_test)
ada_real_err_train = np.zeros((n_estimators,))
for i, y_pred in enumerate(ada_real.staged_predict(X_train)):
ada_real_err_train[i] = zero_one_loss(y_pred, y_train)
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err,
label='Discrete AdaBoost Test Error',
color='red')
ax.plot(np.arange(n_estimators) + 1, ada_discrete_err_train,
label='Discrete AdaBoost Train Error',
color='blue')
ax.plot(np.arange(n_estimators) + 1, ada_real_err,
label='Real AdaBoost Test Error',
color='orange')
ax.plot(np.arange(n_estimators) + 1, ada_real_err_train,
label='Real AdaBoost Train Error',
color='green')
ax.set_ylim((0.0, 0.5))
ax.set_xlabel('n_estimators')
ax.set_ylabel('error rate')
leg = ax.legend(loc='upper right', fancybox=True)
leg.get_frame().set_alpha(0.7)
plt.show()
Explanation: Discrete versus Real AdaBoost
End of explanation
import numpy as np
np.random.seed(10)
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import (RandomTreesEmbedding, RandomForestClassifier,
GradientBoostingClassifier)
from sklearn.preprocessing import OneHotEncoder
from sklearn.cross_validation import train_test_split
from sklearn.metrics import roc_curve
from sklearn.pipeline import make_pipeline
n_estimator = 10
# It is important to train the ensemble of trees on a different subset
# of the training data than the linear regression model to avoid
# overfitting, in particular if the total number of leaves is
# similar to the number of training samples
X_train, X_train_lr, y_train, y_train_lr = train_test_split(X_train,
y_train,
test_size=0.5)
# Unsupervised transformation based on totally random trees
rt = RandomTreesEmbedding(max_depth=3, n_estimators=n_estimator,
random_state=0)
rt_lm = LogisticRegression()
pipeline = make_pipeline(rt, rt_lm)
pipeline.fit(X_train, y_train)
y_pred_rt = pipeline.predict_proba(X_test)[:, 1]
fpr_rt_lm, tpr_rt_lm, _ = roc_curve(y_test, y_pred_rt)
# Supervised transformation based on random forests
rf = RandomForestClassifier(max_depth=3, n_estimators=n_estimator)
rf_enc = OneHotEncoder()
rf_lm = LogisticRegression()
rf.fit(X_train, y_train)
rf_enc.fit(rf.apply(X_train))
rf_lm.fit(rf_enc.transform(rf.apply(X_train_lr)), y_train_lr)
y_pred_rf_lm = rf_lm.predict_proba(rf_enc.transform(rf.apply(X_test)))[:, 1]
fpr_rf_lm, tpr_rf_lm, _ = roc_curve(y_test, y_pred_rf_lm)
grd = GradientBoostingClassifier(n_estimators=n_estimator)
grd_enc = OneHotEncoder()
grd_lm = LogisticRegression()
grd.fit(X_train, y_train)
grd_enc.fit(grd.apply(X_train)[:, :, 0])
grd_lm.fit(grd_enc.transform(grd.apply(X_train_lr)[:, :, 0]), y_train_lr)
y_pred_grd_lm = grd_lm.predict_proba(
grd_enc.transform(grd.apply(X_test)[:, :, 0]))[:, 1]
fpr_grd_lm, tpr_grd_lm, _ = roc_curve(y_test, y_pred_grd_lm)
# The gradient boosted model by itself
y_pred_grd = grd.predict_proba(X_test)[:, 1]
fpr_grd, tpr_grd, _ = roc_curve(y_test, y_pred_grd)
# The random forest model by itself
y_pred_rf = rf.predict_proba(X_test)[:, 1]
fpr_rf, tpr_rf, _ = roc_curve(y_test, y_pred_rf)
plt.figure(1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_rt_lm, tpr_rt_lm, label='RT + LR')
plt.plot(fpr_rf, tpr_rf, label='RF')
plt.plot(fpr_rf_lm, tpr_rf_lm, label='RF + LR')
plt.plot(fpr_grd, tpr_grd, label='GBT')
plt.plot(fpr_grd_lm, tpr_grd_lm, label='GBT + LR')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve')
plt.legend(loc='best')
plt.show()
plt.figure(2)
plt.xlim(0, 0.2)
plt.ylim(0.8, 1)
plt.plot([0, 1], [0, 1], 'k--')
plt.plot(fpr_rt_lm, tpr_rt_lm, label='RT + LR')
plt.plot(fpr_rf, tpr_rf, label='RF')
plt.plot(fpr_rf_lm, tpr_rf_lm, label='RF + LR')
plt.plot(fpr_grd, tpr_grd, label='GBT')
plt.plot(fpr_grd_lm, tpr_grd_lm, label='GBT + LR')
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.title('ROC curve (zoomed in at top left)')
plt.legend(loc='best')
plt.show()
Explanation: Feature transformations with ensembles of trees
End of explanation
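As an added follow-up (not part of the original notebook), the ROC comparison above can also be summarized numerically; this sketch assumes y_test and the y_pred_* score arrays computed in the cells above are still in scope.
from sklearn.metrics import roc_auc_score
# AUC for each of the curves plotted above (variable names assumed from the cells above)
for label, scores in [('RT + LR', y_pred_rt), ('RF', y_pred_rf),
                      ('RF + LR', y_pred_rf_lm), ('GBT', y_pred_grd),
                      ('GBT + LR', y_pred_grd_lm)]:
    print('%s AUC: %.3f' % (label, roc_auc_score(y_test, scores)))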
target_names = ['Shares > 1400' , 'Shares < 1400']
X.values
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
pca = PCA(n_components=2)
reduced_X = pca.fit_transform(X)
red_x, red_y = [], []
blue_x, blue_y = [], []
for i in range(len(reduced_X)):
    if y[i] == 0:
        red_x.append(reduced_X[i][0])
        red_y.append(reduced_X[i][1])
    elif y[i] == 1:
        blue_x.append(reduced_X[i][0])
        blue_y.append(reduced_X[i][1])
# sanity check: together the two classes should cover every sample
for a in [red_x, red_y, blue_x, blue_y]:
    print(len(a))
plt.scatter(red_x, red_y, c='r', marker='x')
plt.scatter(blue_x, blue_y, c='b', marker='.')
plt.show()
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
pca = PCA(n_components=2)
X_r = pca.fit(X.values).transform(X.values)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X.values, y.values).transform(X.values)
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
lda = LinearDiscriminantAnalysis(n_components=2)
X_r2 = lda.fit(X, y).transform(X)
# Percentage of variance explained for each components
print('explained variance ratio (first two components): %s'
% str(pca.explained_variance_ratio_))
plt.figure()
colors = ['blue','red']
for i in xrange(len(colors)):
px = X_r[:, 0][y == i]
py = X_r[:, 1][y == i]
plt.scatter(px, py, c=colors[i])
plt.legend(target_names)
plt.title('PCA')
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
plt.figure()
colors = ['blue','red']
for i in xrange(len(colors)):
px = X_r2[:, 0][y == i]
    py = X_r2[:, 1][y == i]
plt.scatter(px, py, c=colors[i])
plt.legend(target_names)
plt.title('LDA')
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
plt.show()
for c, i, target_name in zip("rb", [0, 1], target_names):
plt.scatter(X_r[y == i, 0], X_r[y == i, 1], c=c, label=target_name)
plt.legend()
plt.title('PCA')
plt.figure()
for c, i, target_name in zip("rb", [0, 1], target_names):
plt.scatter(X_r2[y == i, 0], X_r2[y == i, 1], c=c, label=target_name)
plt.legend()
plt.title('LDA')
plt.show()
plt.figure()
def plot_pca_scatter():
colors = ['blue','red']
for i in xrange(len(colors)):
px = X_pca[:, 0][y == i]
py = X_pca[:, 1][y == i]
plt.scatter(px, py, c=colors[i])
plt.legend(target_names)
plt.xlabel('First Principal Component')
plt.ylabel('Second Principal Component')
from sklearn.decomposition import PCA
estimator = PCA(n_components=2)
X_pca = estimator.fit_transform(X.values)
plot_pca_scatter() # Note that we only plot the first and second principal component
# NOTE: this snippet assumes `images` (e.g., component vectors to display), `n_row`,
# and `n_col` were defined in an earlier cell; it will not run on its own.
plt.figure(figsize=(2. * n_col, 2.26 * n_row))
for i, comp in enumerate(images):
plt.subplot(n_row, n_col, i + 1)
plt.imshow(comp.reshape((8, 8)), interpolation='nearest')
plt.text(0, -1, str(i + 1) + '-component')
plt.xticks(())
plt.yticks(())
Explanation: PCA Decomposition
End of explanation
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.grid_search import GridSearchCV
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest
# This dataset is way to high-dimensional. Better do PCA:
pca = PCA(n_components=2)
# Maybe some original features where good, too?
selection = SelectKBest(k=1)
# Build estimator from PCA and Univariate selection:
combined_features = FeatureUnion([("pca", pca), ("univ_select", selection)])
# Use combined features to transform dataset:
X_features = combined_features.fit(X, y).transform(X)
dt = DecisionTreeClassifier(min_samples_split=1,max_depth=5,min_samples_leaf=5,random_state=99)
# Do grid search over k, n_components and max_depth:
pipeline = Pipeline([("features", combined_features), ("dt", dt)])
param_grid = dict(features__pca__n_components=[1, 2, 3],
features__univ_select__k=[1, 2],
dt__max_depth=[3, 5, 7])
grid_search = GridSearchCV(pipeline, param_grid=param_grid, verbose=10)
grid_search.fit(X, y)
print(grid_search.best_estimator_)
print(grid_search.best_score_)
Explanation: Optimization
Concatenating multiple feature extraction methods
End of explanation
import numpy as np
np.random.seed(0)
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.calibration import calibration_curve
# Create classifiers
lr = LogisticRegression()
gnb = GaussianNB()
knn = KNeighborsClassifier(n_neighbors=25)
rfc = RandomForestClassifier(n_estimators=100)
###############################################################################
# Plot calibration plots
plt.figure(figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(gnb, 'Naive Bayes'),
(knn, 'K Neighbors Classifier'),
(rfc, 'Random Forest')]:
clf.fit(X_train, y_train)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s" % (name, ))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
plt.show()
Explanation: Comparison of Calibration of Classifiers
End of explanation
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (brier_score_loss, precision_score, recall_score,
f1_score)
from sklearn.calibration import CalibratedClassifierCV, calibration_curve
from sklearn.cross_validation import train_test_split
def plot_calibration_curve(est, name, fig_index):
    """Plot calibration curve for est w/o and with calibration."""
# Calibrated with isotonic calibration
isotonic = CalibratedClassifierCV(est, cv=2, method='isotonic')
# Calibrated with sigmoid calibration
sigmoid = CalibratedClassifierCV(est, cv=2, method='sigmoid')
# Logistic regression with no calibration as baseline
lr = LogisticRegression(C=1., solver='lbfgs')
fig = plt.figure(fig_index, figsize=(10, 10))
ax1 = plt.subplot2grid((3, 1), (0, 0), rowspan=2)
ax2 = plt.subplot2grid((3, 1), (2, 0))
ax1.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")
for clf, name in [(lr, 'Logistic'),
(est, name),
(isotonic, name + ' + Isotonic'),
(sigmoid, name + ' + Sigmoid')]:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
if hasattr(clf, "predict_proba"):
prob_pos = clf.predict_proba(X_test)[:, 1]
else: # use decision function
prob_pos = clf.decision_function(X_test)
prob_pos = \
(prob_pos - prob_pos.min()) / (prob_pos.max() - prob_pos.min())
clf_score = brier_score_loss(y_test, prob_pos, pos_label=y.max())
print("%s:" % name)
print("\tBrier: %1.3f" % (clf_score))
print("\tPrecision: %1.3f" % precision_score(y_test, y_pred))
print("\tRecall: %1.3f" % recall_score(y_test, y_pred))
print("\tF1: %1.3f\n" % f1_score(y_test, y_pred))
fraction_of_positives, mean_predicted_value = \
calibration_curve(y_test, prob_pos, n_bins=10)
ax1.plot(mean_predicted_value, fraction_of_positives, "s-",
label="%s (%1.3f)" % (name, clf_score))
ax2.hist(prob_pos, range=(0, 1), bins=10, label=name,
histtype="step", lw=2)
ax1.set_ylabel("Fraction of positives")
ax1.set_ylim([-0.05, 1.05])
ax1.legend(loc="lower right")
ax1.set_title('Calibration plots (reliability curve)')
ax2.set_xlabel("Mean predicted value")
ax2.set_ylabel("Count")
ax2.legend(loc="upper center", ncol=2)
plt.tight_layout()
# Plot calibration cuve for Gaussian Naive Bayes
plot_calibration_curve(GaussianNB(), "Naive Bayes", 1)
# Plot calibration cuve for Linear SVC
plot_calibration_curve(DecisionTreeClassifier(), "Decision Tree", 2)
plt.show()
Explanation: Probability Calibration curves
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
#PCA
pca = PCA(n_components=2)
X_r = pca.fit(X).transform(X)
n_features = X_r.shape[1]
C = 1.0
# Create different classifiers. The logistic regression cannot do
# multiclass out of the box.
classifiers = {'L1 logistic': LogisticRegression(C=C, penalty='l1'),
'L2 logistic': LogisticRegression(C=C, penalty='l2'),
'Decision Tree': DecisionTreeClassifier(max_depth=5,min_samples_leaf=5,min_samples_split=1,random_state=99),
'K Nearest Neighbors': KNeighborsClassifier(n_neighbors=3),
'Random Forest' : RandomForestClassifier(max_depth=25,min_samples_leaf=2,
min_samples_split=10,n_estimators=100,n_jobs=-1)
}
n_classifiers = len(classifiers)
plt.figure(figsize=(2 * 2, n_classifiers * 2))
plt.subplots_adjust(bottom=.2, top=.95)
xx = np.linspace(3, 9, 100)
yy = np.linspace(1, 5, 100).T
xx, yy = np.meshgrid(xx, yy)
Xfull = np.c_[xx.ravel(), yy.ravel()]
for index, (name, classifier) in enumerate(classifiers.items()):
classifier.fit(X_r, y)
y_pred = classifier.predict(X_r)
classif_rate = np.mean(y_pred.ravel() == y.ravel()) * 100
print("classif_rate for %s : %f " % (name, classif_rate))
    # View probabilities
probas = classifier.predict_proba(Xfull)
n_classes = np.unique(y_pred).size
for k in range(n_classes):
plt.subplot(n_classifiers, n_classes, index * n_classes + k + 1)
plt.title("Class %d" % k)
if k == 0:
plt.ylabel(name)
imshow_handle = plt.imshow(probas[:, k].reshape((100, 100)),
extent=(3, 9, 1, 5), origin='lower')
plt.xticks(())
plt.yticks(())
idx = (y_pred == k)
if idx.any():
plt.scatter(X_r[idx, 0], X_r[idx, 1], marker='o', c='k')
ax = plt.axes([0.15, 0.04, 0.7, 0.05])
plt.title("Probability")
plt.colorbar(imshow_handle, cax=ax, orientation='horizontal')
plt.show()
Explanation: Plot classification probability
End of explanation
import matplotlib.pyplot as plt
from sklearn.cross_validation import StratifiedKFold
from sklearn.feature_selection import RFECV
from sklearn.datasets import make_classification
# Create the RFE object and compute a cross-validated score.
rf = RandomForestClassifier(max_depth=25,min_samples_leaf=2,min_samples_split=10,n_estimators=100,n_jobs=-1)
# The "accuracy" scoring is proportional to the number of correct classifications
rfecv = RFECV(estimator=rf, step=1, cv=StratifiedKFold(y, 2),
scoring='accuracy')
rfecv.fit(X, y)
print("Optimal number of features : %d" % rfecv.n_features_)
# Plot number of features VS. cross-validation scores
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Cross validation score (nb of correct classifications)")
plt.plot(range(1, len(rfecv.grid_scores_) + 1), rfecv.grid_scores_)
plt.show()
Explanation: Recursive feature elimination with cross-validation
End of explanation |
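As an added usage note (not from the original notebook), the fitted RFECV object also reports which features were kept; the sketch below assumes X is the pandas DataFrame used above.
# Boolean mask / ranking of the features selected by RFECV (X assumed to be a DataFrame)
selected_columns = X.columns[rfecv.support_]
print(selected_columns.tolist())
print(rfecv.ranking_)  # rank 1 marks the retained features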
9,061 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Quiz 1 - Number of rainy days
Step3: count(*)
0 10
Quiz 2 - Temp on Foggy and Nonfoggy Days
Step5: fog max(maxtempi)
0 0 86
1 1 81
Quiz 3 - Mean Temp on Weekends
Step7: More about SQL's CAST function
Step8: Quiz 5 - Fixing Turnstile Data
Step9: updated_turnstile_110528.txt
A002,R051,02-00-00,05-21-11,00
Step10: C/A,UNIT,SCP,DATEn,TIMEn,DESCn,ENTRIESn,EXITSn
A002,R051,02-00-00,05-21-11,00
Step11: More detail
Step12: 9 - Get Hourly Exits
Step13: Unnamed
Step14: Unnamed | Python Code:
import pandas
import pandasql
def num_rainy_days(filename):
'''
This function should run a SQL query on a dataframe of
weather data.
The SQL query should return:
- one column and
- one row - a count of the `number of days` in the dataframe where
the rain column is equal to 1 (i.e., the number of days it
rained).
The dataframe will be titled 'weather_data'.
You'll need to provide the SQL query.
You might find SQL's count function useful for this exercise.
You can read more about it here:
https://dev.mysql.com/doc/refman/5.1/en/counting-rows.html
You might also find that interpreting numbers as integers or
floats may not work initially.
In order to get around this issue, it may be useful to cast
these numbers as integers. This can be done by writing cast(column as integer). So for example, if we wanted
to cast the maxtempi column as an integer, we would actually write
something like where cast(maxtempi as integer) = 76, as opposed to
simply where maxtempi = 76.
You can see the weather data that we are passing in below:
https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/weather_underground.csv
'''
weather_data = pandas.read_csv(filename)
    q = """
    SELECT COUNT(*) FROM weather_data WHERE rain = 1;
    """
#Execute your SQL command against the pandas frame
rainy_days = pandasql.sqldf(q.lower(), locals())
return rainy_days
Explanation: Quiz 1 - Number of rainy days
End of explanation
import pandas
import pandasql
def max_temp_aggregate_by_fog(filename):
'''
This function should run a SQL query on a dataframe of
weather data. The SQL query should return two columns and
two rows - whether it was foggy or not (0 or 1) and the max
maxtempi for that fog value (i.e., the maximum max temperature
for both foggy and non-foggy days). The dataframe will be
titled 'weather_data'. You'll need to provide the SQL query.
You might also find that interpreting numbers as integers or floats may not
work initially. In order to get around this issue, it may be useful to cast
these numbers as integers. This can be done by writing cast(column as integer).
So for example, if we wanted to cast the maxtempi column as an integer, we would actually
write something like where cast(maxtempi as integer) = 76, as opposed to simply
where maxtempi = 76.
You can see the weather data that we are passing in below:
https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/weather_underground.csv
'''
weather_data = pandas.read_csv(filename)
    q = """
    SELECT fog, MAX(maxtempi)
    FROM weather_data
    GROUP BY fog;
    """
#Execute your SQL command against the pandas frame
foggy_days = pandasql.sqldf(q.lower(), locals())
return foggy_days
Explanation: count(*)
0 10
Quiz 2 - Temp on Foggy and Nonfoggy Days
End of explanation
import pandas
import pandasql
def avg_weekend_temperature(filename):
'''
This function should run a SQL query on a dataframe of
weather data. The SQL query should return one column and
one row - the average meantempi on days that are a Saturday
or Sunday (i.e., the the average mean temperature on weekends).
The dataframe will be titled 'weather_data' and you can access
the date in the dataframe via the 'date' column.
You'll need to provide the SQL query.
You might also find that interpreting numbers as integers or floats may not
work initially. In order to get around this issue, it may be useful to cast
these numbers as integers. This can be done by writing cast(column as integer).
So for example, if we wanted to cast the maxtempi column as an integer, we would actually
write something like where cast(maxtempi as integer) = 76, as opposed to simply
where maxtempi = 76.
Also, you can convert dates to days of the week via the 'strftime' keyword in SQL.
For example, cast (strftime('%w', date) as integer) will return 0 if the date
is a Sunday or 6 if the date is a Saturday.
You can see the weather data that we are passing in below:
https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/weather_underground.csv
'''
weather_data = pandas.read_csv(filename)
    q = """
    SELECT AVG(CAST(meantempi AS int))
    FROM weather_data
    WHERE CAST(strftime('%w', date) AS int) = 0 or CAST(strftime('%w', date) AS int) = 6;
    """
#Execute your SQL command against the pandas frame
mean_temp_weekends = pandasql.sqldf(q.lower(), locals())
return mean_temp_weekends
Explanation: fog max(maxtempi)
0 0 86
1 1 81
Quiz 3 - Mean Temp on Weekends
End of explanation
import pandas
import pandasql
def avg_min_temperature(filename):
'''
This function should run a SQL query on a dataframe of
weather data. More specifically you want to find the average
minimum temperature (mintempi column of the weather dataframe) on
rainy days where the minimum temperature is greater than 55 degrees.
You might also find that interpreting numbers as integers or floats may not
work initially. In order to get around this issue, it may be useful to cast
these numbers as integers. This can be done by writing cast(column as integer).
So for example, if we wanted to cast the maxtempi column as an integer, we would actually
write something like where cast(maxtempi as integer) = 76, as opposed to simply
where maxtempi = 76.
You can see the weather data that we are passing in below:
https://s3.amazonaws.com/content.udacity-data.com/courses/ud359/weather_underground.csv
'''
weather_data = pandas.read_csv(filename)
    q = """
    SELECT AVG(CAST (mintempi AS int))
    FROM weather_data
    WHERE rain = 1 and CAST(MINTEMPI AS int) > 55;
    """
#Execute your SQL command against the pandas frame
avg_min_temp_rainy = pandasql.sqldf(q.lower(), locals())
return avg_min_temp_rainy
Explanation: More about SQL's CAST function:
1. https://docs.microsoft.com/en-us/sql/t-sql/functions/cast-and-convert-transact-sql
2. https://www.w3schools.com/sql/func_sqlserver_cast.asp
strftime function:
1. https://www.techonthenet.com/sqlite/functions/strftime.php
Quiz 4 - Mean Temp on Rainy Days
End of explanation
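To make the CAST and strftime notes above concrete, here is a small standalone sketch (an added illustration, not part of the quiz solutions); it assumes a local copy of the weather_underground.csv file used throughout these quizzes.
import pandas
import pandasql
weather_demo = pandas.read_csv('weather_underground.csv')  # assumed local file name
q_demo = """
SELECT cast(maxtempi AS integer) AS maxtemp_int,
       cast(strftime('%w', date) AS integer) AS day_of_week
FROM weather_demo
LIMIT 5;
"""
print(pandasql.sqldf(q_demo, locals()))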
import csv
def fix_turnstile_data(filenames):
'''
Filenames is a list of MTA Subway turnstile text files. A link to an example
MTA Subway turnstile text file can be seen at the URL below:
http://web.mta.info/developers/data/nyct/turnstile/turnstile_110507.txt
As you can see, there are numerous data points included in each row of the
a MTA Subway turnstile text file.
You want to write a function that will update each row in the text
file so there is only one entry per row. A few examples below:
A002,R051,02-00-00,05-28-11,00:00:00,REGULAR,003178521,001100739
A002,R051,02-00-00,05-28-11,04:00:00,REGULAR,003178541,001100746
A002,R051,02-00-00,05-28-11,08:00:00,REGULAR,003178559,001100775
Write the updates to a different text file in the format of "updated_" + filename.
For example:
1) if you read in a text file called "turnstile_110521.txt"
2) you should write the updated data to "updated_turnstile_110521.txt"
The order of the fields should be preserved. Remember to read through the
Instructor Notes below for more details on the task.
In addition, here is a CSV reader/writer introductory tutorial:
http://goo.gl/HBbvyy
You can see a sample of the turnstile text file that's passed into this function
and the the corresponding updated file by downloading these files from the resources:
Sample input file: turnstile_110528.txt
Sample updated file: solution_turnstile_110528.txt
'''
for name in filenames:
# create file input object `f_in` to work with "name" file.
# create file output object `f_out` to write to the new "updated_name" file.
with open(name, 'r') as f_in, open(''.join(['updated_',name]), 'w') as f_out:
# creater csv readers and writers based on our file objects
reader_in = csv.reader(f_in)
writer_out = csv.writer(f_out)
# Our reader in allows us to go through each line (row of the input data)
# and access its data with the standard Python syntax.
for row in reader_in:
for i in range(3, len(row), 5):
writer_out.writerow(row[0:3] + row[i:i+5])
return None
Explanation: Quiz 5 - Fixing Turnstile Data
End of explanation
def create_master_turnstile_file(filenames, output_file):
'''
Write a function that
- takes the files in the list filenames, which all have the
columns 'C/A, UNIT, SCP, DATEn, TIMEn, DESCn, ENTRIESn, EXITSn', and
- consolidates them into one file located at output_file.
- There should be ONE row with the column headers, located at the top of the file.
- The input files do not have column header rows of their own.
For example, if file_1 has:
line 1 ...
line 2 ...
and another file, file_2 has:
line 3 ...
line 4 ...
line 5 ...
We need to combine file_1 and file_2 into a master_file like below:
'C/A, UNIT, SCP, DATEn, TIMEn, DESCn, ENTRIESn, EXITSn'
line 1 ...
line 2 ...
line 3 ...
line 4 ...
line 5 ...
'''
with open(output_file, 'w') as master_file:
master_file.write('C/A,UNIT,SCP,DATEn,TIMEn,DESCn,ENTRIESn,EXITSn\n')
for filename in filenames:
with open(filename, 'r') as content:
# Write everything read from `content` ( which is rows) to file `master_file`
master_file.write(content.read())
return None
Explanation: updated_turnstile_110528.txt
A002,R051,02-00-00,05-21-11,00:00:00,REGULAR,003169391,001097585
A002,R051,02-00-00,05-21-11,04:00:00,REGULAR,003169415,001097588
A002,R051,02-00-00,05-21-11,08:00:00,REGULAR,003169431,001097607
A002,R051,02-00-00,05-21-11,12:00:00,REGULAR,003169506,001097686
A002,R051,02-00-00,05-21-11,16:00:00,REGULAR,003169693,001097734
...
Quiz 6 - Combining Turnstile Data
End of explanation
import pandas
def filter_by_regular(filename):
'''
This function should read the csv file located at filename into a pandas dataframe,
and filter the dataframe to only rows where the 'DESCn' column has the value 'REGULAR'.
For example, if the pandas dataframe is as follows:
,C/A,UNIT,SCP,DATEn,TIMEn,DESCn,ENTRIESn,EXITSn
0,A002,R051,02-00-00,05-01-11,00:00:00,REGULAR,3144312,1088151
1,A002,R051,02-00-00,05-01-11,04:00:00,DOOR,3144335,1088159
2,A002,R051,02-00-00,05-01-11,08:00:00,REGULAR,3144353,1088177
3,A002,R051,02-00-00,05-01-11,12:00:00,DOOR,3144424,1088231
The dataframe will look like below after filtering to only rows where DESCn column
has the value 'REGULAR':
0,A002,R051,02-00-00,05-01-11,00:00:00,REGULAR,3144312,1088151
2,A002,R051,02-00-00,05-01-11,08:00:00,REGULAR,3144353,1088177
'''
# Use pandas's read_csv function to read the csv file located at filename
turnstile_data = pandas.read_csv(filename)
# Use pandas's loc() function
turnstile_data = turnstile_data.loc[turnstile_data['DESCn'] == 'REGULAR']
return turnstile_data
Explanation: C/A,UNIT,SCP,DATEn,TIMEn,DESCn,ENTRIESn,EXITSn
A002,R051,02-00-00,05-21-11,00:00:00,REGULAR,003169391,001097585
A002,R051,02-00-00,05-21-11,04:00:00,REGULAR,003169415,001097588
A002,R051,02-00-00,05-21-11,08:00:00,REGULAR,003169431,001097607
A002,R051,02-00-00,05-21-11,12:00:00,REGULAR,003169506,001097686
...
Quiz 7 - Filtering Irregular Data
End of explanation
import pandas
def get_hourly_entries(df):
'''
The data in the MTA Subway Turnstile data reports on the cumulative
number of entries and exits per row. Assume that you have a dataframe
called df that contains only the rows for a particular turnstile machine
(i.e., unique SCP, C/A, and UNIT). This function should change
these cumulative entry numbers to a count of entries since the last reading
(i.e., entries since the last row in the dataframe).
More specifically, you want to do two things:
1) Create a new column called ENTRIESn_hourly
2) Assign to the column the difference between ENTRIESn of the current row
and the previous row. If there is any NaN, fill/replace it with 1.
You may find the pandas functions shift() and fillna() to be helpful in this exercise.
Examples of what your dataframe should look like at the end of this exercise:
C/A UNIT SCP DATEn TIMEn DESCn ENTRIESn EXITSn ENTRIESn_hourly
0 A002 R051 02-00-00 05-01-11 00:00:00 REGULAR 3144312 1088151 1
1 A002 R051 02-00-00 05-01-11 04:00:00 REGULAR 3144335 1088159 23
2 A002 R051 02-00-00 05-01-11 08:00:00 REGULAR 3144353 1088177 18
3 A002 R051 02-00-00 05-01-11 12:00:00 REGULAR 3144424 1088231 71
4 A002 R051 02-00-00 05-01-11 16:00:00 REGULAR 3144594 1088275 170
5 A002 R051 02-00-00 05-01-11 20:00:00 REGULAR 3144808 1088317 214
6 A002 R051 02-00-00 05-02-11 00:00:00 REGULAR 3144895 1088328 87
7 A002 R051 02-00-00 05-02-11 04:00:00 REGULAR 3144905 1088331 10
8 A002 R051 02-00-00 05-02-11 08:00:00 REGULAR 3144941 1088420 36
9 A002 R051 02-00-00 05-02-11 12:00:00 REGULAR 3145094 1088753 153
10 A002 R051 02-00-00 05-02-11 16:00:00 REGULAR 3145337 1088823 243
...
...
'''
# Actually you should use diff() function rather than shift(),
# shift() will return the previous value, not the difference between two value.
df['ENTRIESn_hourly'] = df['ENTRIESn'].diff().fillna(1)
return df
Explanation: More detail:
loc() function which is purely label-location based indexer for selection by label
- https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html
- More about selection by label:
- https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-label
Quiz 8 - Get Hourly Entries
End of explanation
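A tiny added illustration of the label/boolean .loc selection described above, using a made-up two-row frame (not data from the exercises):
import pandas
demo = pandas.DataFrame({'DESCn': ['REGULAR', 'DOOR'], 'ENTRIESn': [3144312, 3144335]})
# keep only the rows whose DESCn is 'REGULAR', exactly what filter_by_regular does
print(demo.loc[demo['DESCn'] == 'REGULAR'])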
import pandas
def get_hourly_exits(df):
'''
The data in the MTA Subway Turnstile data reports on the cumulative
number of entries and exits per row. Assume that you have a dataframe
called df that contains only the rows for a particular turnstile machine
(i.e., unique SCP, C/A, and UNIT). This function should change
these cumulative exit numbers to a count of exits since the last reading
(i.e., exits since the last row in the dataframe).
More specifically, you want to do two things:
1) Create a new column called EXITSn_hourly
2) Assign to the column the difference between EXITSn of the current row
and the previous row. If there is any NaN, fill/replace it with 0.
You may find the pandas functions shift() and fillna() to be helpful in this exercise.
Example dataframe below:
Unnamed: 0 C/A UNIT SCP DATEn TIMEn DESCn ENTRIESn EXITSn ENTRIESn_hourly EXITSn_hourly
0 0 A002 R051 02-00-00 05-01-11 00:00:00 REGULAR 3144312 1088151 0 0
1 1 A002 R051 02-00-00 05-01-11 04:00:00 REGULAR 3144335 1088159 23 8
2 2 A002 R051 02-00-00 05-01-11 08:00:00 REGULAR 3144353 1088177 18 18
3 3 A002 R051 02-00-00 05-01-11 12:00:00 REGULAR 3144424 1088231 71 54
4 4 A002 R051 02-00-00 05-01-11 16:00:00 REGULAR 3144594 1088275 170 44
5 5 A002 R051 02-00-00 05-01-11 20:00:00 REGULAR 3144808 1088317 214 42
6 6 A002 R051 02-00-00 05-02-11 00:00:00 REGULAR 3144895 1088328 87 11
7 7 A002 R051 02-00-00 05-02-11 04:00:00 REGULAR 3144905 1088331 10 3
8 8 A002 R051 02-00-00 05-02-11 08:00:00 REGULAR 3144941 1088420 36 89
9 9 A002 R051 02-00-00 05-02-11 12:00:00 REGULAR 3145094 1088753 153 333
'''
df['EXITSn_hourly'] = df['EXITSn'].diff().fillna(0)
return df
Explanation: 9 - Get Hourly Exits
End of explanation
import pandas
def time_to_hour(time):
'''
Given an input variable time that represents time in the format of:
"00:00:00" (hour:minutes:seconds)
Write a function to extract the hour part from the input variable time
and return it as an integer. For example:
1) if hour is 00, your code should return 0
2) if hour is 01, your code should return 1
3) if hour is 21, your code should return 21
Please return hour as an integer.
'''
    # string slicing: time[:2] takes the first two characters (the hour part)
    hour = int(time[:2])
return hour
Explanation: Unnamed: 0 C/A UNIT SCP DATEn TIMEn DESCn ENTRIESn EXITSn ENTRIESn_hourly EXITSn_hourly
0 0 A002 R051 02-00-00 05-01-11 00:00:00 REGULAR 3144312 1088151 0.0 0.0
1 1 A002 R051 02-00-00 05-01-11 04:00:00 REGULAR 3144335 1088159 23.0 8.0
2 2 A002 R051 02-00-00 05-01-11 08:00:00 REGULAR 3144353 1088177 18.0 18.0
3 3 A002 R051 02-00-00 05-01-11 12:00:00 REGULAR 3144424 1088231 71.0 54.0
4 4 A002 R051 02-00-00 05-01-11 16:00:00 REGULAR 3144594 1088275 170.0 44.0
...
Quiz 10 - Time to Hour
End of explanation
import datetime
def reformat_subway_dates(date):
'''
The dates in our subway data are formatted in the format month-day-year.
The dates in our weather underground data are formatted year-month-day.
In order to join these two data sets together, we'll want the dates formatted
the same way. Write a function that takes as its input a date in the MTA Subway
data format, and returns a date in the weather underground format.
Hint:
There are a couple of useful functions in the datetime library that will
help on this assignment, called strptime and strftime.
More info can be seen here and further in the documentation section:
http://docs.python.org/2/library/datetime.html#datetime.datetime.strptime
'''
# Notice that the year in the MTA Subway format is year without century (99, 00, 01)
date_formatted = datetime.datetime.strptime(date, '%m-%d-%y').strftime('%Y-%m-%d')
return date_formatted
Explanation: Unnamed: 0 UNIT DATEn TIMEn DESCn ENTRIESn_hourly EXITSn_hourly Hour
0 0 R022 05-01-11 00:00:00 REGULAR 0.0 0.0 0
1 1 R022 05-01-11 04:00:00 REGULAR 562.0 173.0 4
2 2 R022 05-01-11 08:00:00 REGULAR 160.0 194.0 8
3 3 R022 05-01-11 12:00:00 REGULAR 820.0 1025.0 12
4 4 R022 05-01-11 16:00:00 REGULAR 2385.0 1954.0 16
5 5 R022 05-01-11 20:00:00 REGULAR 3631.0 2010.0 20
...
Quiz 11 - Reformat Subway Dates
End of explanation |
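A quick added usage check of the function defined above:
print(reformat_subway_dates('05-21-11'))  # expected output: '2011-05-21'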
9,062 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have a csv file which looks like | Problem:
from sklearn.cluster import KMeans
df = load_data()
kmeans = KMeans(n_clusters=2)
labels = kmeans.fit_predict(df[['mse']]) |
9,063 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Load and pre-process data
Step1: Impute PE
First, I will impute PE by replacing missing values with the mean PE. Second, I will impute PE using a random forest regressor. I will compare the results by looking at the average RMSE's by performing the method across all wells with PE data (leaving each well out as a test set).
Impute PE through mean substitution
To evaluate - I will build a model for each well (the data for that well being the test data). Then I'll compute the RMSE for each model where we know the outcomes (the actual PE) to give us an idea of how good the model is.
Step2: Impute PE through random forest regression
Using mean substitution as a method for PE imputing has an expected RMSE of just over 1.00. Let's see if I can do better using a random forest regressor.
Step3: This approach gives us an expected RMSE of about 0.575 - now let's impute the missing data using this approach!
Step4: Now we have a full data set with no missing values!
Feature engineering
I'm going to now calculate the average value of each log feature on a by facies basis. For instance, I will calculate the distance of an observation's GR reading from the MS GR average. The idea being that true MS's will be close to that average! I will be squaring the observation deviations from these averages to make it more of a data-distance proxy.
Step5: I proceed to run Paolo Bestagini's routines to include a small window of values to account for the spatial component in the log analysis, as well as gradient information with respect to depth.
Step6: Now I'll apply the Paolo routines to the data - augmenting the features!
Step7: Tuning and Cross-Validation
Step8: Apply tuning to search for optimal hyperparameters.
Step9: Through tuning we observe optimal hyperparameters to be 250 (number of estimators), 2 (minimum number of samples per leaf), 75 (maximum number of features to consider when looking for the optimal split), and 5 (minimum number of samples required to split a node). These values yielded an average F1-score of 0.584 through cross-validation.
Prediction
Before applying out algorithm to the test data, I must apply the feature engineering to the test data. This involves calculating the data deviations from the facies averages and applying Paolo Bestagini's routines.
Step10: Now I will apply Paolo Bestagini's routines. | Python Code:
from sklearn import preprocessing
filename = '../facies_vectors.csv'
train = pd.read_csv(filename)
# encode well name and formation features
le = preprocessing.LabelEncoder()
train["Well Name"] = le.fit_transform(train["Well Name"])
train["Formation"] = le.fit_transform(train["Formation"])
data_loaded = train.copy()
# cleanup memory
del train
data_loaded
Explanation: Load and pre-process data
End of explanation
from sklearn import preprocessing
data = data_loaded.copy()
impPE_features = ['Facies', 'Formation', 'Well Name', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']
rmse = []
for w in data["Well Name"].unique():
wTrain = data[(data["PE"].notnull()) & (data["Well Name"] != w)]
wTest = data[(data["PE"].notnull()) & (data["Well Name"] == w)]
if wTest.shape[0] > 0:
yTest = wTest["PE"].values
meanPE = wTrain["PE"].mean()
wTest["predictedPE"] = meanPE
rmse.append((((yTest - wTest["predictedPE"])**2).mean())**0.5)
print(rmse)
print("Average RMSE:" + str(sum(rmse)/len(rmse)))
# cleanup memory
del data
Explanation: Impute PE
First, I will impute PE by replacing missing values with the mean PE. Second, I will impute PE using a random forest regressor. I will compare the results by looking at the average RMSE's by performing the method across all wells with PE data (leaving each well out as a test set).
Impute PE through mean substitution
To evaluate - I will build a model for each well (the data for that well being the test data). Then I'll compute the RMSE for each model where we know the outcomes (the actual PE) to give us an idea of how good the model is.
End of explanation
from sklearn.ensemble import RandomForestRegressor
data = data_loaded.copy()
impPE_features = ['Facies', 'Formation', 'Well Name', 'GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'NM_M', 'RELPOS']
rf = RandomForestRegressor(max_features='sqrt', n_estimators=100, random_state=1)
rmse = []
for w in data["Well Name"].unique():
wTrain = data[(data["PE"].isnull() == False) & (data["Well Name"] != w)]
wTest = data[(data["PE"].isnull() == False) & (data["Well Name"] == w)]
if wTest.shape[0] > 0:
XTrain = wTrain[impPE_features].values
yTrain = wTrain["PE"].values
XTest = wTest[impPE_features].values
yTest = wTest["PE"].values
w_rf = rf.fit(XTrain, yTrain)
predictedPE = w_rf.predict(XTest)
rmse.append((((yTest - predictedPE)**2).mean())**0.5)
print(rmse)
print("Average RMSE:" + str(sum(rmse)/len(rmse)))
# cleanup memory
del data
Explanation: Impute PE through random forest regression
Using mean substitution as a method for PE imputing has an expected RMSE of just over 1.00. Let's see if I can do better using a random forest regressor.
End of explanation
data = data_loaded.copy()
rf_train = data[data['PE'].notnull()]
rf_test = data[data['PE'].isnull()]
xTrain = rf_train[impPE_features].values
yTrain = rf_train["PE"].values
xTest = rf_test[impPE_features].values
rf_fit = rf.fit(xTrain, yTrain)
predictedPE = rf_fit.predict(xTest)
data.loc[data["PE"].isnull(), "PE"] = predictedPE
data_imputed = data.copy()
# cleanup memory
del data
# output
data_imputed
Explanation: This approach gives us an expected RMSE of about 0.575 - now let's impute the missing data using this approach!
End of explanation
facies_labels = ['SS','CSiS','FSiS','SiSh','MS','WS','D','PS','BS']
data = data_imputed.copy()
features = ["GR", "ILD_log10", "DeltaPHI", "PHIND", "PE"]
for f in features:
facies_mean = data[f].groupby(data["Facies"]).mean()
for i in range(0, len(facies_mean)):
data[f + "_" + facies_labels[i] + "_SqDev"] = (data[f] - facies_mean.values[i])**2
data_fe = data.copy()
del data
data_fe
Explanation: Now we have a full data set with no missing values!
Feature engineering
I'm going to now calculate the average value of each log feature on a by facies basis. For instance, I will calculate the distance of an observation's GR reading from the MS GR average. The idea being that true MS's will be close to that average! I will be squaring the observation deviations from these averages to make it more of a data-distance proxy.
End of explanation
# Feature windows concatenation function
def augment_features_window(X, N_neig):
# Parameters
N_row = X.shape[0]
N_feat = X.shape[1]
# Zero padding
X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))
# Loop over windows
X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))
for r in np.arange(N_row)+N_neig:
this_row = []
for c in np.arange(-N_neig,N_neig+1):
this_row = np.hstack((this_row, X[r+c]))
X_aug[r-N_neig] = this_row
return X_aug
# Feature gradient computation function
def augment_features_gradient(X, depth):
# Compute features gradient
d_diff = np.diff(depth).reshape((-1, 1))
d_diff[d_diff==0] = 0.001
X_diff = np.diff(X, axis=0)
X_grad = X_diff / d_diff
# Compensate for last missing value
X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))
return X_grad
# Feature augmentation function
def augment_features(X, well, depth, N_neig=1):
# Augment features
X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))
for w in np.unique(well):
w_idx = np.where(well == w)[0]
X_aug_win = augment_features_window(X[w_idx, :], N_neig)
X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])
X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)
# Find padded rows
padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])
return X_aug, padded_rows
Explanation: I proceed to run Paolo Bestagini's routines to include a small window of values to account for the spatial component in the log analysis, as well as gradient information with respect to depth.
End of explanation
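A quick added sanity check of the windowing helper on a toy array (values are arbitrary and only illustrate the output shape):
import numpy as np
X_toy = np.arange(8, dtype=float).reshape(4, 2)      # 4 samples, 2 features
X_toy_aug = augment_features_window(X_toy, N_neig=1)
print(X_toy_aug.shape)  # (4, 6): each row holds the previous, current and next sample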
data = data_fe.copy()
remFeatures = ["Facies", "Well Name", "Depth"]
x = list(data)
features = [f for f in x if f not in remFeatures]
X = data[features].values
y = data["Facies"].values
# Store well labels and depths
well = data['Well Name']
depth = data['Depth'].values
X_aug, padded_rows = augment_features(X, well.values, depth)
Explanation: Now I'll apply the Paolo routines to the data - augmenting the features!
End of explanation
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
#from classification_utilities import display_cm, display_adj_cm
# 1) loops through wells - splitting data (current well held out as CV/test)
# 2) trains model (using all wells excluding current)
# 3) evaluates predictions against known values and adds f1-score to array
# 4) returns average f1-score (expected f1-score)
def cvTrain(X, y, well, params):
rf = RandomForestClassifier(max_features=params['M'], n_estimators=params['N'], criterion='entropy',
min_samples_split=params['S'], min_samples_leaf=params['L'], random_state=1)
f1 = []
for w in well.unique():
Xtrain_w = X[well.values != w]
ytrain_w = y[well.values != w]
Xtest_w = X[well.values == w]
ytest_w = y[well.values == w]
w_rf = rf.fit(Xtrain_w, ytrain_w)
predictedFacies = w_rf.predict(Xtest_w)
f1.append(f1_score(ytest_w, predictedFacies, average='micro'))
f1 = (sum(f1)/len(f1))
return f1
Explanation: Tuning and Cross-Validation
End of explanation
# parameters search grid (uncomment for full grid search - will take a long time)
N_grid = [250] #[50, 250, 500] # n_estimators
M_grid = [75] #[25, 50, 75] # max_features
S_grid = [5] #[5, 10] # min_samples_split
L_grid = [2] #[2, 3, 5] # min_samples_leaf
# build grid of hyperparameters
param_grid = []
for N in N_grid:
for M in M_grid:
for S in S_grid:
for L in L_grid:
param_grid.append({'N':N, 'M':M, 'S':S, 'L':L})
# loop through parameters and cross-validate models for each
for params in param_grid:
print(str(params) + ' Average F1-score: ' + str(cvTrain(X_aug, y, well, params)))
Explanation: Apply tuning to search for optimal hyperparameters.
End of explanation
from sklearn import preprocessing
filename = '../validation_data_nofacies.csv'
test = pd.read_csv(filename)
# encode well name and formation features
le = preprocessing.LabelEncoder()
test["Well Name"] = le.fit_transform(test["Well Name"])
test["Formation"] = le.fit_transform(test["Formation"])
test_loaded = test.copy()
facies_labels = ['SS','CSiS','FSiS','SiSh','MS','WS','D','PS','BS']
train = data_imputed.copy()
features = ["GR", "ILD_log10", "DeltaPHI", "PHIND", "PE"]
for f in features:
facies_mean = train[f].groupby(train["Facies"]).mean()
for i in range(0, len(facies_mean)):
test[f + "_" + facies_labels[i] + "_SqDev"] = (test[f] - facies_mean.values[i])**2
test_fe = test.copy()
del test
test_fe
Explanation: Through tuning we observe optimal hyperparameters to be 250 (number of estimators), 2 (minimum number of samples per leaf), 75 (maximum number of features to consider when looking for the optimal split), and 5 (minimum number of samples required to split a node). These values yielded an average F1-score of 0.584 through cross-validation.
Prediction
Before applying out algorithm to the test data, I must apply the feature engineering to the test data. This involves calculating the data deviations from the facies averages and applying Paolo Bestagini's routines.
End of explanation
test = test_fe.copy()
remFeatures = ["Well Name", "Depth"]
x = list(test)
features = [f for f in x if f not in remFeatures]
Xtest = test[features].values
# Store well labels and depths
welltest = test['Well Name']
depthtest = test['Depth'].values
Xtest_aug, test_padded_rows = augment_features(Xtest, welltest.values, depthtest)
from sklearn.ensemble import RandomForestClassifier
test = test_loaded.copy()
rf = RandomForestClassifier(max_features=75, n_estimators=250, criterion='entropy',
min_samples_split=5, min_samples_leaf=2, random_state=1)
fit = rf.fit(X_aug, y)
predictedFacies = fit.predict(Xtest_aug)
test["Facies"] = predictedFacies
test.to_csv('jpoirier011_submission001.csv')
Explanation: Now I will apply Paolo Bestagini's routines.
End of explanation |
9,064 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Learning
Assignment 5
The goal of this assignment is to train a Word2Vec skip-gram model over Text8 data.
Step2: Download the data from the source website if necessary.
Step4: Read the data into a string.
Step5: Build the dictionary and replace rare words with UNK token.
Step6: Function to generate a training batch for the skip-gram model.
Step7: Train a skip-gram model.
Step8: Problem
An alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset. | Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import collections
import math
import numpy as np
import os
import random
import tensorflow as tf
import zipfile
from matplotlib import pylab
from six.moves import range
from six.moves.urllib.request import urlretrieve
from sklearn.manifold import TSNE
Explanation: Deep Learning
Assignment 5
The goal of this assignment is to train a Word2Vec skip-gram model over Text8 data.
End of explanation
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
  """Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
Explanation: Download the data from the source website if necessary.
End of explanation
def read_data(filename):
  """Extract the first file enclosed in a zip file as a list of words"""
with zipfile.ZipFile(filename) as f:
data = tf.compat.as_str(f.read(f.namelist()[0])).split()
return data
words = read_data(filename)
print('Data size %d' % len(words))
Explanation: Read the data into a string.
End of explanation
vocabulary_size = 50000
def build_dataset(words):
count = [['UNK', -1]]
count.extend(collections.Counter(words).most_common(vocabulary_size - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
if word in dictionary:
index_into_count = dictionary[word]
else:
index_into_count = 0 # dictionary['UNK']
unk_count = unk_count + 1
data.append(index_into_count)
count[0][1] = unk_count
reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reverse_dictionary
data, count, dictionary, reverse_dictionary = build_dataset(words)
print('Most common words (+UNK)', count[:5])
print('Sample data', data[:10])
del words # Hint to reduce memory.
Explanation: Build the dictionary and replace rare words with UNK token.
End of explanation
data_index = 0
def generate_batch(batch_size, num_skips, skip_window):
global data_index
assert batch_size % num_skips == 0
assert num_skips <= 2 * skip_window
batch = np.ndarray(shape=(batch_size), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2 * skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size // num_skips):
target = skip_window # target label at the center of the buffer
targets_to_avoid = [ skip_window ]
for j in range(num_skips):
while target in targets_to_avoid:
target = random.randint(0, span - 1)
targets_to_avoid.append(target)
batch[i * num_skips + j] = buffer[skip_window]
labels[i * num_skips + j, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:8]])
for num_skips, skip_window in [(2, 1), (4, 2)]:
data_index = 0
batch, labels = generate_batch(batch_size=8, num_skips=num_skips, skip_window=skip_window)
print('\nwith num_skips = %d and skip_window = %d:' % (num_skips, skip_window))
print(' batch:', [reverse_dictionary[bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
Explanation: Function to generate a training batch for the skip-gram model.
End of explanation
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
train_dataset = tf.placeholder(tf.int32, shape=[batch_size])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
embed = tf.nn.embedding_lookup(embeddings, train_dataset)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
# Note: The optimizer will optimize the softmax_weights AND the embeddings.
# This is because the embeddings are defined as a variable quantity and the
# optimizer's `minimize` method will by default modify all variable quantities
# that contribute to the tensor it is passed.
# See docs on `tf.train.Optimizer.minimize()` for more details.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, num_skips, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in range(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
num_points = 400
tsne = TSNE(perplexity=30, n_components=2, init='pca', n_iter=5000)
two_d_embeddings = tsne.fit_transform(final_embeddings[1:num_points+1, :])
# why ruclidean distance here, and not cosine?
def plot(embeddings, labels):
assert embeddings.shape[0] >= len(labels), 'More labels than embeddings'
pylab.figure(figsize=(15,15)) # in inches
for i, label in enumerate(labels):
x, y = embeddings[i,:]
pylab.scatter(x, y)
pylab.annotate(label, xy=(x, y), xytext=(5, 2), textcoords='offset points',
ha='right', va='bottom')
pylab.show()
words = [reverse_dictionary[i] for i in range(1, num_points+1)]
plot(two_d_embeddings, words)
Explanation: Train a skip-gram model.
End of explanation
data_index = 0
def generate_batch(batch_size, skip_window):
assert skip_window == 1 # Handling of this value is hard-coded here.
global data_index
batch = np.ndarray(shape=(batch_size, 2), dtype=np.int32)
labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
span = 2*skip_window + 1 # [ skip_window target skip_window ]
buffer = collections.deque(maxlen=span)
for _ in range(span):
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
for i in range(batch_size):
target = skip_window # target label at the center of the buffer
batch[i, 0] = buffer[skip_window-1]
batch[i, 1] = buffer[skip_window+1]
labels[i, 0] = buffer[target]
buffer.append(data[data_index])
data_index = (data_index + 1) % len(data)
return batch, labels
print('data:', [reverse_dictionary[di] for di in data[:8]])
for skip_window in [1]:
data_index = 0
batch, labels = generate_batch(batch_size=8, skip_window=skip_window)
print('\nwith skip_window = %d:' % skip_window)
print(' batch:', [[reverse_dictionary[m] for m in bi] for bi in batch])
print(' labels:', [reverse_dictionary[li] for li in labels.reshape(8)])
batch_size = 128
embedding_size = 128 # Dimension of the embedding vector.
skip_window = 1 # How many words to consider left and right.
num_skips = 2 # How many times to reuse an input to generate a label.
# We pick a random validation set to sample nearest neighbors. here we limit the
# validation samples to the words that have a low numeric ID, which by
# construction are also the most frequent.
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100 # Only pick dev samples in the head of the distribution.
valid_examples = np.array(random.sample(range(valid_window), valid_size))
num_sampled = 64 # Number of negative examples to sample.
graph = tf.Graph()
with graph.as_default(), tf.device('/cpu:0'):
# Input data.
span = 2*skip_window + 1 # [ skip_window target skip_window ]
train_dataset = tf.placeholder(tf.int32, shape=[batch_size, (span-1)])
train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# Variables.
embeddings = tf.Variable(
tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
softmax_weights = tf.Variable(
tf.truncated_normal([vocabulary_size, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
softmax_biases = tf.Variable(tf.zeros([vocabulary_size]))
# Model.
# Look up embeddings for inputs.
assert skip_window == 1 # Handling of this value is hard-coded here.
embed0 = tf.nn.embedding_lookup(embeddings, train_dataset[:,0])
embed1 = tf.nn.embedding_lookup(embeddings, train_dataset[:,1])
embed = (embed0 + embed1)/(span-1)
# Compute the softmax loss, using a sample of the negative labels each time.
loss = tf.reduce_mean(
tf.nn.sampled_softmax_loss(softmax_weights, softmax_biases, embed,
train_labels, num_sampled, vocabulary_size))
# Optimizer.
optimizer = tf.train.AdagradOptimizer(1.0).minimize(loss)
# Compute the similarity between minibatch examples and all embeddings.
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
normalized_embeddings = embeddings / norm
valid_embeddings = tf.nn.embedding_lookup(
normalized_embeddings, valid_dataset)
similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
num_steps = 100001
with tf.Session(graph=graph) as session:
tf.initialize_all_variables().run()
print('Initialized')
average_loss = 0
for step in range(num_steps):
batch_data, batch_labels = generate_batch(
batch_size, skip_window)
feed_dict = {train_dataset : batch_data, train_labels : batch_labels}
_, l = session.run([optimizer, loss], feed_dict=feed_dict)
average_loss += l
if step % 2000 == 0:
if step > 0:
average_loss = average_loss / 2000
# The average loss is an estimate of the loss over the last 2000 batches.
print('Average loss at step %d: %f' % (step, average_loss))
average_loss = 0
# note that this is expensive (~20% slowdown if computed every 500 steps)
if step % 10000 == 0:
sim = similarity.eval()
for i in xrange(valid_size):
valid_word = reverse_dictionary[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in xrange(top_k):
close_word = reverse_dictionary[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
final_embeddings = normalized_embeddings.eval()
Explanation: Problem
An alternative to skip-gram is another Word2Vec model called CBOW (Continuous Bag of Words). In the CBOW model, instead of predicting a context word from a word vector, you predict a word from the sum of all the word vectors in its context. Implement and evaluate a CBOW model trained on the text8 dataset.
End of explanation |
9,065 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Basic plot example
Step1: $$c = \sqrt{a^2 + b^2}$$
$$
\begin{align}
c &= \sqrt{a^2 + b^2} \\
&= \sqrt{4+16} \\
\end{align}
$$
$$
\begin{align}
f(x) &= x^2 \\
&= {{x[1]}}
\end{align}
$$
This text contains a value like $x[1]$ is {{x[1]}}
Widget example
Step6: Here we will set the fields to one of several values so that we can see pre-configured examples. | Python Code:
from numpy import linspace  # linspace is not part of matplotlib.pyplot, so import it explicitly
from matplotlib.pyplot import figure, plot, xlabel, ylabel, title, show
x=linspace(0,5,10)
y=x**2
figure()
plot(x,y,'r')
xlabel('x')
ylabel('y')
title('title')
show()
Explanation: Basic plot example
End of explanation
import ipywidgets as widgets  # the widgets below come from ipywidgets
from IPython.display import display
text = widgets.FloatText()
floatText = widgets.FloatText(description='MyField',min=-5,max=5)
floatSlider = widgets.FloatSlider(description='MyField',min=-5,max=5)
#https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Basics.html
float_link = widgets.jslink((floatText, 'value'), (floatSlider, 'value'))
Explanation: $$c = \sqrt{a^2 + b^2}$$
$$
\begin{align}
c &= \sqrt{a^2 + b^2} \\
&= \sqrt{4+16} \\
\end{align}
$$
$$
\begin{align}
f(x) &= x^2 \\
&= {{x[1]}}
\end{align}
$$
This text contains a value like $x[1]$ is {{x[1]}}
Widget example
End of explanation
floatSlider.value=1
bSatiationPreset=widgets.Button(
description='Salt Satiation',
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me'
)
bDeprivationPreset=widgets.Button(
description='Salt Deprivation2',
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me'
)
def bDeprivationPreset_on_click(b):
floatSlider.value=0.5
def bSatiationPreset_on_click(b):
floatSlider.value=0.71717
bDeprivationPreset.on_click(bDeprivationPreset_on_click)
bSatiationPreset.on_click(bSatiationPreset_on_click)
#floatSlider.observe(bDeprivationPreset_on_click,names='value')
myw=widgets.HTMLMath('$a+b=$'+str(floatSlider.value))
display(floatText,floatSlider)
display(bSatiationPreset)
display(bDeprivationPreset)
display(myw)
txtArea = widgets.Text()
display(txtArea)
myb= widgets.Button(description="234")
def add_text(b):
txtArea.value = txtArea.value + txtArea.value
myb.on_click(add_text)
display(myb)
from IPython.display import display, Markdown, Latex
#display(Markdown('*some markdown* $\phi$'))
# If you particularly want to display maths, this is more direct:
display(Latex(r'\phi=' + str(round(floatText.value,2))))  # raw string keeps the LaTeX backslash intact
display(Latex(r'$\begin{align}\phi=&' + str(round(floatText.value,2)) + r" \\ =& \alpha\end{align}$"))
display(Markdown('$$'))
display(Markdown('\\begin{align}'))
display( '\phi=' + str(round(floatText.value,2)))
display(Markdown('\\end{align}'))
display(Markdown('$$'))
from IPython.display import Markdown
one = 1
two = 2
myanswer = one + two**2
Markdown("# Title")
Markdown(
"""
# Math
## Addition
Here is a simple addition example: ${one} + {two}^2 = {myanswer}$
Here is a multi-line equation
$$
\\begin{{align}}
\\begin{{split}}
c = & a + b \\\\
= & {one} + {two}^2 \\\\
= & {myanswer}
\\end{{split}}
\\end{{align}}
$$
""".format(one=one, two=two, myanswer=myanswer))
a = widgets.IntSlider(description='a')
b = widgets.IntSlider(description='b')
c = widgets.IntSlider(description='c')
def g(a, b, c):
print('${}*{}*{}={}$'.format(a, b, c, a*b*c))
def h(a, b, c):
print(Markdown('${}\\times{}\\times{}={}$'.format(a, b, c, a*b*c)))
def f(a, b, c):
print('{}*{}*{}={}'.format(a, b, c, a*b*c))
def f2(a, b, c):
widgets.HTMLMath(value=Markdown('${}\\times{}\\times{}={}$'.format(a, b, c, a*b*c))._repr_markdown_())
def f3(a, b, c):
print(Markdown('${}\\times{}\\times{}={}$'.format(a, b, c, a*b*c))._repr_markdown_())
def f4(a, b, c):
return Markdown('${}\\times{}\\times{}={}$'.format(a, b, c, a*b*c))._repr_markdown_()
def f5(a, b, c):
print('${}\\times{}\\times{}={}$'.format(a, b, c, a*b*c))
def f6(a, b, c):
display(widgets.HTMLMath(value='${}\\times{}\\times{}={}$'.format(a, b, c, a*b*c)))
def f7(a, b, c):
display(Markdown('${}\\times{}\\times{}={}$'.format(a, b, c, a*b*c)))
out = widgets.interactive_output(f7, {'a': a, 'b': b, 'c': c})
widgets.HBox([widgets.VBox([a, b, c]), out])
#widgets.HBox([widgets.VBox([a, b, c]), widgets.HTMLMath('$x=y+z$')])
Markdown('$${}\\times{}\\times{}={}$$'.format(a.value, b.value, c.value, a.value*b.value*c.value))
%matplotlib inline
from ipywidgets import interactive
import matplotlib.pyplot as plt
import numpy as np
def f(m, b):
plt.figure(2)
x = np.linspace(-10, 10, num=1000)
plt.plot(x, m * x + b)
plt.ylim(-5, 5)
plt.show()
interactive_plot = interactive(f, m=(-2.0, 2.0), b=(-3, 3, 0.5))
output = interactive_plot.children[-1]
output.layout.height = '350px'
interactive_plot
def g(x,y):
    # build the Markdown from the function arguments rather than hard-coded values
    return Markdown("""
$$
{x}\\times {y}={z}
$$
""".format(x=x, y=y, z=x*y))
print(g(6,7))
x=3
y=4
Markdown("""
$$
\\begin{{align}}
\\begin{{split}}
z & =x \\times y \\\\
& = {x} \\times {y} \\\\
& = {z}
\\end{{split}}
\\end{{align}}
$$
""".format(x=x,y=y,z=x*y))
Latex("""
$$
\\begin{{align}}
\\begin{{split}}
z & =x \\times y \\\\
& = {x} \\times {y} \\\\
& = {z}
\\end{{split}}
\\end{{align}}
$$
""".format(x=x,y=y,z=x*y))
Explanation: Here we will set the fields to one of several values so that we can see pre-configured examples.
End of explanation |
9,066 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data wrangling
Who we are
Jackie Kazil (@JackieKazil), author of the O'Reilly book Data Wrangling with Python
Abe Epton (@aepton)
What we'll cover
Loading data
Transforming it
Storing it
How we'll do it
We'll be working with a dataset, contracts_data.csv in the data/ folder of the python-data-wrangling repository. It contains contract data from USASpending.gov from FY 2015 where Colorado was the recipient state. We'll use it to answer some real-life questions and provide examples.
Loading data
Loading from CSVs
The agate library (formerly csvkit and journalism), by former Chicago Tribune developer Chris Groskopf, was built with journalists handling CSVs in mind (here's the documentation). The fundamental unit in agate is the table, and there's a one-step method of creating a table from a CSV file
Step1: Preview the data
Now, let's preview a few of the records to see what the data that we are going to be working with looks like. For this by iterating over the rows in the table. To access the rows we interate over table.rows. Since this is a big file, we are going to return the first few rows by slicing (ie [
Step2: A quick peek inside the data
We can use the print_table function to just look at a few rows at a time, and within them, look at a few columns formatted to be readable by humans
Step3: Another way to load CSVs
It's worth mentioning that there's another way to load data from a CSV
Step4: Loading from JSON
The json module includes two main methods
Step5: Loading data from another sources
XML, Excel, and other data formats
There are mant different data formats that exist in the world. We are not going to cover them. Just know that there is a library almost all of them. For example to load XML, you might use Element Tree or lxml, and to load Excel data, you would use openpyxl or xlrd.
Sometimes libraries are clunky and horrible to use. For example, let's say you didn't like how the xlrd library to import Excel data worked. If you didn't have a lot of sheets in your Excel workbook to export, a way around this is to export is manually to CSV first. Besides the manual route, there are a host of tools online and on the commandline that convert data formats from one to another.
Loading from an API
API stands for Application Programming Interface, but what does that mean? It is basically a end point available over the web for you to pick data up from. Before you try to gather data from an API by writing your own code, you should search for a library that does it for you. Larger, more well know APIs such as Twitter's API has multiple libraries in Python. Somtimes these are written by the provider and sometimes by an outside party.
If nothing exists, then use the Requests library to hit the API end point to gather the data. The data will most likely be in one the formats already mentioned (i.e. JSON or CSV). Then you will save that locally and continue the loading process.
API Keys and tokens. Some APIs have keys and also sometimes tokens to limit how much the user can pull data from the API and possibly track their usage. While this isn't covered, you will need to add a few steps to account for this. See the example below to see how this is done.
token = #Your Token
headers = {'Authorization'
Step6: XML is a little more complicated to parse, so default to JSON or CSV when possible. This API doesn't make it possible.
Transforming data
Summing up a column of data
As with many tasks in Python, this can be done in several ways. The agate library is great because it makes this trivially easy
Step7: It's not much harder to do without agate, however. We can just initialize a counter, loop over each row, and update the counter
Step8: You'll notice that we didn't simply do
Step9: Filtering rows of data
The basic steps we'll take to filter our list of data won't be much different from what we did above, to compute the total of the 'dollarsobligated' column. We create a new list to store our results, loop through the current list, and when we find a row that matches our criteria, we add it to the new list
Step10: Of course this would be shorter in agate. But there's a shorter way in vanilla Python as well, using a very useful shorthand technique called list comprehensions
Step11: If you don't feel comfortable with list comprehensions, don't worry - they're a useful shorthand, but they'll feel more intuitive once you've written enough Python loops to make your fingers bleed. They're not really doing anything different from our first example, and while they're cute and compact, they can be tricky once you need to do anything more than very basic stuff, like filtering based on a single criterion.
Here's how you would accomplish the same filtering task in agate
Step12: Sorting rows of data
The sorted function takes a list and returns a sorted version. It expects an iterable, like a list, and by default will just compare every element to every other in order to alphabetically sort them.
You can tell sorted how to compare two elements in the iterable, which allows us to get a version of csv_data sorted by vendor name in just one line
Step13: The agate version of this is a bit more direct. order_by takes the name(s) of a column and returns a table; select filters a table by column(s).
Step14: Geocoding addresses
We'll be using the excellent library geopy (docs; run pip install geopy if you don't have it on your machine), which provides a common, simple interface to a variety of different geocoding services. It's worth noting that not all geocoding services work equally well, and they often have limits on how many requests you can make in a short amount of time. So if you're going to geocode a large number of addresses, you'll need to figure out which service is best for you.
To create an instance of a geocoder using a particular service, first import the appropriate class
Step15: Comparing dates and date strings
The standard datetime module and the excellent strftime.org cheat sheet (seriously, bookmark it) make Python able to translate between a really delicious variety of date and time formats.
To work with dates in our data, we first need to convert strings containing dates to actual date objects. To see why, let's ask the question
Step16: This is because, when Python is comparing strings to each other (and both of the above dates, despite looking date-like, are just strings of text) it defaults to comparing them alphabetically. Does the first letter of string A come before the first letter of string B? If so, A < B.
So, we need to tell Python how to convert our string of arbitrary text into a datetime object. Once we do that, we get all kinds of superpowers - we can add and subtract time from a date, compare dates to each other, adjust the timezone of our date, pull out just the month or year, determine what day of the week it was, and on and on.
The datetime module provides several types of date-related classes we can use (in particular, date, time and datetime, which combines the first two) but for now we'll just rely on datetime. Annoyingly, datetime is both the name of a module, and the name of a class within that module, so we have to do dumb stuff like this
Step17: Storing data
Saving data as a CSV
This will vary somewhat depending on what type of data you have, and in what form. Generally, I find that CSVs are best thought of as lists of dictionaries - each line in the file is an item in the list, and each line contains a dictionary's worth of data. For that reason, it's a good idea to coerce your data into a list of dictionaries (all with the same set of keys) before writing them to a file.
We'll start with the non-agate version first, so you can see each step as it's happening. We'll use the DictWriter class, much as we used the DictReader class to read data in from a CSV.
Step18: As you may have suspected, this is much quicker to write in agate. We really only need one line
Step19: Saving data as JSON
The json module makes it as easy to write JSON as it is to read it. However, since JSON is a very verbose format, it can make for very large files. The contracts_data.csv file, above, is 1.4 MB (enough to fit on a floppy!) - but the exact same data, stored in JSON, is over three times larger - 4.6 MB.
So it's worth considering for a moment whether storing data in JSON is what you really want to do. It's great for Javascript web apps, for instance, because it's highly flexible and self-documenting, and therefore easy for programs to read it and work with it. But, particularly if your data has a large number of columns, the size of your JSON files can get very large very quickly (because every single row will repeat the name of every single column, plus 4 " characters and a
Step20: Once again, this is one line in agate
Step21: Bonus Round | Python Code:
import agate
table = agate.Table.from_csv('data/contracts_data.csv')
print table
Explanation: Data wrangling
Who we are
Jackie Kazil (@JackieKazil), author of the O'Reilly book Data Wrangling with Python
Abe Epton (@aepton)
What we'll cover
Loading data
Transforming it
Storing it
How we'll do it
We'll be working with a dataset, contracts_data.csv in the data/ folder of the python-data-wrangling repository. It contains contract data from USASpending.gov from FY 2015 where Colorado was the recipient state. We'll use it to answer some real-life questions and provide examples.
Loading data
Loading from CSVs
The agate library (formerly csvkit and journalism), by former Chicago Tribune developer Chris Groskopf, was built with journalists handling CSVs in mind (here's the documentation). The fundamental unit in agate is the table, and there's a one-step method of creating a table from a CSV file:
data = agate.Table.from_csv('data.csv')
Our data is located in the data/contracts_data.csv file. Let's load it into an agate table and check out how the example works. Think about how you'd access a row's data. How would you answer a question like "sum up all the values of a column in this spreadsheet?"
End of explanation
for row in table.rows[:3]:
for column in row.keys():
print '%s: %s' % (column, row[column])
print '--------------------------------------------------------------'
Explanation: Preview the data
Now, let's preview a few of the records to see what the data we are going to be working with looks like. To do this, we iterate over the rows in the table, which we access via table.rows. Since this is a big file, we only return the first few rows by slicing (i.e. [:3]) table.rows.
To get all the column headers, we use row.keys() for each row. Then we iterate over each of those and output the column header, and then the corresponding value through a lookup (i.e. row[column]).
Finally, the last line of dashes (--------------------) is a visual output for us to easily identify where one record ends and the next one begins.
End of explanation
table.print_table(max_columns=4, max_rows=10)
Explanation: A quick peek inside the data
We can use the print_table function to just look at a few rows at a time, and within them, look at a few columns formatted to be readable by humans:
End of explanation
from csv import DictReader
from pprint import pprint # Pprint give you a prettier output
csv_data = []
with open('data/contracts_data.csv') as datafile:
reader = DictReader(datafile)
for row in reader:
# each row is now a dict; each column in the CSV is a key in the dict
csv_data.append(row)
print 'Found %d lines' % len(csv_data)
# Let's preview the first record.
pprint(csv_data[0])
Explanation: Another way to load CSVs
It's worth mentioning that there's another way to load data from a CSV: csv.DictReader.
What is the difference between agate & csv libraries?
* csv is a part of core Python, which means there are no addition libraries to install
* agate is built on top of csv and adds additional features to handling data
* csv provides direct access to features such as writing csvs
* agate handles a few of the processing elements in the background with the ability to override, while csv will sometimes require explicit arguments to be passed for a file to load
* agate provides features for data wrangling, which is why we are using it
Note: One is not better than the other. They have different uses with overlapping features.
Even though we are using agate, you should know how to import using the built-in csv library, because sometimes you just need to import a csv. To load a csv, we will use the DictReader method, which returns a list of dictionaries. Each dictionary is one row (or record) in the csv, with the header row (assuming there is one) converted into the dictionary's keys.
End of explanation
import json
from pprint import pprint
with open('data/contracts_data.json') as json_data:
data = json.load(json_data)
# Let's preview the 4th record
pprint(data[3])
Explanation: Loading from JSON
The json module includes two main methods: loads to load JSON data, and dumps to create JSON from Python objects. If you haven't worked with JSON before, it's a very convenient way of passing around data objects using pure text.
loads just takes a single string of JSON-formatted data as an argument, and returns a Python object.
data = json.loads('{"foo":"bar"}')
print data['foo']
displays bar.
Let's apply this to our contracts data.
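To make the loads/dumps pairing concrete, here is a small, self-contained sketch; the dictionary contents are made up purely for illustration.
import json
# dumps: Python object -> JSON string
record = {'vendorname': 'ACME CORP', 'dollarsobligated': 1234.56}
as_text = json.dumps(record)
print(as_text)
# loads: JSON string -> Python object
back_again = json.loads(as_text)
print(back_again['dollarsobligated'])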
End of explanation
import requests
# Example URL from API documentation page
url = 'https://www.usaspending.gov/fpds/fpds.php?detail=b&fiscal_year=2015&stateCode=TX&max_records=10'
response = requests.get(url)
print response
print(response.content)
import xml.etree.ElementTree as ET
root = ET.fromstring(response.content)
for child in root:
print child.tag, '-', child.attrib
results = root[1]
for record in results:
print record
# Let's look at the first record
for record in results[:1]:
# Let's iterate over the colummns for the record and pull out the data
for column in record:
print column.tag, '---', column.text
Explanation: Loading data from other sources
XML, Excel, and other data formats
There are many different data formats that exist in the world. We are not going to cover them all. Just know that there is a library for almost all of them. For example, to load XML you might use ElementTree or lxml, and to load Excel data, you would use openpyxl or xlrd.
Sometimes libraries are clunky and horrible to use. For example, let's say you didn't like how the xlrd library for importing Excel data works. If you don't have a lot of sheets in your Excel workbook to export, a way around this is to export it manually to CSV first. Besides the manual route, there are a host of tools online and on the command line that convert data formats from one to another.
Loading from an API
API stands for Application Programming Interface, but what does that mean? It is basically an endpoint available over the web for you to pick data up from. Before you try to gather data from an API by writing your own code, you should search for a library that does it for you. Larger, better-known APIs such as Twitter's have multiple libraries in Python. Sometimes these are written by the provider and sometimes by an outside party.
If nothing exists, then use the Requests library to hit the API endpoint and gather the data. The data will most likely be in one of the formats already mentioned (i.e. JSON or CSV). Then you will save that locally and continue the loading process.
API Keys and tokens. Some APIs have keys and also sometimes tokens to limit how much the user can pull data from the API and possibly track their usage. While this isn't covered, you will need to add a few steps to account for this. See the example below to see how this is done.
token = #Your Token
headers = {'Authorization':'token %s' % token}
r = requests.get(url, params=params, headers=headers)
USA Spending API. For the work that we have done so far, we have manually downloaded the data using this form. However, USAspending.gov offers an API to retrieve the data. This is great when you want lots of data and want to automate the retrieval.
First, check the Internet to see if a library exists to interact with the USA Spending API. If you don't find anything, check PyPI, the repository for Python libraries. You will find that someone has created one for USAspending.gov. At PyPI, you can see how it is used. However, for our example API interaction, we are going to use the Requests library, which is more generally applicable.
USA Spending has 3 APIs:
1. Contracts
2. Assistance - FAADS
3. Sub-Awards
For our example, we will continue with contracts.
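As a hedged sketch of how the token, headers, and params pieces fit together in practice, here is a minimal authenticated request; the URL and token below are placeholders, not a real endpoint.
import requests
token = 'YOUR-API-TOKEN-HERE'           # placeholder - supplied by the API provider
url = 'https://api.example.com/awards'  # placeholder endpoint
params = {'fiscal_year': 2015, 'stateCode': 'CO', 'max_records': 10}
headers = {'Authorization': 'token %s' % token}
response = requests.get(url, params=params, headers=headers)
print(response.status_code)
# if the endpoint returns JSON, parse it directly; otherwise fall back to response.content
if response.headers.get('content-type', '').startswith('application/json'):
    data = response.json()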
End of explanation
table.aggregate(agate.Sum('dollarsobligated'))
Explanation: XML is a little more complicated to parse, so default to JSON or CSV when possible. This API doesn't make it possible.
Transforming data
Summing up a column of data
As with many tasks in Python, this can be done in several ways. The agate library is great because it makes this trivially easy:
End of explanation
counter = 0
for row in csv_data:
counter += float(row['dollarsobligated'])
print 'Total is $%.2f' % counter
Explanation: It's not much harder to do without agate, however. We can just initialize a counter, loop over each row, and update the counter:
End of explanation
a = '123'
b = '456'
print a + b
Explanation: You'll notice that we didn't simply do:
counter += row['dollarsobligated']
This is because, in contrast to agate - which can automatically detect the column type based on the data inside it - DictReader assumes that every column in the csv file is a string. Adding two strings together, even strings only containing numbers, merely combines them:
End of explanation
filtered_list = []
for row in csv_data:
if row['womenownedflag'] == 'Y':
filtered_list.append(row)
print 'Found %d rows that match our criteria, out of all %d rows' % (len(filtered_list), len(csv_data))
Explanation: Filtering rows of data
The basic steps we'll take to filter our list of data won't be much different from what we did above, to compute the total of the 'dollarsobligated' column. We create a new list to store our results, loop through the current list, and when we find a row that matches our criteria, we add it to the new list:
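If you need more than one criterion, the same loop pattern extends naturally; a sketch (assuming the csv_data list loaded above and that veteranownedflag uses the same 'Y'/'N' coding as womenownedflag):
both_flags = []
for row in csv_data:
    # keep rows that satisfy *both* conditions
    if row['womenownedflag'] == 'Y' and row['veteranownedflag'] == 'Y':
        both_flags.append(row)
print('Found %d woman- and veteran-owned vendors' % len(both_flags))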
End of explanation
lc_filtered_list = [row for row in csv_data if row['womenownedflag'] == 'Y']
print 'Found %d rows that match our criteria, out of all %d rows' % (len(lc_filtered_list), len(csv_data))
Explanation: Of course this would be shorter in agate. But there's a shorter way in vanilla Python as well, using a very useful shorthand technique called list comprehensions:
End of explanation
agate_filtered_list = table.where(lambda row: row['womenownedflag'])
print 'Found %d rows that match our criteria, out of all %d rows' % (len(agate_filtered_list.rows), len(table.rows))
Explanation: If you don't feel comfortable with list comprehensions, don't worry - they're a useful shorthand, but they'll feel more intuitive once you've written enough Python loops to make your fingers bleed. They're not really doing anything different from our first example, and while they're cute and compact, they can be tricky once you need to do anything more than very basic stuff, like filtering based on a single criterion.
Here's how you would accomplish the same filtering task in agate:
End of explanation
for row in sorted(csv_data, key=lambda row: row['vendorname']):
print row['vendorname']
Explanation: Sorting rows of data
The sorted function takes a list and returns a sorted version. It expects an iterable, like a list, and by default will just compare every element to every other in order to alphabetically sort them.
You can tell sorted how to compare two elements in the iterable, which allows us to get a version of csv_data sorted by vendor name in just one line:
End of explanation
table.order_by('vendorname').select(['vendorname', 'dollarsobligated']).print_table(max_rows=10)
Explanation: The agate version of this is a bit more direct. order_by takes the name(s) of a column and returns a table; select filters a table by column(s).
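order_by also accepts a reverse flag (check the agate docs for your version), so a sketch of "largest contracts first" using the columns above looks like this, with the vanilla-Python equivalent for comparison:
# largest dollar amounts first; limit() keeps just the top rows
table.order_by('dollarsobligated', reverse=True).limit(10).select(['vendorname', 'dollarsobligated']).print_table(max_rows=10)
# vanilla Python: convert the string column to a float before sorting
top10 = sorted(csv_data, key=lambda row: float(row['dollarsobligated']), reverse=True)[:10]
for row in top10:
    print('%s  %s' % (row['vendorname'], row['dollarsobligated']))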
End of explanation
from geopy.geocoders import GoogleV3
geocoder = GoogleV3()
for row in table.limit(10).rows:
address = ', '.join([
row['streetaddress'],
row['city'],
row['state'],
str(row['zipcode'])[0:5]])
coords = geocoder.geocode(address)
print 'Before', address
print 'After', coords.address, coords.latitude, coords.longitude
print '------'
Explanation: Geocoding addresses
We'll be using the excellent library geopy (docs; run pip install geopy if you don't have it on your machine), which provides a common, simple interface to a variety of different geocoding services. It's worth noting that not all geocoding services work equally well, and they often have limits on how many requests you can make in a short amount of time. So if you're going to geocode a large number of addresses, you'll need to figure out which service is best for you.
To create an instance of a geocoder using a particular service, first import the appropriate class:
from geopy.geocoders import Nominatim
Then create the instance:
geocoder = Nominatim()
Once we have the geocoder instance created, using it is as simple as passing a string containing the address we're interested in:
location = geocoder.geocode("1701 California Street, Denver, CO")
And from there:
print location.latitude, location.longitude
Returns 39.7472023 -104.9904179
For instance
Let's create an instance of the Google geocoder, and use it to find the latitude and longitude of a couple of the vendors in our dataset. (Heads up: most geocoding services restrict heavy usage via IP addresses, so this classroom might get temporarily blocked and the examples may not work).
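Because of those limits, it is polite (and safer) to pause between requests when geocoding more than a handful of rows. A minimal sketch using the Nominatim class introduced above; the one-second pause is a conservative guess, not an official quota.
import time
from geopy.geocoders import Nominatim
geocoder = Nominatim()
for row in table.limit(3).rows:
    address = '%s, %s, %s' % (row['streetaddress'], row['city'], row['state'])
    location = geocoder.geocode(address)
    if location is not None:
        print('%s -> %.5f, %.5f' % (address, location.latitude, location.longitude))
    time.sleep(1)  # wait between calls so we don't hammer the service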
End of explanation
older = '5-13-1989'
newer = '2010-06-17'
if older < newer:
print "That's what I expect."
else:
print "Huh?"
Explanation: Comparing dates and date strings
The standard datetime module and the excellent strftime.org cheat sheet (seriously, bookmark it) make Python able to translate between a really delicious variety of date and time formats.
To work with dates in our data, we first need to convert strings containing dates to actual date objects. To see why, let's ask the question: which of these dates comes first?
End of explanation
from datetime import datetime
newer = '2010-06-17'
print datetime.strptime(newer, '%Y-%m-%d')
two_digit_year = '1/20/00'
print datetime.strptime(two_digit_year, '%m/%d/%y')
# Now you see why Y2K seemed like a big deal (does anyone even remember Y2K?) Why does it pick this date to convert to?
crazy_text_having_variable = '2013 in June on day 12'
print datetime.strptime(crazy_text_having_variable, '%Y in %B on day %d')
Explanation: This is because, when Python is comparing strings to each other (and both of the above dates, despite looking date-like, are just strings of text) it defaults to comparing them alphabetically. Does the first letter of string A come before the first letter of string B? If so, A < B.
So, we need to tell Python how to convert our string of arbitrary text into a datetime object. Once we do that, we get all kinds of superpowers - we can add and subtract time from a date, compare dates to each other, adjust the timezone of our date, pull out just the month or year, determine what day of the week it was, and on and on.
The datetime module provides several types of date-related classes we can use (in particular, date, time and datetime, which combines the first two) but for now we'll just rely on datetime. Annoyingly, datetime is both the name of a module, and the name of a class within that module, so we have to do dumb stuff like this:
from datetime import datetime
Or
import datetime
datetime.datetime.now()
I like the first one, myself. If we just wanted, say, date then we'd do:
from datetime import date
Or
import datetime
datetime.date.today()
Then we need to determine how to understand the date objects we're working with in our data (and this is where strftime.org is really useful). We do this by creating a format string, which tells datetime how our dates are structured.
Take older, above. Its date is "5-13-1989": "month hyphen day hyphen 4-digit year". In the format string language that datetime uses, that translates to "%m (month) hyphen %d (day) hyphen %Y (4-digit year)". datetime expects that the format string will also tell it about any non-date characters, so we also have to include the hyphens in our format string. The end result will look like this:
format_string = '%m-%d-%Y'
We then use the strptime function to create a datetime object from a string. We have to pass it both the string we'd like to convert, and the format string that tells us how to do so:
dt = datetime.strptime('5-13-1989', format_string)
For instance
Let's convert the dates below into datetime objects. For bonus points, try converting them into date objects. What did you have to do differently? When might one be preferred over the other?
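As a quick illustration of those superpowers, here is the earlier "which comes first?" question again, this time with real datetime objects, plus a timedelta to show date arithmetic:
from datetime import datetime, timedelta
older = datetime.strptime('5-13-1989', '%m-%d-%Y')
newer = datetime.strptime('2010-06-17', '%Y-%m-%d')
# now the comparison behaves the way a human expects
print(older < newer)               # True
print(newer - older)               # the gap between them, as a timedelta
print(older + timedelta(days=30))  # date arithmetic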
End of explanation
from csv import DictWriter
# First, open the file. Using the mode 'w+' means create the file if it doesn't exist,
# and if it does exist, delete the file first.
with open('data/new_file.csv', 'w+') as fh:
# Next, create a DictWriter object, passing it two parameters: the file you've opened, and the column names to use
writer = DictWriter(fh, csv_data[0].keys())
# Now, make sure to include a header row at the beginning of the file, so we can work with it later
writer.writeheader()
# Finally, let's write every line in our data to the file
writer.writerows(csv_data)
Explanation: Storing data
Saving data as a CSV
This will vary somewhat depending on what type of data you have, and in what form. Generally, I find that CSVs are best thought of as lists of dictionaries - each line in the file is an item in the list, and each line contains a dictionary's worth of data. For that reason, it's a good idea to coerce your data into a list of dictionaries (all with the same set of keys) before writing them to a file.
We'll start with the non-agate version first, so you can see each step as it's happening. We'll use the DictWriter class, much as we used the DictReader class to read data in from a CSV.
End of explanation
table.to_csv('data/agate_new_file.csv')
Explanation: As you may have suspected, this is much quicker to write in agate. We really only need one line:
End of explanation
import json
with open('data/contracts_data.json', 'w+') as fh:
fh.write(json.dumps(csv_data))
Explanation: Saving data as JSON
The json module makes it as easy to write JSON as it is to read it. However, since JSON is a very verbose format, it can make for very large files. The contracts_data.csv file, above, is 1.4 MB (enough to fit on a floppy!) - but the exact same data, stored in JSON, is over three times larger - 4.6 MB.
So it's worth considering for a moment whether storing data in JSON is what you really want to do. It's great for Javascript web apps, for instance, because it's highly flexible and self-documenting, and therefore easy for programs to read it and work with it. But, particularly if your data has a large number of columns, the size of your JSON files can get very large very quickly (because every single row will repeat the name of every single column, plus 4 " characters and a : character).
Another thing to keep in mind is that some datatypes can't be represented in JSON. For instance, datetime objects need to be converted to strings first; you'll get an error if you try to save a datetime in JSON directly.
The dumps method of the json module is the exact opposite of the loads method we used above, to load JSON data. Think of dumps as meaning, "DUMP to String" and loads as meaning, "LOAD from String". Here, we just want to write one gigantic string to a file,
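For example, a plain json.dumps call on a row containing a datetime raises a TypeError; one common workaround (a sketch - make sure string dates are acceptable downstream) is to let dumps fall back to str:
import json
from datetime import datetime
row = {'vendorname': 'ACME CORP', 'signeddate': datetime(2015, 6, 17)}
# json.dumps(row) would raise TypeError because datetime is not JSON serializable
as_text = json.dumps(row, default=str)  # default=str converts anything json can't handle
print(as_text)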
End of explanation
table.to_json('data/agate_new_file.json')
Explanation: Once again, this is one line in agate:
End of explanation
print table
Explanation: Bonus Round: Questions we can ask of this data
Now, you can start asking questions of this data. What are some interesting questions that this data might answer for us, and how would we do that?
Which congressional districts get the most funding?
Which agencies get the most funding in my state?
[With the help of additional data] Which congressional committees do congress folk sit on and is there a correlation between the committees they sit on and the agencies or contractors that are getting contracts?
How removed is the location of the contractor vs the place where the work is being performed?
What interesting things might there be in the walshhealyact, servicecontractact, davisbaconact, and clingercohenact columns?
What is the distribution of size of vendor to number of contracts and also to the number of contracting dollars? Is any vendor an outlier?
What are the reasons for non-competed contracts?
What percentage of contracts are in each of these categories, and are the numbers reflective of the population as a whole:
issbacertifiedsmalldisadvantagedbusiness
womenownedflag
veteranownedflag
minorityownedbusinessflag
tribalgovernmentflag
ishispanicservicinginstitution
iswomenownedsmallbusiness
isecondisadvwomenownedsmallbusiness
isjointventurewomenownedsmallbusiness
isjointventureecondisadvwomenownedsmallbusiness
What other kinds of questions can we ask?
End of explanation |
9,067 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hands-On Exercise 2
Step1: There is a lot of information for each source, and the overall image, in each of these catalog files. As a demonstration of the parameters available for each source, we will next load the file and show each of the parameters. Note - for a detailed explanation for the definition of each of these columns, please refer to the SExtractor documentation links above.
Step2: In the next step, we define a function that will read the catalog. The main SExtractor parameters that we will need are
Step3: Now, we can run the function, and determine the number of sources in our reference catalog.
Following that, we will use the Python function glob to grab all of the individual SExtractor catalogs. These files contain the epoch by epoch photometric measurements of the sources in ptfField 22683 ccd 06. The file names will be stored as epoch_catalogs.
Step4: Problem 2) Match Individual Detections to Reference Catalog Sources
The next step towards constructing light curves is one of the most difficult
Step5: With the function defined, we now populate and store the arrays with the light curve information.
Step6: At times, SExtrator will produce "measurements" that are clearly non-physical, such as magnitude measurements of 99 (while a source may be that faint, we cannot detect such a source with PTF). We will mask everything with a clearly wrong magnitude measurement.
Step7: Now that we have performed source assoiciation and populated the mags array, we can plot light curves of individual sources. Here is an example for the 63rd source in the array (recall that NumPy arrays are zero indexed).
Step8: Note that the scatter for this source is $\sim 0.11$ mag. We will later show this to be the case, but for now, trust us that this scatter is large for a source with average brightness $\sim 18.6$ mag. Either, this is a genuine variable star, with a significant decrease in brightness around MJD 56193, or, this procedure, so far, is poor.
For reasons that will become clear later, we are now going to filter our arrays so that only sources with at least 20 detections included. As a brief justification - sources with zero detections should, obviously, be excluded from our array, while requiring 20 detections improves our ability to reliably measure periodicity.
Before we do this, we can examine which sources are most likely to be affected by this decision. For each source, we can plot the number of masked epochs (i.e. non-detections) as a function of that source's brightness.
Step9: From this plot a few things are immediately clear
Step10: Now that we have eliminated the poorly sampled light curves, we can also if the typical uncertainties measured by SExtractor are properly estimated by comparing their values to the typical scatter in a given light curve. For non-variable stars the scatter should be approximately equal to the mean uncertainty measurement for a given star.
Step11: At the bright end, corresponding to sources brighter than 19th mag, we see that the typical scatter is larger than the mean uncertainty measurement. We can improve the scatter, however, so we will re-investigate this feature later. You will also notice that at the faint end the scatter is typically smaller than the mean uncertainty. This occurs because the light curves produced by our methodology are biased - in particular, the faint sources are more likely to be detected in epochs where they are a little brighter than normal and less likely to be detected in epochs where they are a little fainter than normal. As a result, summary statistics for these sources (essentially everything fainter than 20th mag if you scroll up two plots), will be misleading.
We can also plot the typical scatter as a function of magnitude. This diagnostic for the photometric performance of a time-domain survey is the most common plot that you'll find in the literature.
Note - (1) here we take standard deviation of a log quantity, mag. This will overestimate the true value of the scatter at lowish S/N. It’s always best to compute stats in flux space then convert to mag. For simplicity we skip that here. Further examples of the dangers of statistical inference from mag measures can be found on Frank Masci's website.(2) Non-detections on the faint end artificially supress the overall scatter.
Step12: This plot shows that for a typical star ($R < 19$ mag), we can achieve a scatter of $\sim 0.08$ mag. As has already been noted - this performance is poor for stars this bright with a telescope as large as P48.
Problem 3) Calculate Differential Photometry Corrections
Why is the scatter so large for PTF light curves?
There are two reasons this is the case
Step13: We can now use the relative_photometry function to calculate the $\Delta m$ for each epoch.
Step14: To quickly see the effect of applying the $\Delta m$ corrections, we can once again plot the light curve of the source that we previously examined.
Step15: Wow! It is now pretty clear that this source isn't a variable. The variations appear more or less consistent with Gaussian noise, and the scatter for this source has decreased by a factor of $\sim 2$. That is a significant improvement over what we obtained when using the "raw" values from the PTF SExtractor catalogs.
Once again, the scatter as a function of magnitude will provide a decent proxy for the overall quality of the light curves.
Step16: This looks much, much better than what we had before, where all the bright stars had a scatter of $\sim 0.08$ mag. Now, the brightest stars have a scatter as small as $\sim 0.007$ mag, while even stars as faint as $R = 19$ mag have scatter $< 0.01$ mag. In other words, we now have good quality light curves (good enough for publication in many cases, though caution should always always always be applied to large survey data).
Problem 4) Store, and Later Access, the Light Curves
As we now have high quality light curves, it is important that we store the results of our work. We will do that using the shelve module within Python which will allow us to quickly and easily access each of these light curves in the future.
Step17: Loading the shelf file is fast and easy.
Step19: Finally, we have created a function, which we will use during the next few days, to produce the light curve for a source at a given RA and Dec on ptfField 22683 ccd 06. The function is below, and it loads the shelf file, performs a cross match against the user-supplied RA and Dec, and returns the light curve if there is a source with a separation less than 1 arcsec from the user-supplied position.
Step20: Problem 1 Test the source_lightcurve function - load the light curve for the star located at $\alpha_{\mathrm J2000} =$ 20 | Python Code:
reference_catalog = '../data/PTF_Refims_Files/PTF_d022683_f02_c06_u000114210_p12_sexcat.ctlg'
# select R-band data (f02)
Explanation: Hands-On Exercise 2: Making a Lightcurve from PTF catalog data
Version 0.2
This "hands-on" session will proceed differently from those that are going to follow. Below, we have included all of the code that is necessary to create light curves from PTF SExtractor catalogs. (For additional information on SExtractor please consult the SExtractor manual. This manual is far from complete, however, so you may want to also consult SExtractor For Dummies.) You will not need to write any software, but we will still go through everything step by step so you can see the details of how the light curves are constructed.
As we saw in the previous talk, there are many different ways to make photometric measurements, which are necessary to ultimately create a light curve. In brief, the procedure below matches sources across epochs (i.e. different observations) by comparing everything to a deep reference catalog. Photometric corrections (i.e. differential photometry) are then calculated based on how much the aperture photometry on each epoch differs from the reference image.
This notebook will include commands necessary to load and manipulate PTF data, as well as a procedure that is needed to make differential corrections to the light curves.
By EC Bellm and AA Miller (c) 2015 Aug 05
Problem 1) Load Source Information from the Reference Catalog
Our first step is create a "master" source list based on the reference catalog. We adopt the reference image for this purpose for two reasons: most importantly, (i) PTF reference images are made from stacks of the individual exposures so they are typically significantly deeper than individual exposures, and (ii) the reference images cover a larger footprint than the individual exposures.
First, we provide the path to the reference catalog and store it in reference_catalog.
End of explanation
hdus = fits.open(reference_catalog)
data = hdus[1].data
data.columns
Explanation: There is a lot of information for each source, and the overall image, in each of these catalog files. As a demonstration of the parameters available for each source, we will next load the file and show each of the parameters. Note - for a detailed explanation for the definition of each of these columns, please refer to the SExtractor documentation links above.
End of explanation
def load_ref_catalog(reference_catalog):
hdus = fits.open(reference_catalog)
data = hdus[1].data
# filter flagged detections
w = ((data['flags'] & 506 == 0) & (data['MAG_AUTO'] < 99))
data = data[w]
ref_coords = coords.SkyCoord(data['X_WORLD'], data['Y_WORLD'],frame='icrs',unit='deg')
star_class = np.array(data["CLASS_STAR"]).T
return np.vstack([data['MAG_AUTO'],data['MAGERR_AUTO']]).T, ref_coords, star_class
Explanation: In the next step, we define a function that will read the catalog. The main SExtractor parameters that we will need are: MAG_AUTO and MAGERR_AUTO, the mag and mag uncertainty, respectively, as well as X_WORLD and Y_WORLD, which are the RA and Dec, respectively, and finally flags, which contains processing flags. After reading the catalog, the function will select sources that have no flags, and return the position of these sources, their brightness, as well as a SExtractor parameter CLASS_STAR, which provides a numerical estimation of whether or not a source is a star. Sources with CLASS_STAR $\sim 1$ are likely stars, and sources with CLASS_STAR $\sim 0$ are likely galaxies, but beware that this classification is far from perfect, especially at the faint end. Recall that galaxies cannot be used for differential photometry as they are resolved.
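A quick aside on the flags test inside the function: SExtractor flags are a bitmask, so data['flags'] & 506 == 0 keeps only sources with none of the bits encoded in 506 set (exactly which problems those bits represent is defined in the SExtractor/PTF documentation). The mechanics, on made-up values:
flag_value = 0b00000010       # a hypothetical flag value with one bit set
bad_bits = 506                # 0b111111010 - the bits we refuse to accept
print(flag_value & bad_bits)  # non-zero, so this source would be rejected
print(0 & bad_bits)           # a source with no flags set passes the cut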
End of explanation
ref_mags, ref_coords, star_class = load_ref_catalog(reference_catalog)
epoch_catalogs = glob('../data/PTF_Procim_Files/PTF*f02*.ctlg.gz') # Note - files have been gzipped to save space
print("There are {:d} sources in the reference image".format( len(ref_mags) ))
print("...")
print("There are {:d} epochs for this field".format( len(epoch_catalogs) ))
Explanation: Now, we can run the function, and determine the number of sources in our reference catalog.
Following that, we will use the Python function glob to grab all of the individual SExtractor catalogs. These files contain the epoch by epoch photometric measurements of the sources in ptfField 22683 ccd 06. The file names will be stored as epoch_catalogs.
End of explanation
def crossmatch_epochs(reference_coords, epoch_catalogs):
n_stars = len(reference_coords)
n_epochs = len(epoch_catalogs)
mags = np.ma.zeros([n_stars, n_epochs])
magerrs = np.ma.zeros([n_stars, n_epochs])
mjds = np.ma.zeros(n_epochs)
with astropy.utils.console.ProgressBar(len(epoch_catalogs),ipython_widget=True) as bar:
for i, catalog in enumerate(epoch_catalogs):
hdus = fits.open(catalog)
data = hdus[1].data
hdr = hdus[2].header
# filter flagged detections
w = ((data['flags'] & 506 == 0) & (data['imaflags_iso'] & 1821 == 0))
data = data[w]
epoch_coords = coords.SkyCoord(data['X_WORLD'], data['Y_WORLD'],frame='icrs',unit='deg')
idx, sep, dist = coords.match_coordinates_sky(epoch_coords, reference_coords)
wmatch = (sep <= 1.5*u.arcsec)
# store data
if np.sum(wmatch):
mags[idx[wmatch],i] = data[wmatch]['MAG_APER'][:,2] + data[wmatch]['ZEROPOINT']
magerrs[idx[wmatch],i] = data[wmatch]['MAGERR_APER'][:,2]
mjds[i] = hdr['OBSMJD']
bar.update()
return mjds, mags, magerrs
Explanation: Problem 2) Match Individual Detections to Reference Catalog Sources
The next step towards constructing light curves is one of the most difficult: source association. From the reference catalog, we know the positions of the stars and the galaxies in ptfField 22683 ccd 06. The positions of these stars and galaxies as measured on the individual epochs will be different than the positions measured on the reference image, so we need to decide how to associate the two.
Simply put, we will crossmatch the reference catalog and individual epoch catalogs, and consider all associations with a separation less than our tolerance to be a match. For the most part, this is the standard procedure for source association, and we will adopt a tolerance of 1.5 arcsec (the most common value is 1 arcsec).
We will use astropy to crossmatch sources between the two catalogs, and we will perform a loop over every catalog so we can build up lightcurves for the individual sources. To store the data, we will construct a two-dimenstional NumPy mask array. Each row in the array will represent a source in the reference catalog, while each column will represent each epoch. Thus, each source's light curve can be read by examining the corresponding row of the mags array. We will also store the uncertainty of each mag measurement in magerrs. The date corresponding to each column will be stored in a separate 1D array: mjds. Finally, including the masks allows us to track when a source is not detected in an individual exposure.
Note - there are some downsides to this approach: (i) crossmatching to sources in the reference catalog means we will miss any transients in this field as they are (presumably) not in the reference image. (ii) The matching tolerance of 1.5 arcsec is informed [0.01 arcsec is way too small and 100 arcsec is way too big], but arbitrary. Is a source separation of 1.49 arcsec much more significant than a source separation of 1.51 arcsec? While it is more significant, a binary decision threshold at 1.5 is far from perfect. (iii) This procedure assumes that the astrometric information for each catalog is correct. While this is true for the vast, vast majority of PTF images, there are some fields ($< 1\%$) where the astrometric solution can be incorrect by more than a few arcsec.
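To make the matching-tolerance idea concrete, here is a tiny stand-alone sketch with made-up coordinates; the 1.5 arcsec cut is the same one used in the function above.
import astropy.coordinates as coords
import astropy.units as u
ref = coords.SkyCoord([150.000, 150.010], [2.200, 2.210], frame='icrs', unit='deg')
new = coords.SkyCoord([150.0001, 150.020], [2.2001, 2.220], frame='icrs', unit='deg')
idx, sep, _ = coords.match_coordinates_sky(new, ref)
good = sep <= 1.5*u.arcsec
print(idx)   # index of the nearest reference source for each new detection
print(good)  # True only where that nearest neighbor is within the tolerance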
End of explanation
mjds,mags,magerrs = crossmatch_epochs(ref_coords, epoch_catalogs)
Explanation: With the function defined, we now populate and store the arrays with the light curve information.
End of explanation
# mask obviously bad mags
wbad = (mags < 10) | (mags > 25)
mags[wbad] = np.ma.masked
magerrs[wbad] = np.ma.masked
Explanation: At times, SExtractor will produce "measurements" that are clearly non-physical, such as magnitude measurements of 99 (while a source may be that faint, we cannot detect such a source with PTF). We will mask everything with a clearly wrong magnitude measurement.
End of explanation
source_idx = 62
plt.errorbar(mjds, mags[source_idx,:],magerrs[source_idx,:],fmt='none')
plt.ylim(np.ma.max(mags[source_idx,:])+0.3, np.ma.min(mags[source_idx,:])-0.2)
plt.xlabel("MJD")
plt.ylabel("R mag")
print("scatter = {:.3f}".format(np.ma.std(mags[source_idx,:])))
Explanation: Now that we have performed source association and populated the mags array, we can plot light curves of individual sources. Here is an example for the 63rd source in the array (recall that NumPy arrays are zero indexed).
End of explanation
n_epochs = len(epoch_catalogs)
plt.scatter(ref_mags[:,0], np.ma.sum(mags.mask,axis=1), alpha=0.1, edgecolor = "None")
plt.plot([13, 22], [n_epochs - 20, n_epochs - 20], 'DarkOrange') # plot boundary for sources with Ndet > 20
plt.xlabel('R mag', fontsize = 13)
plt.ylabel('# of masked epochs', fontsize = 13)
plt.tight_layout()
Explanation: Note that the scatter for this source is $\sim 0.11$ mag. We will later show this to be the case, but for now, trust us that this scatter is large for a source with average brightness $\sim 18.6$ mag. Either this is a genuine variable star, with a significant decrease in brightness around MJD 56193, or this procedure, so far, is poor.
For reasons that will become clear later, we are now going to filter our arrays so that only sources with at least 20 detections are included. As a brief justification - sources with zero detections should, obviously, be excluded from our array, while requiring 20 detections improves our ability to reliably measure periodicity.
Before we do this, we can examine which sources are most likely to be affected by this decision. For each source, we can plot the number of masked epochs (i.e. non-detections) as a function of that source's brightness.
End of explanation
Ndet20 = n_epochs - np.ma.sum(mags.mask,axis=1) >= 20
mags = mags[Ndet20]
magerrs = magerrs[Ndet20]
ref_mags = ref_mags[Ndet20]
ref_coords = ref_coords[Ndet20]
star_class = star_class[Ndet20]
print('There are {:d} sources with > 20 detections on individual epochs.'.format( sum(Ndet20) ))
Explanation: From this plot a few things are immediately clear: (i) potentially saturated sources ($R \lesssim 14$ mag) are likely to have fewer detections (mostly because they are being flagged by SExtractor), (ii) faint sources ($R \gtrsim 20$ mag) are likely to have fewer detections (because the limiting magnitude of individual PTF exposures is $\sim 20.5$ mag), and (iii) the faintest sources are the most likely to have light curves with very few points.
Identifying sources with at least 20 epochs can be done using a conditional statement, and we will store the Boolean results of this conditional statement in an array Ndet20. We will use this array to remove sources with fewer than 20 detections in their light curves.
End of explanation
plt.scatter(ref_mags[:,0], np.ma.std(mags,axis=1)**2. - np.ma.mean(magerrs**2.,axis=1),
edgecolor = "None", alpha = 0.2)
plt.ylim(-0.2,0.5)
plt.yscale('symlog', linthreshy=0.01)
plt.xlabel('R (mag)', fontsize = 13)
plt.ylabel(r'$ std(m)^2 - <\sigma_m^2>$', fontsize = 14)
Explanation: Now that we have eliminated the poorly sampled light curves, we can also check whether the typical uncertainties measured by SExtractor are properly estimated by comparing their values to the typical scatter in a given light curve. For non-variable stars the scatter should be approximately equal to the mean uncertainty measurement for a given star.
End of explanation
# examine a plot of the typical scatter as a function of magnitude
plt.scatter(ref_mags[:,0], np.ma.std(mags,axis=1),alpha=0.1)
plt.ylim(0.005,0.5)
plt.yscale("log")
plt.xlabel('R (mag)', fontsize = 13)
plt.ylabel(r'$std(m)$', fontsize = 14)
Explanation: At the bright end, corresponding to sources brighter than 19th mag, we see that the typical scatter is larger than the mean uncertainty measurement. We can improve the scatter, however, so we will re-investigate this feature later. You will also notice that at the faint end the scatter is typically smaller than the mean uncertainty. This occurs because the light curves produced by our methodology are biased - in particular, the faint sources are more likely to be detected in epochs where they are a little brighter than normal and less likely to be detected in epochs where they are a little fainter than normal. As a result, summary statistics for these sources (essentially everything fainter than 20th mag if you scroll up two plots), will be misleading.
We can also plot the typical scatter as a function of magnitude. This diagnostic for the photometric performance of a time-domain survey is the most common plot that you'll find in the literature.
Note - (1) here we take the standard deviation of a log quantity, mag. This will overestimate the true value of the scatter at lowish S/N. It’s always best to compute stats in flux space and then convert to mag; for simplicity we skip that here. Further examples of the dangers of statistical inference from mag measures can be found on Frank Masci's website. (2) Non-detections on the faint end artificially suppress the overall scatter.
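For reference, a hedged sketch of what the "flux space" version of this calculation might look like (converting the mags to relative fluxes, measuring the fractional scatter there, and mapping it back to a magnitude-like quantity):
# convert each light curve from mag to (arbitrary-zero-point) flux
flux = 10.**(-0.4 * mags)
flux_scatter = np.ma.std(flux, axis=1) / np.ma.mean(flux, axis=1)
# a fractional flux scatter maps back to mags via ~1.0857 * (sigma_f / f)
mag_like_scatter = 1.0857 * flux_scatter
plt.scatter(ref_mags[:,0], mag_like_scatter, alpha=0.1)
plt.yscale("log")
plt.xlabel('R (mag)', fontsize = 13)
plt.ylabel('flux-based scatter (mag)', fontsize = 14)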
End of explanation
def relative_photometry(ref_mags, star_class, mags, magerrs):
#make copies, as we're going to modify the masks
all_mags = mags.copy()
all_errs = magerrs.copy()
# average over observations
refmags = np.ma.array(ref_mags[:,0])
madmags = 1.48*np.ma.median(np.abs(all_mags - np.ma.median(all_mags, axis = 1).reshape(len(ref_mags),1)), axis = 1)
MSE = np.ma.mean(all_errs**2.,axis=1)
# exclude bad stars: highly variable, saturated, or faint
# use excess variance to find bad objects
excess_variance = madmags**2. - MSE
wbad = np.where((np.abs(excess_variance) > 0.1) | (refmags < 14.5) | (refmags > 17) | (star_class < 0.9))
# mask them out
refmags[wbad] = np.ma.masked
# exclude stars that are not detected in a majority of epochs
Nepochs = len(all_mags[0,:])
nbad = np.where(np.ma.sum(all_mags > 1, axis = 1) <= Nepochs/2.)
refmags[nbad] = np.ma.masked
# for each observation, take the median of the difference between the median mag and the observed mag
# annoying dimension swapping to get the 1D vector to blow up right
relative_zp = np.ma.median(all_mags - refmags.reshape((len(all_mags),1)),axis=0)
return relative_zp
Explanation: This plot shows that for a typical star ($R < 19$ mag), we can achieve a scatter of $\sim 0.08$ mag. As has already been noted - this performance is poor for stars this bright with a telescope as large as P48.
Problem 3) Calculate Differential Photometry Corrections
Why is the scatter so large for PTF light curves?
There are two reasons this is the case:
We are measuring the scatter from fixed aperture measurements, but we have not accounted for the fact that the seeing varies image to image. We can correct for this via differential photometry, however.
The calibration of PTF images only works properly on nights with photometric conditions (see Ofek et al. 2012). Again, we can correct for this via differential photometry.
The basic idea for differential photometry is the following: using "standard" stars (what constitutes standard can be argued, but most importantly these should not be variable), small corrections to the photometry of every star in a given image are calculated in order to place the photometry from every epoch on the same relative zero-point. The corrections are determined by comparing the "standard" stars to their mean (or median) value. Typically, the corrections are determined by averaging over a large number of stars.
The function relative_photometry, which is defined below, goes through this procedure to improve the quality of the PTF light curves. To calculate the $\Delta m$ required for each epoch, we take a few (essentially justified) short cuts: only stars with $R \ge 14.5$ mag are included to avoid saturation, further stars with $R > 17$ mag are excluded so only high SNR sources are used to calculate the corrections, sources with the SExtractor parameter CLASS_STAR $< 0.9$ (i.e. likely galaxies) are excluded, and sources with excess_variance $> 0.1$ (defined below) are excluded to remove likely variable stars. After these exclusions, the remaining stars are used to calculate the median difference between their reference magnitude and their brightness on the individual epochs.
End of explanation
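Before running it on the real catalog, here is a tiny, made-up illustration of the zero-point idea the function implements: three constant stars observed on four epochs, where one epoch is systematically 0.3 mag too faint. The per-epoch median offset recovers exactly that shift.
# Toy illustration with invented numbers (not survey data):
toy_true = np.array([15.0, 16.0, 17.0])                  # "true" magnitudes of 3 constant stars
epoch_offsets = np.array([0.0, 0.3, 0.0, -0.1])          # per-epoch zero-point errors
toy_obs = toy_true[:, None] + epoch_offsets[None, :]     # shape (3 stars, 4 epochs)
toy_zp = np.median(toy_obs - toy_true[:, None], axis=0)  # recovered per-epoch corrections
print(toy_zp)                                            # -> [ 0.   0.3  0.  -0.1]
print(toy_obs - toy_zp)                                  # corrected mags are constant again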
# compute the relative photometry and subtract it. Don't fret about error propagation
rel_zp = relative_photometry(ref_mags, star_class, mags, magerrs)
mags -= np.ma.resize(rel_zp, mags.shape)
Explanation: We can now use the relative_photometry function to calculate the $\Delta m$ for each epoch.
End of explanation
source_idx = 18
plt.errorbar(mjds, mags[source_idx,:],magerrs[source_idx,:],fmt='none')
plt.ylim(np.max(mags[source_idx,:])+0.3, np.min(mags[source_idx,:])-0.05)
plt.xlabel("MJD")
plt.ylabel("R mag")
print("scatter = {:.3f}".format(np.ma.std(mags[source_idx,:])))
Explanation: To quickly see the effect of applying the $\Delta m$ corrections, we can once again plot the light curve of the source that we previously examined.
End of explanation
plt.scatter(ref_mags[:,0], np.ma.std(mags,axis=1),alpha=0.1, edgecolor = "None")
plt.ylim(0.003,0.7)
plt.yscale("log")
plt.xlim(13,22)
plt.xlabel('R (mag)', fontsize = 13)
plt.ylabel(r'$std(m)$', fontsize = 14)
Explanation: Wow! It is now pretty clear that this source isn't a variable. The variations appear more or less consistent with Gaussian noise, and the scatter for this source has decreased by a factor of $\sim 2$. That is a significant improvement over what we obtained when using the "raw" values from the PTF SExtractor catalogs.
Once again, the scatter as a function of magnitude will provide a decent proxy for the overall quality of the light curves.
End of explanation
# save the output: ref_coords, mjds, mags, magerrs.
outfile = reference_catalog.split('/')[-1].replace('ctlg','shlv')
shelf = shelve.open('../data/'+outfile,flag='c',protocol=pickle.HIGHEST_PROTOCOL)
shelf['mjds'] = mjds
shelf['mags'] = mags
shelf['magerrs'] = magerrs
shelf['ref_coords'] = ref_coords
shelf.close()
Explanation: This looks much, much better than what we had before, where all the bright stars had a scatter of $\sim 0.08$ mag. Now, the brightest stars have a scatter as small as $\sim 0.007$ mag, while even stars as faint as $R = 19$ mag have scatter $< 0.01$ mag. In other words, we now have good quality light curves (good enough for publication in many cases, though caution should always always always be applied to large survey data).
Problem 4) Store, and Later Access, the Light Curves
As we now have high quality light curves, it is important that we store the results of our work. We will do that using the shelve module within Python which will allow us to quickly and easily access each of these light curves in the future.
End of explanation
# demonstrate getting the data back out
shelf = shelve.open('../data/'+outfile)
for key in shelf.keys():
print(key, shelf[key].shape)
shelf.close()
Explanation: Loading the shelf file is fast and easy.
End of explanation
def source_lightcurve(rel_phot_shlv, ra, dec, matchr = 1.0):
Crossmatch ra and dec to a PTF shelve file, to return light curve of a given star
shelf = shelve.open(rel_phot_shlv)
ref_coords = coords.SkyCoord(shelf["ref_coords"].ra, shelf["ref_coords"].dec,frame='icrs',unit='deg')
source_coords = coords.SkyCoord(ra, dec,frame='icrs',unit='deg')
idx, sep, dist = coords.match_coordinates_sky(source_coords, ref_coords)
wmatch = (sep <= matchr*u.arcsec)
if sum(wmatch) == 1:
mjds = shelf["mjds"]
mags = shelf["mags"][idx]
magerrs = shelf["magerrs"][idx]
# filter so we only return good points
wgood = (mags.mask == False)
if (np.sum(wgood) == 0):
raise ValueError("No good photometry at this position.")
return mjds[wgood], mags[wgood], magerrs[wgood]
else:
raise ValueError("There are no matches to the provided coordinates within %.1f arcsec" % (matchr))
Explanation: Finally, we have created a function, which we will use during the next few days, to produce the light curve for a source at a given RA and Dec on ptfField 22683 ccd 06. The function is below, and it loads the shelf file, performs a cross match against the user-supplied RA and Dec, and returns the light curve if there is a source with a separation less than 1 arcsec from the user-supplied position.
End of explanation
ra, dec = 312.503802, -0.706603
source_mjds, source_mags, source_magerrs = source_lightcurve( # complete
plt.errorbar( # complete
plt.ylim( # complete
plt.xlabel( # complete
plt.ylabel( # complete
Explanation: Problem 1 Test the source_lightcurve function - load the light curve for the star located at $\alpha_{\mathrm J2000} =$ 20:50:00.91, $\delta_{\mathrm J2000} =$ -00:42:23.8. An image of this star can be found here. After loading the light curve for this star, plot its light curve, including the uncertainties on the individual epochs.
End of explanation |
9,068 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Functions
Making reusable blocks of code.
Starting point
Step1: What about for $a = 2$, $b = 8$, and $c = 1$?
Step3: Functions
Step5: Observe how this function works.
Step7: Summarize
Step10: Summarize
How do you get information into the function?
Modify
Alter the code below so it takes two arguments (a and b) and prints out both of them.
Step12: Predict
What does b=5 let you do?
Step14: b=5 allows you to define a default value for the argument.
How do you get information out of a function?
Step17: Summarize
How do you get information out of the function?
Modify
Alter the program below so it returns the calculated value.
Step19: To return multiple values, use commas
Step20: Implement
Write a function that uses the quadratic equation to find both roots of a polynomial for any $a$, $b$, and $c$. | Python Code:
## Code here
Explanation: Functions
Making reusable blocks of code.
Starting point:
In this exercise, we're going to calculate one of the roots from the quadratic formula:
$r_{p} = \frac{-b + \sqrt{b^{2} - 4ac}}{2a}$
Determine $r_{p}$ for $a = 1$, $b=4$, and $c=3$.
End of explanation
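One possible way to fill in the cell above (shown here only as a sketch of the intended answer) is a direct transcription of the formula:
import numpy as np

a, b, c = 1., 4., 3.
r_p = (-b + np.sqrt(b**2 - 4*a*c)) / (2*a)
print(r_p)   # expect -1.0 for a=1, b=4, c=3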
## Code here
Explanation: What about for $a = 2$, $b = 8$, and $c = 1$?
End of explanation
def square(x):
This function will square x.
return x*x
s = square(5)
print(s)
Explanation: Functions:
Code can be organized into functions.
Functions allow you to wrap a piece of code and use it over and over.
Makes code reusable
Avoid having to re-type the same code (each time, maybe making an error)
Observe how this function works.
End of explanation
import math
def hypotenuse(y,theta):
Return a hypotenuse given y and theta in radians.
return math.sin(theta)*y
h = hypotenuse(1,math.pi/2)
print(h)
Explanation: Observe how this function works.
End of explanation
def some_function(ARGUMENT):
Print out ARGUMENT.
return 1
print(ARGUMENT)
some_function(10)
some_function("test")
Explanation: Summarize:
What is the syntax for defining a function?
How do you get information into a function?
End of explanation
def some_function(a):
print out a
print(a)
def some_function(a,b):
print out a and b
print(a,b)
Explanation: Summarize
How do you get information into the function?
Modify
Alter the code below so it takes two arguments (a and b) and prints out both of them.
End of explanation
def some_function(a,b=5,c=7):
Print a and b.
print(a,b,c)
some_function(1,c=2)
some_function(1,2)
some_function(a=5,b=4)
Explanation: Predict
What does b=5 let you do?
End of explanation
def some_function(a):
Multiply a by 5.
return a*5
print(some_function(2))
print(some_function(80.5))
x = some_function(5)
print(x)
Explanation: b=5 allows you to define a default value for the argument.
How do you get information out of a function?
End of explanation
def some_function(a,b):
Sum up a and b.
v = a + b
return v
v = some_function(1,2)
print(v)
def some_function(a,b):
Sum up a and b.
v = a + b
return v
Explanation: Summarize
How do you get information out of the function?
Modify
Alter the program below so it returns the calculated value.
End of explanation
def some_function(a):
Multiply a by 5 and 2.
return a*5, a*2
x, y = some_function(5)
print(x)
Explanation: To return multiple values, use commas:
End of explanation
## Code here
Explanation: Implement
Write a function that uses the quadratic equation to find both roots of a polynomial for any $a$, $b$, and $c$.
End of explanation |
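A minimal sketch of one possible solution, assuming the discriminant is non-negative (the function name is my own choice):
import math

def quadratic_roots(a, b, c):
    # Return both roots of a*x**2 + b*x + c = 0 (real roots assumed).
    disc = math.sqrt(b**2 - 4*a*c)
    return (-b + disc) / (2*a), (-b - disc) / (2*a)

print(quadratic_roots(1, 4, 3))   # expect (-1.0, -3.0)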
9,069 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Calculate performance of kWIP
The next bit of Python code calculates the performance of kWIP against the distance between samples calculated from the alignments of their genomes.
This code calculates Spearman's $\rho$ between the off-diagonal elements of the triangular distance matrices.
Step1: Statistical analysis
It is done in R, as that's easier.
Below we see a summary and structure of the data
Step2: Experiment design
Below we see the design of the experiment in terms of the two major variables.
We have a series (vertically) that, at 30x coverage, looks at the effect of genetic variation on performance. There is a second series that examines the effect of coverage at an average pairwise genetic distance of 0.001.
There are 100 replicates for each data point, performed as a separate bootstrap across the random creation of the tree and sampling of reads etc.
Step3: Effect of Coverage
Here we show the spread of data across the 100 reps as boxplots per metric and coverage level.
I note that the weighted product seems slightly more variable, particularly at higher coverage. Though the median is nearly always higher | Python Code:
expts = list(map(lambda fp: path.basename(fp.rstrip('/')), glob('data/*/')))
print("Number of replicate experiments:", len(expts))
def process_expt(expt):
expt_results = []
def extract_info(filename):
return re.search(r'kwip/(\d\.?\d*)x-(0\.\d+)-(wip|ip).dist', filename).groups()
# dict of scale: distance matrix, populated as we go
truths = {}
for distfile in glob("data/{}/kwip/*.dist".format(expt)):
cov, scale, metric = extract_info(distfile)
if scale not in truths:
genome_dist_path = 'data/{ex}/all_genomes-{sc}.dist'.format(ex=expt, sc=scale)
truths[scale] = load_sample_matrix_to_runs(genome_dist_path)
exptmat = DistanceMatrix.read(distfile)
rho = distmat_corr(truths[scale], exptmat, stats.spearmanr).correlation
expt_results.append({
"coverage": cov,
"scale": scale,
"metric": metric,
"rho": rho,
"seed": expt,
})
return expt_results
#process_expt('3662')
results = []
for res in map(process_expt, expts):
results.extend(res)
results = pd.DataFrame(results)
Explanation: Calculate performance of kWIP
The next bit of Python code calculates the performance of kWIP against the distance between samples calculated from the alignments of their genomes.
This code calculates Spearman's $\rho$ between the off-diagonal elements of the triangular distance matrices.
End of explanation
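The helpers load_sample_matrix_to_runs and distmat_corr are imported from elsewhere and are not shown here; purely as a hypothetical sketch of the idea (not their actual implementation), comparing the off-diagonal elements of two square distance matrices with Spearman's rho could look like this:
from scipy import stats
import numpy as np

def offdiag_spearman(mat_a, mat_b):
    # Compare only the upper-triangular (off-diagonal) entries of two square matrices.
    iu = np.triu_indices_from(mat_a, k=1)
    return stats.spearmanr(mat_a[iu], mat_b[iu]).correlation

# toy check with two similar random matrices
rng = np.random.RandomState(0)
x = rng.rand(5, 5)
y = x + 0.01 * rng.rand(5, 5)
print(offdiag_spearman(x, y))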
%%R -i results
results$coverage = as.numeric(as.character(results$coverage))
results$scale = as.numeric(as.character(results$scale))
print(summary(results))
str(results)
Explanation: Statistical analysis
It is done in R, as that's easier.
Below we see a summary and structure of the data
End of explanation
%%R
ggplot(results, aes(x=coverage, y=scale)) +
geom_point() +
scale_x_log10() +
scale_y_log10() +
theme_bw()
Explanation: Experiment design
Below we see the design of the experiment in terms of the two major variables.
We have a series (vertically) that, at 30x coverage, looks at the effect of genetic variation on performance. There is a second series that examines the effect of coverage at an average pairwise genetic distance of 0.001.
There are 100 replicates for each data point, performed as a separate bootstrap across the random creation of the tree and sampling of reads etc.
End of explanation
%%R
dat = results %>%
filter(scale==0.001, coverage<=30) %>%
select(rho, metric, coverage)
dat$coverage = as.factor(dat$coverage)
ggplot(dat, aes(x=coverage, y=rho, fill=metric)) +
geom_boxplot(aes(fill=metric))
%%R
# AND AGAIN WITHOUT SUBSETTING
dat = results %>%
filter(scale==0.001) %>%
select(rho, metric, coverage)
dat$coverage = as.factor(dat$coverage)
ggplot(dat, aes(x=coverage, y=rho, fill=metric)) +
geom_boxplot(aes(fill=metric)) +
theme_bw()
%%R
dat = subset(results, scale==0.001, select=-scale)
ggplot(dat, aes(x=coverage, y=rho, colour=seed, linetype=metric)) +
geom_line() +
scale_x_log10()
%%R
summ = results %>%
filter(scale==0.001) %>%
select(-scale) %>%
group_by(coverage, metric) %>%
summarise(rho_av=mean(rho), rho_err=sd(rho))
p = ggplot(summ, aes(x=coverage, y=rho_av, ymin=rho_av-rho_err, ymax=rho_av+rho_err, group=metric)) +
geom_line(aes(linetype=metric)) +
geom_ribbon(aes(fill=metric), alpha=0.2) +
xlab('Genome Coverage') +
ylab(expression(paste("Spearman's ", rho, " +- SD"))) +
#scale_x_log10()+
#ggtitle("Performance of WIP & IP") +
theme_bw()
pdf("coverage-vs-rho_full.pdf",width=7, height=4)
print(p)
dev.off()
p
%%R
summ = results %>%
filter(scale==0.001, coverage <= 50) %>%
select(-scale) %>%
group_by(coverage, metric) %>%
summarise(rho_av=mean(rho), rho_err=sd(rho))
p = ggplot(summ, aes(x=coverage, y=rho_av, ymin=rho_av-rho_err, ymax=rho_av+rho_err, group=metric)) +
geom_line(aes(linetype=metric)) +
geom_ribbon(aes(fill=metric), alpha=0.2) +
xlab('Genome Coverage') +
ylab(expression(paste("Spearman's ", rho, " +- SD"))) +
#scale_x_log10()+
#ggtitle("Performance of WIP & IP") +
theme_bw()
pdf("coverage-vs-rho_50x.pdf",width=5, height=4)
print(p)
dev.off()
p
%%R
sem <- function(x) sqrt(var(x,na.rm=TRUE)/length(na.omit(x)))
summ = results %>%
filter(scale==0.001) %>%
select(-scale) %>%
group_by(coverage, metric) %>%
summarise(rho_av=mean(rho), rho_err=sem(rho))
ggplot(summ, aes(x=coverage, y=rho_av, ymin=rho_av-rho_err, ymax=rho_av+rho_err, group=metric)) +
geom_line(aes(linetype=metric)) +
geom_ribbon(aes(fill=metric), alpha=0.2) +
xlab('Genome Coverage') +
ylab(expression(paste("Spearman's ", rho))) +
scale_x_log10()+
theme_bw()
%%R
cov_diff = results %>%
filter(scale==0.001) %>%
select(rho, metric, coverage, seed) %>%
spread(metric, rho) %>%
mutate(diff=wip-ip) %>%
select(coverage, seed, diff)
print(summary(cov_diff))
p = ggplot(cov_diff, aes(x=coverage, y=diff, colour=seed)) +
geom_line() +
scale_x_log10() +
ggtitle("Per expt difference in performance (wip - ip)")
print(p)
summ = cov_diff %>%
group_by(coverage) %>%
summarise(diff_av=mean(diff), diff_sd=sd(diff))
ggplot(summ, aes(x=coverage, y=diff_av, ymin=diff_av-diff_sd, ymax=diff_av+diff_sd)) +
geom_line() +
geom_ribbon(alpha=0.2) +
xlab('Genome Coverage') +
ylab(expression(paste("Improvment in Spearman's ", rho, " (wip - IP)"))) +
scale_x_log10() +
theme_bw()
%%R
var = results %>%
filter(coverage == 10, scale <= 0.05) %>%
select(metric, rho, scale)
var$scale = as.factor(as.character(var$scale))
str(var)
ggplot(var, aes(x=scale, y=rho, fill=metric)) +
geom_boxplot(aes(fill=metric)) +
xlab('Mean pairwise variation') +
ylab(expression(paste("Spearman's ", rho))) +
theme_bw()
%%R
summ = results %>%
filter(coverage == 10, scale <= 0.04) %>%
select(-coverage) %>%
group_by(scale, metric) %>%
summarise(rho_av=mean(rho), rho_sd=sd(rho))
str(summ)
p = ggplot(summ, aes(x=scale, y=rho_av, ymin=rho_av-rho_sd, ymax=rho_av+rho_sd, group=metric)) +
geom_line(aes(linetype=metric)) +
geom_ribbon(aes(fill=metric), alpha=0.2) +
xlab(expression(paste('Mean pairwise variation (', pi, ')'))) +
ylab(expression(paste("Spearman's ", rho, " +- SD"))) +
scale_x_log10()+
theme_bw()
pdf("pi-vs-performance.pdf",width=5, height=4)
print(p)
dev.off()
p
%%R
var_diff = results %>%
filter(coverage==10) %>%
select(rho, metric, scale, seed) %>%
spread(metric, rho) %>%
mutate(diff=wip-ip) %>%
select(scale, seed, diff)
summ_var_diff = var_diff %>%
group_by(scale) %>%
summarise(diff_av=mean(diff), diff_sd=sd(diff))
%%R
p = ggplot(var_diff, aes(x=scale, y=diff, colour=seed)) +
geom_line() +
scale_x_log10() +
ggtitle("Per expt difference in performance (wip - ip)")
print(p)
%%R
ggplot(summ_var_diff, aes(x=scale, y=diff_av, ymin=diff_av-diff_sd, ymax=diff_av+diff_sd)) +
geom_line() +
geom_ribbon(alpha=0.2) +
xlab('Average variants/site') +
ylab(expression(paste("Improvment in Spearman's ", rho, " (wip - IP)"))) +
scale_x_log10() +
theme_bw()
Explanation: Effect of Coverage
Here we show the spread of data across the 100 reps as boxplots per metric and coverage level.
I note that the weighted product seems slightly more variable, particularly at higher coverage, though the median is nearly always higher.
End of explanation |
9,070 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transfer function of a position sensor.
Andrés Marrugo, PhD
Universidad Tecnológica de Bolívar.
The transfer function of a small position sensor is evaluated experimentally. The sensor is made of a very small magnet and the position with respect to the centerline (see Figure 2.24) is sensed by the horizontal, restoring force on the magnet. The magnet is held at a fixed distance, $h$, from the iron plate. The measurements are given in the table below.
| | | | | | | | | |
|---------------------- |---- |------- |------- |------- |------- |------- |------- |------- |
| Displacement, d [mm] | 0 | 0.08 | 0.16 | 0.24 | 0.32 | 0.4 | 0.48 | 0.52 |
| Force [mN] | 0 | 0.576 | 1.147 | 1.677 | 2.187 | 2.648 | 3.089 | 3.295 |
| | | | | | | | | |
Fig. 2.24 A simple position sensor.
Find the linear transfer function that best fits these data.
Find a transfer function in the form of a second-order polynomial ($y = a+bf+cf^2$), where $y$ is the displacement and $f$ is the restoring force by evaluating the constants $a$, $b$, and $c$.
Plot the original data together with the transfer functions in (1) and (2) and discuss the errors in the choice of approximation.
Solution
Let's begin by plotting the data.
Step1: We can see that the data is approximately linear, but not quite.
The linear transfer fuction that best fits the data is found by performing a linear fit in the least squares sense. If we go back to the book and review how to carry out linear approximation of nonlinear transfer functions.
Step2: We see that we need to fit the data to a line with equation $d=af+b$, and we need to compute the coefficients $a$ and $b$ that provides a best fit in the least squares sense.
To do this in python we use the polyfit function.
Step3: We have obtained the linear fit to the data. Several points are not exactly on the line, therefore there's always an error with respect to the ideal transfer function. Probably a second order fit might be better.
For the transfer function in (2), $y = a+bf+cf^2$, we have to find $a$, $b$, and $c$.
Step4: Now we plot both transfer functions.
Step5: Now let's compute the errors. | Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
d = np.array([0,0.08,0.16,0.24,0.32,0.4,0.48,0.52])
f = np.array([0,0.576,1.147,1.677,2.187,2.648,3.089,3.295])
plt.plot(f,d,'*')
plt.ylabel('Displacement [mm]')
plt.xlabel('Force [mN]')
plt.show()
Explanation: Transfer function of a position sensor.
Andrés Marrugo, PhD
Universidad Tecnológica de Bolívar.
The transfer function of a small position sensor is evaluated experimentally. The sensor is made of a very small magnet and the position with respect to the centerline (see Figure 2.24) is sensed by the horizontal, restoring force on the magnet. The magnet is held at a fixed distance, $h$, from the iron plate. The measurements are given in the table below.
| | | | | | | | | |
|---------------------- |---- |------- |------- |------- |------- |------- |------- |------- |
| Displacement, d [mm] | 0 | 0.08 | 0.16 | 0.24 | 0.32 | 0.4 | 0.48 | 0.52 |
| Force [mN] | 0 | 0.576 | 1.147 | 1.677 | 2.187 | 2.648 | 3.089 | 3.295 |
| | | | | | | | | |
Fig. 2.24 A simple position sensor.
Find the linear transfer function that best fits these data.
Find a transfer function in the form of a second-order polynomial ($y = a+bf+cf^2$), where $y$ is the displacement and $f$ is the restoring force by evaluating the constants $a$, $b$, and $c$.
Plot the original data together with the transfer functions in (1) and (2) and discuss the errors in the choice of approximation.
Solution
Let's begin by plotting the data.
End of explanation
from IPython.display import IFrame
IFrame('../pdfs/linear-approximation.pdf',
width='100%', height=400)
Explanation: We can see that the data is approximately linear, but not quite.
The linear transfer function that best fits the data is found by performing a linear fit in the least squares sense. Let's go back to the book and review how to carry out linear approximation of nonlinear transfer functions.
End of explanation
# polyfit computes the coefficients a and b of degree=1
a,b = np.polyfit(f,d,1)
print('The coefficients are a =',a,'b =',b)
d1 = a*f+b
plt.plot(f,d1,':b',label='Fitted line')
plt.plot(f,d,'*')
plt.ylabel('Displacement [mm]')
plt.xlabel('Force [mN]')
plt.axis([0,3.5,0,0.6])
plt.show()
Explanation: We see that we need to fit the data to a line with equation $d=af+b$, and we need to compute the coefficients $a$ and $b$ that provide a best fit in the least squares sense.
To do this in Python we use the polyfit function.
End of explanation
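For completeness, the same coefficients could also be obtained by solving the least-squares problem directly; this is just an alternative sketch, not part of the original solution:
# Alternative to polyfit: solve d ≈ a*f + b with numpy's least-squares solver.
A = np.vstack([f, np.ones_like(f)]).T
coeffs, residuals, rank, sv = np.linalg.lstsq(A, d, rcond=None)
print('a =', coeffs[0], 'b =', coeffs[1])   # should match np.polyfit(f, d, 1)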
# polyfit computes the coefficients a and b of degree=1
c2,b2,a2 = np.polyfit(f,d,2)
print('The coefficients are a =',a2,'b =',b2,'c =',c2)
Explanation: We have obtained the linear fit to the data. Several points are not exactly on the line, so there is always an error with respect to the ideal transfer function. A second-order fit might be better.
For the transfer function in (2), $y = a+bf+cf^2$, we have to find $a$, $b$, and $c$.
End of explanation
d2 = a2+b2*f+c2*f**2
tf2=plt.plot(f,d2,'--k',label='2nd order fit')
tf1=plt.plot(f,d1,':b',label='Linear fit')
tf0=plt.plot(f,d,'*',label='Exact output')
plt.ylabel('Displacement [mm]')
plt.xlabel('Force [mN]')
plt.legend(loc='upper left')
plt.show()
Explanation: Now we plot both transfer functions.
End of explanation
# Linear fit error
error1 = np.sum(np.abs(d - d1))/len(d)
error1_max = np.max(np.abs(d - d1))
print('Mean error linear fit: ', error1)
print('Max error linear fit: ', error1_max)
# Error fitting to a second order degree polynomial
error2 = np.sum(np.abs(d - d2))/len(d)
error2_max = np.max(np.abs(d - d2))
print('Mean error 2nd order fit: ',error2)
print('Max error 2nd order fit: ',error2_max)
Explanation: Now let's compute the errors.
End of explanation |
9,071 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Módulo 2
Step9: Construção
Series
Step20: DataFrame
Step21: Acessando valores
Definição das Variáveis
Step22: Slicing
Series
Step30: DataFrame
Step33: * Atribuição de Valores em DataFrames
Step36: Masks
Conceito
Step38: Aplicação
* Series
Step41: * DataFame
Step45: Operações Vetoriais
Definição das Variáveis
Step46: Manipulações Numéricas
* Incrementando o Preço Unitário
Step47: * Desconto de 10% no Preço Unitário
Step48: * Cálculo do Preço Total por Item
Step49: * Cálculo do Preço por Kg
Step50: * Preenchendo NaNs
Step51: * Soma
Step52: * Média
Step53: * Desvio Padrão
Step54: * Mediana
Step55: * Moda (valores mais frequentes)
Step56: Análise de Dados
Definição das Variáveis
Step61: Descrição dos dados
Step63: Desafio 1
Objetivo
Step65: Dataset Original
O dataset original é uma Series com 1000 elementos cujos dados pertençam a uma distribuição normal de média igual a 150 e desvio padrão 10.
Construção
Step66: O Acumulador
O acumulador é um DataFrame usado para acumular as transformações feitas em cima do dataset original. Cada transformação será armazenada em uma coluna cujo nome descreve a transformação feita sobre os dados.
Insira o dataset criado na coluna de nome original.
Step68: Inserção de dados
Para cada item a seguir, crie um dataset de distribuição normal contendo N elementos, usando a média e o sigma também fornecidos pelo item.
Em seguida, concatene os novos elementos gerados à Series original usando o código abaixo
Step70: [ B ] Elementos de outra distribuição
N = 100
média = 400
sigma = 100
coluna = "outliers_adicionados"
Step72: [ C ] Elementos Próximos à média
N = 1000
média = 150
sigma = 0.1
coluna = "elementos_prox_a_media"
Step74: Avaliação das Séries
Step75: Desafio 2
Objetivo
Step77: Dataset Codificado | Python Code:
import numpy as np
import pandas as pd
Explanation: Module 2: Introduction to the pandas Library
Tutorial
Imports for this Lesson
End of explanation
Construtor padrão
pd.Series(
name="Compras",
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
data=[2, 12, 1, 5, 2]
)
Construtor padrão: dados desconhecidos
pd.Series(
name="Compras",
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
)
Construtor padrão: valor padrão
pd.Series(
name="Compras",
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
data="fill here"
)
Recebendo um Dicionário
s = pd.Series({"Leite": 2, "Ovos": 12, "Carne": 1, "Arroz": 5, "Feijão": 2})
s.name = "Compras"
s
Recebendo uma Lista
s = pd.Series([2, 12, 1, 5, 2])
s
editando parâmetros
s.name="Compras"
s.index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
s
Ordenação: Índices
s.sort_index()
Ordenação: Dados
s.sort_values(ascending=False)
Explanation: Construction
Series
End of explanation
Construtor padrão
pd.DataFrame(
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
columns=["quantidade", "unidade"],
data=[
[ 2, "L"],
[12, "Ud"],
[ 1, "Kg"],
[ 5, "Kg"],
[ 2, "Kg"]
]
)
Construtor padrão: dados desconhecidos
pd.DataFrame(
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
columns=["quantidade", "unidade"]
)
Construtor padrão: valor padrão
pd.DataFrame(
index=["Leite", "Ovos", "Carne", "Arroz", "Feijão"],
columns=["quantidade", "unidade"],
data="?"
)
Recebendo um Dicionário
pd.DataFrame(
{
"quantidade": {
"Leite": 2,
"Ovos": 12,
"Carne": 1,
"Arroz": 5,
"Feijão": 2
},
"unidade": {
"Leite": "L",
"Ovos": "Ud",
"Carne": "Kg",
"Arroz": "Kg",
"Feijão": "Kg"
}
}
)
Recebendo um Dicionário de Series
index = ["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
pd.DataFrame(
{
"quantidade": pd.Series(index=index, data=[2, 12, 1, 5, 2]),
"unidade": pd.Series(index=index, data=["L", "Ud", "Kg", "Kg", "Kg"])
}
)
Recebendo um vetor de Series
index = ["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
df = pd.DataFrame(
[
pd.Series(name="quantidade", index=index, data=[2, 12, 1, 5, 2]),
pd.Series(name="unidade", index=index, data=["L", "Ud", "Kg", "Kg", "Kg"])
]
)
df
Transpondo para ajustar a Tabela
df = df.T
df
editando parâmetros
df.index = ["Leite tipo A", "Ovos Orgânicos", "Patinho", "Arroz Arbóreo", "Feijão Preto"]
df.columns = ["Quantidade", "Unidade"]
df
Ordenação: Índices
df.sort_index()
Ordenação: Dados
df.sort_values(by="Unidade", ascending=False)
Explanation: DataFrame
End of explanation
index = pd.Index(data=["Leite", "Ovos", "Carne", "Arroz", "Feijão"], name="Itens")
index
sq = pd.Series(index=index, data=[2, 12, 1, 5, 2]).sort_values()
sq
su = pd.Series(index=index, data=["L", "Ud", "Kg", "Kg", "Kg"]).sort_index()
su
df = pd.DataFrame({"Quantidade": sq, "Unidade": su}).sort_values(by="Unidade")
df
df["Preço p/ Ud"] = [5.00, 29.99, 6.50, 3.30, 0.50]
df["Preço Total"] = [25.00, 29.99, 13.00, 6.60, 6.00]
df
Explanation: Accessing values
Defining the Variables
End of explanation
sq
sq[2]
sq[5:2:-1]
sq["Leite"]
sq["Leite":"Arroz"]
Explanation: Slicing
Series
End of explanation
df
df["Unidade"]
df.Quantidade
Uma Coluna do DataFrame é uma Series
df["Preço Total"][2]
Acesso a mais de uma Coluna
df[["Preço Total", "Quantidade"]]
acesso às Linhas: método 'loc'
df.loc["Leite"]
acesso ao item: método 'loc'
df.loc["Ovos", "Preço Total"]
acesso ao item: método 'iloc'
df.iloc[4, 3]
acesso por slice: método 'loc'
df.loc["Leite":, "Preço p/ Ud":]
acesso por slice: método 'iloc'
df.iloc[3:, 2:]
Explanation: DataFrame
End of explanation
Atribuir Valores em 'slices' levanta warnings
df["Unidade"][[0, 2]] = "Pacote"
df
Deve-se usar 'loc' ou 'iloc'
df.loc["Carne", "Unidade"] = "Kilograma"
df.iloc[3, 1] = "Litro"
df
Explanation: * Assigning Values in DataFrames
End of explanation
mask => array de bool
sq > 2
mask => array de bool
df > 2
Explanation: Masks
Concept
End of explanation
atribuição de valores em uma cópia
s_tmp = sq.copy()
s_tmp
s_tmp[s_tmp == 2]
s_tmp[s_tmp == 2] = 3
s_tmp
Explanation: Application
* Series
End of explanation
atribuição de valores em uma cópia
df_tmp = df[["Preço p/ Ud", "Preço Total"]].copy()
df_tmp
mask
mask = (df_tmp > 5) & (df_tmp < 10)
mask
df_tmp[mask]
tmp2 = df_tmp.copy()
tmp2[mask] = "?"
tmp2
s_tmp[s_tmp == 2] = 3
s_tmp
Explanation: * DataFrame
End of explanation
df = pd.DataFrame(
index=pd.Index(data=["Leite", "Ovos", "Carne", "Arroz", "Feijão"], name="Itens"),
columns=["Unidade", "Quantidade", "Preço Unitário"],
data=np.array([
["Litro", "Dúzia", "Kilograma", "Kilograma", "Kilograma"],
[4, 3, 1, 5, 2],
[3.00, 6.50, 25.90, 5.00, 3.80]
]).T,
)
df
verificando dtypes
df.dtypes
# Conversion is needed because pandas interprets 'mixed types' as strings
df[["Quantidade", "Preço Unitário"]] = df[["Quantidade", "Preço Unitário"]].astype(float)
df
verificando dtypes
df.dtypes
Explanation: Vectorized Operations
Defining the Variables
End of explanation
df["Preço Unitário"] += 1.
df
Explanation: Numerical Manipulations
* Incrementing the Unit Price
End of explanation
df["Preço Unitário"] *= 0.90
df
Explanation: * 10% Discount on the Unit Price
End of explanation
df["Preço Total"] = df["Preço Unitário"] * df["Quantidade"]
df
Explanation: * Computing the Total Price per Item
End of explanation
df["Preço Médio Por Kg"] = np.nan
df
mask = df["Unidade"] == "Kilograma"
df[mask]
df.loc[mask, "Preço Médio Por Kg"] = (df.loc[mask, "Preço Unitário"] / df.loc[mask, "Quantidade"]).sum()
df
Explanation: * Computing the Price per Kg
End of explanation
df.fillna(0)
Explanation: * Filling NaNs
End of explanation
df.sum()
Explanation: * Sum: numeric values only
End of explanation
df.mean()
Explanation: * Mean: numeric values only
End of explanation
df.std()
Explanation: * Standard Deviation: numeric values only
End of explanation
df.median()
Explanation: * Median: numeric values only
End of explanation
df.mode()
Explanation: * Mode (most frequent values): all value types
End of explanation
cols=["c1", "c2", "c3", "c4", "c5"]
data = np.random.rand(100, 5)
data *= np.array([ 10, 20, 30, 40, 50])
data += np.array([100, 200, 300, 400, 500])
data = np.ceil(data)
df = pd.DataFrame(columns=cols, data=data)
df.head(10)
Explanation: Data Analysis
Defining the Variables
End of explanation
descrevendo as distribuições dos dados
df.describe()
mesma coisa, manipulando os percentis
df.describe(percentiles=[0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
Verificando os valores únicos de C3
df.c3.unique()
Verificando a frequencia dos valores únicos de C3
df.c3.value_counts()
Explanation: Describing the data
End of explanation
Não altere esse valor, pois ele permite que toda a geração aleatória seja igual para todos
np.random.seed(123456789)
Explanation: Challenge 1
Objective:
Analyze how outliers distort a normal distribution.
Settings
End of explanation
Dataset Original, já criado para a solução
media = 150
sigma = 10
serie = pd.Series(np.random.randn(1000)) * sigma + media
Explanation: Original Dataset
The original dataset is a Series with 1000 elements whose values belong to a normal distribution with mean 150 and standard deviation 10.
Construction: the function np.random.randn is used to generate the normal distribution, which is then transformed with the given mean and sigma.
End of explanation
accum = pd.DataFrame(
index=range(2600),
columns=["original"],
data=serie
)
accum.head().append(accum.tail())
Explanation: The Accumulator
The accumulator is a DataFrame used to accumulate the transformations applied to the original dataset. Each transformation will be stored in a column whose name describes the transformation applied to the data.
Insert the dataset you created into the column named original.
End of explanation
Escreva a a Solução Aqui
Explanation: Inserting data
For each item below, create a normally distributed dataset containing N elements, using the mean and sigma also provided by the item.
Then, concatenate the newly generated elements to the original Series using the code below:
series_original = series_original.append(nova_series).reset_index(drop=True)
After that, insert the updated Series into the accumulator in a column with the column name given in each item.
[ A ] Elements from the same distribution
N = 300
mean = 150
sigma = 10
coluna = "mesma_distribuição"
End of explanation
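A hedged sketch of what item [ A ] might look like, following the append/reset_index pattern given above (in pandas >= 2.0 you would use pd.concat instead of Series.append):
# Sketch for item [ A ]: 300 extra points drawn from the same distribution.
new_part = pd.Series(np.random.randn(300)) * 10 + 150
serie = serie.append(new_part).reset_index(drop=True)
accum["mesma_distribuição"] = serie
accum.describe()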
Escreva a a Solução Aqui
Explanation: [ B ] Elements from another distribution
N = 100
média = 400
sigma = 100
coluna = "outliers_adicionados"
End of explanation
Escreva a a Solução Aqui
Explanation: [ C ] Elements close to the mean
N = 1000
média = 150
sigma = 0.1
coluna = "elementos_prox_a_media"
End of explanation
Escreva a a Solução Aqui
Explanation: Evaluating the Series:
Evaluate the accumulator and check what has changed in the original distribution.
End of explanation
classes = ["Leite", "Ovos", "Carne", "Arroz", "Feijão"]
labels = pd.Series(np.random.choice(classes, 100))
Explanation: Challenge 2
Objective:
Implement one-hot encoding (OneHotEncoding), a technique widely used in Machine Learning to encode categorical data in a form that machine-learning algorithms can work with.
Exemplo:
```
original = pd.Series([
"classe_1",
"classe_1",
"classe_2",
"classe_2",
"classe_1",
"classe_2",
])
---
encoded = pd.DataFrame(
columns=["classe_1", "classe_2"],
data=[
[1, 0],
[1, 0],
[0, 1],
[0, 1],
[1, 0],
[0, 1]
]
)
```
Série Original:
End of explanation
Escreva a a Solução Aqui
Explanation: Encoded Dataset:
End of explanation |
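A minimal sketch of one way to approach Challenge 2: pd.get_dummies is the built-in shortcut, and a manual version that compares against each class makes the definition explicit.
# Built-in shortcut:
encoded = pd.get_dummies(labels)

# Manual version, closer to the definition of one-hot encoding:
encoded_manual = pd.DataFrame({cls: (labels == cls).astype(int) for cls in classes})
encoded_manual.head()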
9,072 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AST 337 In-Class Lab #1
Wednesday, September 6, 2017
In this lab, you'll learn to read in and manipulate tabular data with the python package pandas and plot that data with the plotting module matplotlib.pyplot.
On the science end, you will compare H-R diagrams for two different open star clusters, a globular cluster, and a population of field (non-cluster) stars.
Step1: First off, we'll need to read in the data with the pandas function read_csv. A basic example is given below
Step2: "pleiades" is now a pandas dataframe object, which is essentially a form of python table. To see what's stored in pleiades, execute the cell below. Anywhere you see ..., that means that there are a number of additional columns or rows that have been hidden.
Step3: Perhaps more useful are the pandas .columns and .dtypes methods. Execute the cells below and then edit the descriptions of what each does in the cell below (double click on this text to get into the markdown cell, where you can type regular text)
(1) The .columns method does....
(2) The .dtypes method does....
Step4: The two columns that we care about for this lab are the Temperature (Teff) and the Luminosity (Lbol). The units for these two columns are Kelvin and Solar luminosities, respectively. As you will label these later in your plots, I'll note here that there's a special trick for getting the sun symbol in a Markdown cell using the typesetting system LaTeX.
Solar Luminosities = L$_{\odot}$
Double click on this cell to see how I made the symbol above.
Let's create a pandas series that we can manipulate from each of the columns of interest.
Step5: Note though from your .dtypes output above that both of these columns have dtype "object", which is not a data type that will allow us to manipulate them. For example, try executing the cell below, where we attempt to subtract the value 2 from each entry in the "pleiades_L" column. You should get an error...
Step6: So we want to convert the type of these pandas series to be numeric, which we do with pandas.to_numeric, as below
Step7: Now let's print the first ten elements of each array to verify that nothing weird happened during this conversion.
Step8: Now let's plot these two quantities against one another. There are many ways to plot in pyplot, but I'll use the one that I find to work most consistently and intuitively below.
Step9: Yikes, that's ugly, right? That's because the default plot symbol is a line connecting all the points. In this case what we really want is a so-called scatterplot, which we can do easily by specifying the plotting marker right after the y variable, as below. Below I use 'o', which stands for the circle symbol. For the full list of matplotlib symbols, see this link.
Step10: Note the default plotting color is blue, but you can change this easily by adding a color shorthand before the marker shorthand. Below I use 'g' for green, but here again, there are lots of options, as outlined at this link.
Step11: Colors notwithstanding, this plot is still very ugly and should not look much like an H-R diagram to you. For one thing, H-R diagrams usually have log(Luminosity) on the y-axis, which you do as follows
Step12: OK that's much nicer, but should still look backwards to you, because we always draw H-R Diagrams with the Temperature axis running from high to low temperature. This is a pretty easy fix too.
Step13: OK! This is starting to look like a good H-R diagram, but without a plot title or axis labels, it's still not very good, so let's add those.
Step14: Exercise 1
Now it's time for you to do some exploring with the rest of the data. For each of the remaining four data files in the Lab 1 directory
Step15: Exercise 2 (Multi-Panel Plots)
The individual plots for each of these clusters are nice, but really we'd like to be able to compare them side-by-side. You can do this by making either a multi-panel plot, or an overlapping plot. A skeleton outline of how to do each is below. Please use this as a framework to make similar plots with our data.
Step16: Exercise 3 (Comprehension Questions)
1) Describe the differences between the H-R diagrams. Do this both qualitatively (note differences in their appearance) and quantitatively. You might find methods like .min and .max to be useful here.
2) Which of the four groups of stars has stars that are NOT on the main sequence, and which don't? Why, do you think?
3) Why do you think the low temperature (M-dwarf) end of the main sequence cuts off at such a different place in the four samples? Do you think this is a physical effect or an instrumental one and why?
4) Why do you think there are no white dwarfs in any of the samples besides the field sample? | Python Code:
#load packages
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: AST 337 In-Class Lab #1
Wednesday, September 6, 2017
In this lab, you'll learn to read in and manipulate tabular data with the python package pandas and plot that data with the plotting module matplotlib.pyplot.
On the science end, you will compare H-R diagrams for two different open star clusters, a globular cluster, and a population of field (non-cluster) stars.
End of explanation
pleiades = pd.read_csv('pleiades.csv')
Explanation: First off, we'll need to read in the data with the pandas function read_csv. A basic example is given below:
End of explanation
pleiades
Explanation: "pleiades" is now a pandas dataframe object, which is essentially a form of python table. To see what's stored in pleiades, execute the cell below. Anywhere you see ..., that means that there are a number of additional columns or rows that have been hidden.
End of explanation
pleiades.columns
pleiades.dtypes
Explanation: Perhaps more useful are the pandas .columns and .dtypes methods. Execute the cells below and then edit the descriptions of what each does in the cell below (double click on this text to get into the markdown cell, where you can type regular text)
(1) The .columns method does....
(2) The .dtypes method does....
End of explanation
pleiades_L = pleiades["Lbol"]
pleiades_T = pleiades["Teff"]
Explanation: The two columns that we care about for this lab are the Temperature (Teff) and the Luminosity (Lbol). The units for these two columns are Kelvin and Solar luminosities, respectively. As you will label these later in your plots, I'll note here that there's a special trick for getting the sun symbol in a Markdown cell using the typesetting system LaTeX.
Solar Luminosities = L$_{\odot}$
Double click on this cell to see how I made the symbol above.
Let's create a pandas series that we can manipulate from each of the columns of interest.
End of explanation
pleiades_L = pleiades_L - 2
Explanation: Note though from your .dtypes output above that both of these columns have dtype "object", which is not a data type that will allow us to manipulate them. For example, try executing the cell below, where we attempt to subtract the value 2 from each entry in the "pleiades_L" column. You should get an error...
End of explanation
pleiades_L_new = pd.to_numeric(pleiades_L, errors='coerce')
pleiades_T_new = pd.to_numeric(pleiades_T, errors='coerce')
# With "coerce", we are telling the to_numeric function to change any invalid entries to NaNs.
Explanation: So we want to convert the type of these pandas series to be numeric, which we do with pandas.to_numeric, as below:
End of explanation
pleiades_L[0:10], pleiades_L_new[0:10], pleiades_T[0:10], pleiades_T_new[0:10]
Explanation: Now let's print the first ten elements of each array to verify that nothing weird happened during this conversion.
End of explanation
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new)
Explanation: Now let's plot these two quantities against one another. There are many ways to plot in pyplot, but I'll use the one that I find to work most consistently and intuitively below.
End of explanation
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'o')
Explanation: Yikes, that's ugly, right? That's because the default plot symbol is a line connecting all the points. In this case what we really want is a so-called scatterplot, which we can do easily by specifying the plotting marker right after the y variable, as below. Below I use 'o', which stands for the circle symbol. For the full list of matplotlib symbols, see this link.
End of explanation
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'go')
Explanation: Note the default plotting color is blue, but you can change this easily by adding a color shorthand before the marker shorthand. Below I use 'g' for green, but here again, there are lots of options, as outlined at this link.
End of explanation
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'go')
ax.set_yscale('log')
Explanation: Colors notwithstanding, this plot is still very ugly and should not look much like an H-R diagram to you. For one thing, H-R diagrams usually have log(Luminosity) on the y-axis, which you do as follows:
End of explanation
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'go')
ax.set_yscale('log')
ax.set_xlim(11000,1000)
Explanation: OK that's much nicer, but should still look backwards to you, because we always draw H-R Diagrams with the Temperature axis running from high to low temperature. This is a pretty easy fix too.
End of explanation
fig,ax = plt.subplots(figsize=(7,7))
ax.plot(pleiades_T_new, pleiades_L_new, 'go')
ax.set_yscale('log')
ax.set_xlim(11000,1000)
plt.title('H-R Diagram for the Pleiades')
plt.xlabel('Temperature (in K)')
plt.ylabel('log(Luminosity (in L$_{\odot}$))')
Explanation: OK! This is starting to look like a good H-R diagram, but without a plot title or axis labels, it's still not very good, so let's add those.
End of explanation
## Add code here to read in the other three files in the Lab 1 directory, and give them descriptive variable names.
## Add code here to identify the column labels for luminosity and temperature. Be careful - columns may not have
## the same names, and be sure to check the units of the quantities.
## Convert to pandas series and data types, if necessary.
## Plot the data in this cell and the following cells for each sample.
Explanation: Exercise 1
Now it's time for you to do some exploring with the rest of the data. For each of the remaining four data files in the Lab 1 directory:
1) read in the data
2) find the columns for luminosity and temperature
(note that some of the raw data has units of log(L) or log(T), which you'll have to "undo" to get the regular units of L and T. Refer to the Homework0 Exercise #1 if you can't remember how)
3) assign the relevant columns to pandas series and convert data types to numeric as necessary
4) make a plot of the data with appropriate axis labels, ranges, a plot title, etc.
End of explanation
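As a hedged hint of the general pattern only (the column names differ between files, so they are passed in as arguments rather than assumed):
def load_cluster(path, lum_col, temp_col, lum_is_log=False, temp_is_log=False):
    # Generic loader: read a data file and return numeric temperature and luminosity series.
    df = pd.read_csv(path)
    L = pd.to_numeric(df[lum_col], errors='coerce')
    T = pd.to_numeric(df[temp_col], errors='coerce')
    if lum_is_log:
        L = 10**L    # "undo" log(L)
    if temp_is_log:
        T = 10**T    # "undo" log(T)
    return T, L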
#some fake data
data_x = np.arange(0,100)
data_y = 3*data_x
data_y2 = data_x**2
data_y3 = data_x + 20
data_y4 = np.sqrt(data_x)
# multipanel plot example
fig,((ax1,ax2),(ax3,ax4)) = plt.subplots(2, 2, figsize=(10,10))
fig.suptitle('This is a title for my multipanel plot')
ax1.plot(data_x, data_y, 'go')
ax1.set_title('Figure 1 Title')
ax1.set_xlabel('x label')
ax1.set_ylabel('y label')
ax2.plot(data_x, data_y2, 'bo')
ax2.set_title('Figure 2 Title')
ax2.set_xlabel('x label')
ax2.set_ylabel('y label')
ax3.plot(data_x, data_y3, 'ro')
ax3.set_title('Figure 3 Title')
ax3.set_xlabel('x label')
ax3.set_ylabel('y label')
ax4.plot(data_x, data_y4, 'mo')
ax4.set_title('Figure 4 Title')
ax4.set_xlabel('x label')
ax4.set_ylabel('y label')
#overlay plot example
fig,ax = plt.subplots(figsize=(10,10))
plt.title('This is a title for my overlay plot')
ax.plot(data_x, data_y, 'go', label='legend entry 1', alpha=0.5)
ax.plot(data_x, data_y2, 'bo', label='legend entry 2', alpha=0.5)
ax.plot(data_x, data_y3, 'ro', label='legend entry 3', alpha=0.5)
ax.plot(data_x, data_y4, 'mo', label='legend entry 4', alpha=0.5)
ax.set_title('Figure Title')
ax.set_xlabel('x label')
ax.set_ylabel('y label')
plt.legend(numpoints=1)
#TRY EXECUTING WITH AND WITHOUT THE FOLLOWING LINE. HERE AND IN THE DATA YOU'LL BE PLOTTING,
#A SUBJECTIVE DECISION MUST BE MADE ABOUT AXIS RANGES
ax.set_ylim(0,200)
## In this cell, create your own overlay plot showing the different populations. Hint: You may want to plot the
## sample with the most data points first.
## In this cell, create a multi-panel plot for the different populations.
Explanation: Exercise 2 (Multi-Panel Plots)
The individual plots for each of these clusters are nice, but really we'd like to be able to compare them side-by-side. You can do this by making either a multi-panel plot, or an overlapping plot. A skeleton outline of how to do each is below. Please use this as a framework to make similar plots with our data.
End of explanation
## Your answers to each of the four questions here.
Explanation: Exercise 3 (Comprehension Questions)
1) Describe the differences between the H-R diagrams. Do this both qualitatively (note differences in their appearance) and quantitatively. You might find methods like .min and .max to be useful here.
2) Which of the four groups of stars has stars that are NOT on the main sequence, and which don't? Why, do you think?
3) Why do you think the low temperature (M-dwarf) end of the main sequence cuts off at such a different place in the four samples? Do you think this is a physical effect or an instrumental one and why?
4) Why do you think there are no white dwarfs in any of the samples besides the field sample?
End of explanation |
9,073 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: NumPy API on TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Enabling NumPy behavior
In order to use tnp as NumPy, enable NumPy behavior for TensorFlow
Step3: This call enables type promotion in TensorFlow and also changes type inference, when converting literals to tensors, to more strictly follow the NumPy standard.
Note
Step4: Type promotion
TensorFlow NumPy APIs have well-defined semantics for converting literals to ND array, as well as for performing type promotion on ND array inputs. Please see np.result_type for more details.
TensorFlow APIs leave tf.Tensor inputs unchanged and do not perform type promotion on them, while TensorFlow NumPy APIs promote all inputs according to NumPy type promotion rules. In the next example, you will perform type promotion. First, run addition on ND array inputs of different types and note the output types. None of these type promotions would be allowed by TensorFlow APIs.
Step5: Finally, convert literals to ND array using ndarray.asarray and note the resulting type.
Step6: When converting literals to ND array, NumPy prefers wide types like tnp.int64 and tnp.float64. In contrast, tf.convert_to_tensor prefers tf.int32 and tf.float32 types for converting constants to tf.Tensor. TensorFlow NumPy APIs adhere to the NumPy behavior for integers. As for floats, the prefer_float32 argument of experimental_enable_numpy_behavior lets you control whether to prefer tf.float32 over tf.float64 (default to False). For example
Step7: Broadcasting
Similar to TensorFlow, NumPy defines rich semantics for "broadcasting" values.
You can check out the NumPy broadcasting guide for more information and compare this with TensorFlow broadcasting semantics.
Step8: Indexing
NumPy defines very sophisticated indexing rules. See the NumPy Indexing guide. Note the use of ND arrays as indices below.
Step10: Example Model
Next, you can see how to create a model and run inference on it. This simple model applies a relu layer followed by a linear projection. Later sections will show how to compute gradients for this model using TensorFlow's GradientTape.
Step11: TensorFlow NumPy and NumPy
TensorFlow NumPy implements a subset of the full NumPy spec. While more symbols will be added over time, there are systematic features that will not be supported in the near future. These include NumPy C API support, Swig integration, Fortran storage order, views and stride_tricks, and some dtypes (like np.recarray and np.object). For more details, please see the TensorFlow NumPy API Documentation.
NumPy interoperability
TensorFlow ND arrays can interoperate with NumPy functions. These objects implement the __array__ interface. NumPy uses this interface to convert function arguments to np.ndarray values before processing them.
Similarly, TensorFlow NumPy functions can accept inputs of different types including np.ndarray. These inputs are converted to an ND array by calling ndarray.asarray on them.
Conversion of the ND array to and from np.ndarray may trigger actual data copies. Please see the section on buffer copies for more details.
Step12: Buffer copies
Intermixing TensorFlow NumPy with NumPy code may trigger data copies. This is because TensorFlow NumPy has stricter requirements on memory alignment than those of NumPy.
When a np.ndarray is passed to TensorFlow NumPy, it will check for alignment requirements and trigger a copy if needed. When passing an ND array CPU buffer to NumPy, generally the buffer will satisfy alignment requirements and NumPy will not need to create a copy.
ND arrays can refer to buffers placed on devices other than the local CPU memory. In such cases, invoking a NumPy function will trigger copies across the network or device as needed.
Given this, intermixing with NumPy API calls should generally be done with caution and the user should watch out for overheads of copying data. Interleaving TensorFlow NumPy calls with TensorFlow calls is generally safe and avoids copying data. See the section on TensorFlow interoperability for more details.
Operator precedence
TensorFlow NumPy defines an __array_priority__ higher than NumPy's. This means that for operators involving both ND array and np.ndarray, the former will take precedence, i.e., np.ndarray input will get converted to an ND array and the TensorFlow NumPy implementation of the operator will get invoked.
Step13: TF NumPy and TensorFlow
TensorFlow NumPy is built on top of TensorFlow and hence interoperates seamlessly with TensorFlow.
tf.Tensor and ND array
ND array is an alias to tf.Tensor, so obviously they can be intermixed without triggering actual data copies.
Step14: TensorFlow interoperability
An ND array can be passed to TensorFlow APIs, since ND array is just an alias to tf.Tensor. As mentioned earlier, such interoperation does not do data copies, even for data placed on accelerators or remote devices.
Conversely, tf.Tensor objects can be passed to tf.experimental.numpy APIs, without performing data copies.
Step17: Gradients and Jacobians
Step18: Trace compilation
Step19: Vectorization
Step20: Device placement
TensorFlow NumPy can place operations on CPUs, GPUs, TPUs and remote devices. It uses standard TensorFlow mechanisms for device placement. Below a simple example shows how to list all devices and then place some computation on a particular device.
TensorFlow also has APIs for replicating computation across devices and performing collective reductions which will not be covered here.
List devices
tf.config.list_logical_devices and tf.config.list_physical_devices can be used to find what devices to use.
Step21: Placing operations
Step22: Copying ND arrays across devices
Step25: Performance comparisons
TensorFlow NumPy uses highly optimized TensorFlow kernels that can be dispatched on CPUs, GPUs and TPUs. TensorFlow also performs many compiler optimizations, like operation fusion, which translate to performance and memory improvements. See TensorFlow graph optimization with Grappler to learn more.
However TensorFlow has higher overheads for dispatching operations compared to NumPy. For workloads composed of small operations (less than about 10 microseconds), these overheads can dominate the runtime and NumPy could provide better performance. For other cases, TensorFlow should generally provide better performance.
Run the benchmark below to compare NumPy and TensorFlow NumPy performance for different input sizes. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow.experimental.numpy as tnp
import timeit
print("Using TensorFlow version %s" % tf.__version__)
Explanation: NumPy API on TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/tf_numpy"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/tf_numpy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/tf_numpy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/tf_numpy.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
TensorFlow implements a subset of the NumPy API, available as tf.experimental.numpy. This allows running NumPy code, accelerated by TensorFlow, while also allowing access to all of TensorFlow's APIs.
Setup
End of explanation
tnp.experimental_enable_numpy_behavior()
Explanation: Enabling NumPy behavior
In order to use tnp as NumPy, enable NumPy behavior for TensorFlow:
End of explanation
# Create an ND array and check out different attributes.
ones = tnp.ones([5, 3], dtype=tnp.float32)
print("Created ND array with shape = %s, rank = %s, "
"dtype = %s on device = %s\n" % (
ones.shape, ones.ndim, ones.dtype, ones.device))
# `ndarray` is just an alias to `tf.Tensor`.
print("Is `ones` an instance of tf.Tensor: %s\n" % isinstance(ones, tf.Tensor))
# Try commonly used member functions.
print("ndarray.T has shape %s" % str(ones.T.shape))
print("narray.reshape(-1) has shape %s" % ones.reshape(-1).shape)
Explanation: This call enables type promotion in TensorFlow and also changes type inference, when converting literals to tensors, to more strictly follow the NumPy standard.
Note: This call will change the behavior of TensorFlow as a whole, not just the tf.experimental.numpy module.
TensorFlow NumPy ND array
An instance of tf.experimental.numpy.ndarray, called ND Array, represents a multidimensional dense array of a given dtype placed on a certain device. It is an alias to tf.Tensor. Check out the ND array class for useful methods like ndarray.T, ndarray.reshape, ndarray.ravel and others.
First create an ND array object, and then invoke different methods.
End of explanation
print("Type promotion for operations")
values = [tnp.asarray(1, dtype=d) for d in
(tnp.int32, tnp.int64, tnp.float32, tnp.float64)]
for i, v1 in enumerate(values):
for v2 in values[i + 1:]:
print("%s + %s => %s" %
(v1.dtype.name, v2.dtype.name, (v1 + v2).dtype.name))
Explanation: Type promotion
TensorFlow NumPy APIs have well-defined semantics for converting literals to ND array, as well as for performing type promotion on ND array inputs. Please see np.result_type for more details.
TensorFlow APIs leave tf.Tensor inputs unchanged and do not perform type promotion on them, while TensorFlow NumPy APIs promote all inputs according to NumPy type promotion rules. In the next example, you will perform type promotion. First, run addition on ND array inputs of different types and note the output types. None of these type promotions would be allowed by TensorFlow APIs.
End of explanation
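A small illustrative aside (a hedged sketch, not part of the original guide): NumPy's np.result_type, mentioned above, predicts the promoted dtype that the TensorFlow NumPy operations should produce for a given pair of input types.
# Sketch: np.result_type predicts the promoted dtype used by tnp operations.
for d1, d2 in [(np.int32, np.int64), (np.int32, np.float32), (np.float32, np.float64)]:
  predicted = np.result_type(d1, d2)
  actual = (tnp.asarray(1, dtype=d1) + tnp.asarray(1, dtype=d2)).dtype
  print("%s + %s -> predicted %s, got %s" % (d1.__name__, d2.__name__, predicted, actual.name))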
print("Type inference during array creation")
print("tnp.asarray(1).dtype == tnp.%s" % tnp.asarray(1).dtype.name)
print("tnp.asarray(1.).dtype == tnp.%s\n" % tnp.asarray(1.).dtype.name)
Explanation: Finally, convert literals to ND array using ndarray.asarray and note the resulting type.
End of explanation
tnp.experimental_enable_numpy_behavior(prefer_float32=True)
print("When prefer_float32 is True:")
print("tnp.asarray(1.).dtype == tnp.%s" % tnp.asarray(1.).dtype.name)
print("tnp.add(1., 2.).dtype == tnp.%s" % tnp.add(1., 2.).dtype.name)
tnp.experimental_enable_numpy_behavior(prefer_float32=False)
print("When prefer_float32 is False:")
print("tnp.asarray(1.).dtype == tnp.%s" % tnp.asarray(1.).dtype.name)
print("tnp.add(1., 2.).dtype == tnp.%s" % tnp.add(1., 2.).dtype.name)
Explanation: When converting literals to ND array, NumPy prefers wide types like tnp.int64 and tnp.float64. In contrast, tf.convert_to_tensor prefers tf.int32 and tf.float32 types for converting constants to tf.Tensor. TensorFlow NumPy APIs adhere to the NumPy behavior for integers. As for floats, the prefer_float32 argument of experimental_enable_numpy_behavior lets you control whether to prefer tf.float32 over tf.float64 (default to False). For example:
End of explanation
x = tnp.ones([2, 3])
y = tnp.ones([3])
z = tnp.ones([1, 2, 1])
print("Broadcasting shapes %s, %s and %s gives shape %s" % (
x.shape, y.shape, z.shape, (x + y + z).shape))
Explanation: Broadcasting
Similar to TensorFlow, NumPy defines rich semantics for "broadcasting" values.
You can check out the NumPy broadcasting guide for more information and compare this with TensorFlow broadcasting semantics.
End of explanation
x = tnp.arange(24).reshape(2, 3, 4)
print("Basic indexing")
print(x[1, tnp.newaxis, 1:3, ...], "\n")
print("Boolean indexing")
print(x[:, (True, False, True)], "\n")
print("Advanced indexing")
print(x[1, (0, 0, 1), tnp.asarray([0, 1, 1])])
# Mutation is currently not supported
try:
tnp.arange(6)[1] = -1
except TypeError:
print("Currently, TensorFlow NumPy does not support mutation.")
Explanation: Indexing
NumPy defines very sophisticated indexing rules. See the NumPy Indexing guide. Note the use of ND arrays as indices below.
End of explanation
class Model(object):
Model with a dense and a linear layer.
def __init__(self):
self.weights = None
def predict(self, inputs):
if self.weights is None:
size = inputs.shape[1]
# Note that type `tnp.float32` is used for performance.
stddev = tnp.sqrt(size).astype(tnp.float32)
w1 = tnp.random.randn(size, 64).astype(tnp.float32) / stddev
bias = tnp.random.randn(64).astype(tnp.float32)
w2 = tnp.random.randn(64, 2).astype(tnp.float32) / 8
self.weights = (w1, bias, w2)
else:
w1, bias, w2 = self.weights
y = tnp.matmul(inputs, w1) + bias
y = tnp.maximum(y, 0) # Relu
return tnp.matmul(y, w2) # Linear projection
model = Model()
# Create input data and compute predictions.
print(model.predict(tnp.ones([2, 32], dtype=tnp.float32)))
Explanation: Example Model
Next, you can see how to create a model and run inference on it. This simple model applies a relu layer followed by a linear projection. Later sections will show how to compute gradients for this model using TensorFlow's GradientTape.
End of explanation
# ND array passed into NumPy function.
np_sum = np.sum(tnp.ones([2, 3]))
print("sum = %s. Class: %s" % (float(np_sum), np_sum.__class__))
# `np.ndarray` passed into TensorFlow NumPy function.
tnp_sum = tnp.sum(np.ones([2, 3]))
print("sum = %s. Class: %s" % (float(tnp_sum), tnp_sum.__class__))
# It is easy to plot ND arrays, given the __array__ interface.
labels = 15 + 2 * tnp.random.randn(1, 1000)
_ = plt.hist(labels)
Explanation: TensorFlow NumPy and NumPy
TensorFlow NumPy implements a subset of the full NumPy spec. While more symbols will be added over time, there are systematic features that will not be supported in the near future. These include NumPy C API support, Swig integration, Fortran storage order, views and stride_tricks, and some dtypes (like np.recarray and np.object). For more details, please see the TensorFlow NumPy API Documentation.
NumPy interoperability
TensorFlow ND arrays can interoperate with NumPy functions. These objects implement the __array__ interface. NumPy uses this interface to convert function arguments to np.ndarray values before processing them.
Similarly, TensorFlow NumPy functions can accept inputs of different types including np.ndarray. These inputs are converted to an ND array by calling ndarray.asarray on them.
Conversion of the ND array to and from np.ndarray may trigger actual data copies. Please see the section on buffer copies for more details.
End of explanation
x = tnp.ones([2]) + np.ones([2])
print("x = %s\nclass = %s" % (x, x.__class__))
Explanation: Buffer copies
Intermixing TensorFlow NumPy with NumPy code may trigger data copies. This is because TensorFlow NumPy has stricter requirements on memory alignment than those of NumPy.
When a np.ndarray is passed to TensorFlow NumPy, it will check for alignment requirements and trigger a copy if needed. When passing an ND array CPU buffer to NumPy, generally the buffer will satisfy alignment requirements and NumPy will not need to create a copy.
ND arrays can refer to buffers placed on devices other than the local CPU memory. In such cases, invoking a NumPy function will trigger copies across the network or device as needed.
Given this, intermixing with NumPy API calls should generally be done with caution and the user should watch out for overheads of copying data. Interleaving TensorFlow NumPy calls with TensorFlow calls is generally safe and avoids copying data. See the section on TensorFlow interoperability for more details.
Operator precedence
TensorFlow NumPy defines an __array_priority__ higher than NumPy's. This means that for operators involving both ND array and np.ndarray, the former will take precedence, i.e., np.ndarray input will get converted to an ND array and the TensorFlow NumPy implementation of the operator will get invoked.
End of explanation
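As an extra check (an illustrative sketch, not from the original guide), the same precedence applies when the np.ndarray operand is on the left: NumPy defers to the operand with the higher __array_priority__, so the result is still an ND array.
# Sketch: np.ndarray on the left still defers to the ND array implementation.
y = np.ones([2]) + tnp.ones([2])
print("y = %s\nclass = %s" % (y, y.__class__))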
x = tf.constant([1, 2])
print(x)
# `asarray` and `convert_to_tensor` here are no-ops.
tnp_x = tnp.asarray(x)
print(tnp_x)
print(tf.convert_to_tensor(tnp_x))
# Note that tf.Tensor.numpy() will continue to return `np.ndarray`.
print(x.numpy(), x.numpy().__class__)
Explanation: TF NumPy and TensorFlow
TensorFlow NumPy is built on top of TensorFlow and hence interoperates seamlessly with TensorFlow.
tf.Tensor and ND array
ND array is an alias to tf.Tensor, so obviously they can be intermixed without triggering actual data copies.
End of explanation
# ND array passed into TensorFlow function.
tf_sum = tf.reduce_sum(tnp.ones([2, 3], tnp.float32))
print("Output = %s" % tf_sum)
# `tf.Tensor` passed into TensorFlow NumPy function.
tnp_sum = tnp.sum(tf.ones([2, 3]))
print("Output = %s" % tnp_sum)
Explanation: TensorFlow interoperability
An ND array can be passed to TensorFlow APIs, since ND array is just an alias to tf.Tensor. As mentioned earlier, such interoperation does not do data copies, even for data placed on accelerators or remote devices.
Conversely, tf.Tensor objects can be passed to tf.experimental.numpy APIs, without performing data copies.
End of explanation
def create_batch(batch_size=32):
Creates a batch of input and labels.
return (tnp.random.randn(batch_size, 32).astype(tnp.float32),
tnp.random.randn(batch_size, 2).astype(tnp.float32))
def compute_gradients(model, inputs, labels):
Computes gradients of squared loss between model prediction and labels.
with tf.GradientTape() as tape:
assert model.weights is not None
# Note that `model.weights` need to be explicitly watched since they
# are not tf.Variables.
tape.watch(model.weights)
# Compute prediction and loss
prediction = model.predict(inputs)
loss = tnp.sum(tnp.square(prediction - labels))
# This call computes the gradient through the computation above.
return tape.gradient(loss, model.weights)
inputs, labels = create_batch()
gradients = compute_gradients(model, inputs, labels)
# Inspect the shapes of returned gradients to verify they match the
# parameter shapes.
print("Parameter shapes:", [w.shape for w in model.weights])
print("Gradient shapes:", [g.shape for g in gradients])
# Verify that gradients are of type ND array.
assert isinstance(gradients[0], tnp.ndarray)
# Computes a batch of jacobians. Each row is the jacobian of an element in the
# batch of outputs w.r.t. the corresponding input batch element.
def prediction_batch_jacobian(inputs):
with tf.GradientTape() as tape:
tape.watch(inputs)
prediction = model.predict(inputs)
return prediction, tape.batch_jacobian(prediction, inputs)
inp_batch = tnp.ones([16, 32], tnp.float32)
output, batch_jacobian = prediction_batch_jacobian(inp_batch)
# Note how the batch jacobian shape relates to the input and output shapes.
print("Output shape: %s, input shape: %s" % (output.shape, inp_batch.shape))
print("Batch jacobian shape:", batch_jacobian.shape)
Explanation: Gradients and Jacobians: tf.GradientTape
TensorFlow's GradientTape can be used for backpropagation through TensorFlow and TensorFlow NumPy code.
Use the model created in Example Model section, and compute gradients and jacobians.
End of explanation
inputs, labels = create_batch(512)
print("Eager performance")
compute_gradients(model, inputs, labels)
print(timeit.timeit(lambda: compute_gradients(model, inputs, labels),
number=10) * 100, "ms")
print("\ntf.function compiled performance")
compiled_compute_gradients = tf.function(compute_gradients)
compiled_compute_gradients(model, inputs, labels) # warmup
print(timeit.timeit(lambda: compiled_compute_gradients(model, inputs, labels),
number=10) * 100, "ms")
Explanation: Trace compilation: tf.function
TensorFlow's tf.function works by "trace compiling" the code and then optimizing these traces for much faster performance. See the Introduction to Graphs and Functions.
tf.function can be used to optimize TensorFlow NumPy code as well. Here is a simple example to demonstrate the speedups. Note that the body of tf.function code includes calls to TensorFlow NumPy APIs.
End of explanation
@tf.function
def vectorized_per_example_gradients(inputs, labels):
def single_example_gradient(arg):
inp, label = arg
return compute_gradients(model,
tnp.expand_dims(inp, 0),
tnp.expand_dims(label, 0))
# Note that a call to `tf.vectorized_map` semantically maps
# `single_example_gradient` over each row of `inputs` and `labels`.
# The interface is similar to `tf.map_fn`.
# The underlying machinery vectorizes away this map loop which gives
# nice speedups.
return tf.vectorized_map(single_example_gradient, (inputs, labels))
batch_size = 128
inputs, labels = create_batch(batch_size)
per_example_gradients = vectorized_per_example_gradients(inputs, labels)
for w, p in zip(model.weights, per_example_gradients):
print("Weight shape: %s, batch size: %s, per example gradient shape: %s " % (
w.shape, batch_size, p.shape))
# Benchmark the vectorized computation above and compare with
# unvectorized sequential computation using `tf.map_fn`.
@tf.function
def unvectorized_per_example_gradients(inputs, labels):
def single_example_gradient(arg):
inp, label = arg
return compute_gradients(model,
tnp.expand_dims(inp, 0),
tnp.expand_dims(label, 0))
return tf.map_fn(single_example_gradient, (inputs, labels),
fn_output_signature=(tf.float32, tf.float32, tf.float32))
print("Running vectorized computation")
print(timeit.timeit(lambda: vectorized_per_example_gradients(inputs, labels),
number=10) * 100, "ms")
print("\nRunning unvectorized computation")
per_example_gradients = unvectorized_per_example_gradients(inputs, labels)
print(timeit.timeit(lambda: unvectorized_per_example_gradients(inputs, labels),
number=10) * 100, "ms")
Explanation: Vectorization: tf.vectorized_map
TensorFlow has inbuilt support for vectorizing parallel loops, which allows speedups of one to two orders of magnitude. These speedups are accessible via the tf.vectorized_map API and apply to TensorFlow NumPy code as well.
It is sometimes useful to compute the gradient of each output in a batch w.r.t. the corresponding input batch element. Such computation can be done efficiently using tf.vectorized_map as shown below.
End of explanation
print("All logical devices:", tf.config.list_logical_devices())
print("All physical devices:", tf.config.list_physical_devices())
# Try to get the GPU device. If unavailable, fallback to CPU.
try:
device = tf.config.list_logical_devices(device_type="GPU")[0]
except IndexError:
device = "/device:CPU:0"
Explanation: Device placement
TensorFlow NumPy can place operations on CPUs, GPUs, TPUs and remote devices. It uses standard TensorFlow mechanisms for device placement. Below a simple example shows how to list all devices and then place some computation on a particular device.
TensorFlow also has APIs for replicating computation across devices and performing collective reductions which will not be covered here.
List devices
tf.config.list_logical_devices and tf.config.list_physical_devices can be used to find what devices to use.
End of explanation
print("Using device: %s" % str(device))
# Run operations in the `tf.device` scope.
# If a GPU is available, these operations execute on the GPU and outputs are
# placed on the GPU memory.
with tf.device(device):
prediction = model.predict(create_batch(5)[0])
print("prediction is placed on %s" % prediction.device)
Explanation: Placing operations: tf.device
Operations can be placed on a device by calling it in a tf.device scope.
End of explanation
with tf.device("/device:CPU:0"):
prediction_cpu = tnp.copy(prediction)
print(prediction.device)
print(prediction_cpu.device)
Explanation: Copying ND arrays across devices: tnp.copy
A call to tnp.copy, placed in a certain device scope, will copy the data to that device, unless the data is already on that device.
End of explanation
def benchmark(f, inputs, number=30, force_gpu_sync=False):
Utility to benchmark `f` on each value in `inputs`.
times = []
for inp in inputs:
def _g():
if force_gpu_sync:
one = tnp.asarray(1)
f(inp)
if force_gpu_sync:
with tf.device("CPU:0"):
tnp.copy(one) # Force a sync for GPU case
_g() # warmup
t = timeit.timeit(_g, number=number)
times.append(t * 1000. / number)
return times
def plot(np_times, tnp_times, compiled_tnp_times, has_gpu, tnp_times_gpu):
Plot the different runtimes.
plt.xlabel("size")
plt.ylabel("time (ms)")
plt.title("Sigmoid benchmark: TF NumPy vs NumPy")
plt.plot(sizes, np_times, label="NumPy")
plt.plot(sizes, tnp_times, label="TF NumPy (CPU)")
plt.plot(sizes, compiled_tnp_times, label="Compiled TF NumPy (CPU)")
if has_gpu:
plt.plot(sizes, tnp_times_gpu, label="TF NumPy (GPU)")
plt.legend()
# Define a simple implementation of `sigmoid`, and benchmark it using
# NumPy and TensorFlow NumPy for different input sizes.
def np_sigmoid(y):
return 1. / (1. + np.exp(-y))
def tnp_sigmoid(y):
return 1. / (1. + tnp.exp(-y))
@tf.function
def compiled_tnp_sigmoid(y):
return tnp_sigmoid(y)
sizes = (2 ** 0, 2 ** 5, 2 ** 10, 2 ** 15, 2 ** 20)
np_inputs = [np.random.randn(size).astype(np.float32) for size in sizes]
np_times = benchmark(np_sigmoid, np_inputs)
with tf.device("/device:CPU:0"):
tnp_inputs = [tnp.random.randn(size).astype(np.float32) for size in sizes]
tnp_times = benchmark(tnp_sigmoid, tnp_inputs)
compiled_tnp_times = benchmark(compiled_tnp_sigmoid, tnp_inputs)
has_gpu = len(tf.config.list_logical_devices("GPU"))
if has_gpu:
with tf.device("/device:GPU:0"):
tnp_inputs = [tnp.random.randn(size).astype(np.float32) for size in sizes]
tnp_times_gpu = benchmark(compiled_tnp_sigmoid, tnp_inputs, 100, True)
else:
tnp_times_gpu = None
plot(np_times, tnp_times, compiled_tnp_times, has_gpu, tnp_times_gpu)
Explanation: Performance comparisons
TensorFlow NumPy uses highly optimized TensorFlow kernels that can be dispatched on CPUs, GPUs and TPUs. TensorFlow also performs many compiler optimizations, like operation fusion, which translate to performance and memory improvements. See TensorFlow graph optimization with Grappler to learn more.
However TensorFlow has higher overheads for dispatching operations compared to NumPy. For workloads composed of small operations (less than about 10 microseconds), these overheads can dominate the runtime and NumPy could provide better performance. For other cases, TensorFlow should generally provide better performance.
Run the benchmark below to compare NumPy and TensorFlow NumPy performance for different input sizes.
End of explanation |
9,074 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I started here
Step2: cf. 3.2 Datasets, 3.2.1 MNIST Dataset
Step3: GPU note
Using the GPU
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python myscriptIwanttorunonthegpu.py
From theano's documentation, "Using the GPU", "Only computations with float32 data-type can be accelerated. Better support for float64 is expected in upcoming hardware, but float64 computations are still relatively slow (Jan 2010)." Hence floatX=float32.
I ran the script logistic_sgd.py locally, that's found in DeepLearningTutorials from lisa-lab's github | Python Code:
import theano
import theano.tensor as T
# cf. https://github.com/lisa-lab/DeepLearningTutorials/blob/c4db2098e6620a0ac393f291ec4dc524375e96fd/code/logistic_sgd.py
Explanation: I started here: Deep Learning tutorial
End of explanation
import cPickle, gzip, numpy
import os
os.getcwd()
os.listdir( os.getcwd() )
f = gzip.open('./Data/mnist.pkl.gz')
train_set, valid_set, test_set = cPickle.load(f)
f.close()
type(train_set), type(valid_set), type(test_set)
type(train_set[0]), type(train_set[1])
def shared_dataset(data_xy):
Function that loads the dataset into shared variables
The reason we store our dataset in shared variables is to allow
Theano to copy it into the GPU memory (when code is run on GPU).
Since copying data into the GPU is slow, copying a minibatch every time
is needed (the default behavior if the data is not in a shared
variable) would lead to a large decrease in performance.
data_x, data_y = data_xy
shared_x = theano.shared(numpy.asarray(data_x, dtype=theano.config.floatX))
shared_y = theano.shared(numpy.asarray(data_y, dtype=theano.config.floatX))
# When storing data on the GPU it has to be stored as floats
# therefore we will store the labels as ``floatX`` as well
# (``shared_y`` does exactly that). But during our computations
# we need them as ints (we use labels as index, and if they are
# floats it doesn't make sense) therefore instead of returning
# ``shared_y`` we will have to cast it to int. This little hack
# lets us get around this issue
return shared_x, T.cast(shared_y, 'int32')
test_set_x, test_set_y = shared_dataset(test_set)
valid_set_x, valid_set_y = shared_dataset(valid_set)
train_set_x, train_set_y = shared_dataset(train_set)
batch_size = 500 # size of the minibatch
# accessing the third minibatch of the training set
data = train_set_x[2 * batch_size: 3 * batch_size]
label = train_set_y[2 * batch_size: 3 * batch_size]
dir(train_set_x)
Explanation: cf. 3.2 Datasets, 3.2.1 MNIST Dataset
End of explanation
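A quick sanity check, added here as a hedged sketch (not part of the original notes): the minibatch slices above are symbolic Theano variables, so their values can be pulled back with .eval() to confirm the expected shapes.
# Sketch: evaluate the symbolic minibatch slices to confirm their shapes.
print(data.eval().shape)   # expected (500, 784): 500 flattened 28x28 MNIST images
print(label.eval().shape)  # expected (500,): the matching integer labels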
os.listdir("../DeepLearningTutorials/code")
import subprocess
subprocess.call(['python','../DeepLearningTutorials/code/logistic_sgd.py'])
subprocess.call(['THEANO_FLAGS=device=gpu,floatX=float32 python',
'../DeepLearningTutorials/code/logistic_sgd.py'])
execfile('../DeepLearningTutorials/code/logistic_sgd_b.py')
os.listdir( '../' )
import sklearn
Explanation: GPU note
Using the GPU
THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python myscriptIwanttorunonthegpu.py
From theano's documentation, "Using the GPU", "Only computations with float32 data-type can be accelerated. Better support for float64 is expected in upcoming hardware, but float64 computations are still relatively slow (Jan 2010)." Hence floatX=float32.
I ran the script logistic_sgd.py locally, which is found in DeepLearningTutorials from lisa-lab's GitHub
End of explanation |
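One more hedged sketch (not in the original notes): the THEANO_FLAGS settings above can be verified from inside Python by inspecting theano.config, which reports the device and float dtype actually in use.
# Sketch: confirm which device and float dtype Theano is configured to use.
print(theano.config.device)   # e.g. 'cpu' or 'gpu'
print(theano.config.floatX)   # 'float32' is needed for GPU acceleration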
9,075 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vector space tutorial
The goal of this tutorial is to show how word co-occurrence statistics can be used to build word vectors, such that words that are similar in meaning are also close in the vector space.
Getting started
This is a text cell.
Step2: Task
Step3: A toy example
To demonstrate the idea, we try to cluster few words by their meaning. The words are boy, man, car, brother, uncle, son, father, dad, grandfather, cousin, parent, boss, owner, staff, adult, manager, director, person, kid, girl, woman, doll, sister, aunt, daughter, mother, mom, grandmother, idea, concept, notion, blue and pink.
Task: How would you group these words? Are there words that share the same theme?
Step4: See some of the co-occurrence statistics
Step5: this tells us that idea was seen together with time 258 times in the corpus I've used.
Distances between 'words'
Step6: Task
Step7: Projecting word vectors from 2000 dimensions to 2
We are going to use scikit-learn's Manifold learning implementation.
Step8: Now we have a word vector embedding in a low-dimensional space!
Step9: Task: Do the clusters you see align with your grouping of words?
A bigger example
Step11: Just an example to see what we've got there. | Python Code:
# This is a code cell. It can be executed by pressing CTRL+Enter
print('Hello')
Explanation: Vector space tutorial
The goal of this tutorial is to show how word co-occurrence statistics can be used to build word vectors, such that words that are similar in meaning are also close in the vector space.
Getting started
This is a text cell.
End of explanation
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import pandas
pandas.options.display.max_columns = 11
pandas.options.display.max_rows = 5
import matplotlib
matplotlib.rcParams['font.size'] = 15
matplotlib.rcParams['figure.figsize'] = 15, 9
matplotlib.rcParams['savefig.dpi'] = 227
from random import sample
from urllib.request import urlretrieve
import pandas as pd
import seaborn as sns
import numpy as np
def get_space(url, key='space'):
Download the co-occurrence data.
frame_file, _ = urlretrieve(url)
return pd.read_hdf(frame_file, key=key)
Explanation: Task: modify the cell above so it greets you, in my case the cell output should be Hi, Dima.
Setting up the envinroment
We need couple of things before getting started
End of explanation
# Load the space into the memory
toy_space = get_space(
'http://www.eecs.qmul.ac.uk/~dm303/static/eecs_open14/space_frame_eecs14.h5'
)
Explanation: A toy example
To demonstrate the idea, we try to cluster few words by their meaning. The words are boy, man, car, brother, uncle, son, father, dad, grandfather, cousin, parent, boss, owner, staff, adult, manager, director, person, kid, girl, woman, doll, sister, aunt, daughter, mother, mom, grandmother, idea, concept, notion, blue and pink.
Task: How would you group these words? Are there words that share the same theme?
End of explanation
# So far we are interested in just these words
interesting_words = ['idea', 'notion', 'boy', 'girl']
# Query the vector space for the words of interest
toy_space.loc[interesting_words]
Explanation: See some of the co-occrrence statistics
End of explanation
# We are going to use pairwise_distances function from the sklearn package
from sklearn.metrics.pairwise import pairwise_distances
# Compute distances for the words of interest
distances = pairwise_distances(
toy_space.loc[interesting_words].values,
metric='cosine',
)
# Show the result
np.round(
pd.DataFrame(distances, index=interesting_words, columns=interesting_words),
3,
)
Explanation: this tells us that idea was seen together with time 258 times in the corpus I've used.
Distances between 'words'
End of explanation
# np.exp(-distances) is a fancy way of converting distances to similarities
pd.DataFrame(np.exp(-distances), index=interesting_words, columns=interesting_words)
Explanation: Task: change metric='cosine' to metric='euclidean'. How will distances change? Why is cosine distance preferred to Euclidean?
http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise.pairwise_distances.html
Word similarity
Similarity 1 means that items are identical, 0 means that they are different. It's possible to convert distances to similarities, we use np.exp(-distances) here.
End of explanation
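A small sketch of why cosine distance is usually preferred here (added for illustration, not part of the original tutorial; the vectors below are made up): cosine ignores vector length, so a word with twice the raw counts but the same co-occurrence profile stays at distance 0, while the Euclidean distance grows with the counts.
# Sketch: cosine distance ignores overall magnitude, Euclidean does not.
v = np.array([[1.0, 2.0, 3.0]])
w = 2 * v  # same profile, twice the counts
print(pairwise_distances(np.vstack([v, w]), metric='cosine')[0, 1])     # ~0.0
print(pairwise_distances(np.vstack([v, w]), metric='euclidean')[0, 1])  # ~3.74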
from sklearn import manifold
from sklearn.preprocessing import MinMaxScaler
# clf will be able to "project" word vectors to 2 dimensions
clf = manifold.MDS(n_components=2, dissimilarity='precomputed')
# in X we store the projection results
X = MinMaxScaler().fit_transform( # Normalize the values between 0 and 1 so it's easier to plot.
clf.fit_transform(pairwise_distances(toy_space.values, metric='cosine'))
)
Explanation: Projecting word vectors from 2000 dimensions to 2
We are going to use scikit-learn's Manifold learning implementation.
End of explanation
pd.DataFrame(X, index=toy_space.index)
import pylab as pl
pl.figure()
for word, (x, y) in zip(toy_space.index, X):
pl.text(x, y, word)
pl.tight_layout()
Explanation: Now we have a word vector embedding in a low-dimensional space!
End of explanation
space = get_space(
'http://www.eecs.qmul.ac.uk/~dm303/static/data/bigo_matrix.h5.gz'
)
Explanation: Task: Do the clusters you see align with your grouping of words?
A bigger example
End of explanation
space.loc[
['John', 'Mary', 'girl', 'boy'],
['tree', 'car', 'face', 'England', 'France']
]
def plot(space, words, file_name=None):
Plot the `words` from the given `space`.
cooc = space.loc[words]
missing_words = list(cooc[cooc.isnull().all(axis=1)].index)
assert not missing_words, '{0} are not in the space'.format(missing_words)
distances = pairwise_distances(cooc, metric='cosine')
clf = manifold.MDS(n_components=2, dissimilarity='precomputed', n_jobs=2)
X = MinMaxScaler().fit_transform(
clf.fit_transform(distances)
)
for word, (x, y) in zip(words, X):
pl.text(x, y, word)
pl.tight_layout()
if file_name is not None:
pl.savefig(file_name)
matplotlib.rcParams['font.size'] = 20
x= plot(
space,
(
'red orange pink green blue white yellow black '
'mother father son daughter aunt uncle '
'concept research theory '
'car bus tube road bicycle train '
'karate fight fencing '
'apple company fruit train set '
''.split()
)
)
Explanation: Just an example to see what we've got there.
End of explanation |
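As a final illustrative sketch (not in the original tutorial, and assuming the listed words are all present in the downloaded space), the same co-occurrence matrix can be used to rank words by cosine similarity to a chosen word, which is another way to check that nearby words share a meaning.
# Sketch: rank a few candidate words by cosine similarity to 'car'.
candidates = ['bus', 'train', 'road', 'apple', 'theory']
sims = 1 - pairwise_distances(space.loc[['car']], space.loc[candidates], metric='cosine')[0]
print(sorted(zip(candidates, sims), key=lambda t: -t[1]))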
9,076 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building your Deep Neural Network
Step2: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: <table style="width
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
<table style="width | Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v3 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 time (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = None
parameters['b' + str(l)] = None
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\
m & n & o \
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\
d & e & f \
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \
t \
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
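A tiny numeric sketch of the broadcasting described above (added for illustration; the arrays and _demo names are made up): adding a column vector b to the matrix product W X broadcasts b across the columns, exactly as in equation (3).
# Sketch: b of shape (3, 1) is broadcast across both columns of np.dot(W, X).
W_demo = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
X_demo = np.array([[1., 2.], [3., 4.], [5., 6.]])
b_demo = np.array([[10.], [20.], [30.]])
print(np.dot(W_demo, X_demo) + b_demo)   # each column gets b added to it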
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
Z = None
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
End of explanation
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = None
A, activation_cache = None
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = None
A, activation_cache = None
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
End of explanation
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = None
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = None
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = None
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
Explanation: <table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
End of explanation
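A quick worked example of formula (7) on made-up numbers (an illustrative sketch, not the graded implementation; the _toy names are new here): the cross-entropy terms can be checked by hand with NumPy.
# Sketch: evaluate formula (7) on a toy prediction/label pair.
AL_toy = np.array([[0.8, 0.9, 0.4]])
Y_toy = np.array([[1, 1, 0]])
toy_cost = -np.mean(Y_toy * np.log(AL_toy) + (1 - Y_toy) * np.log(1 - AL_toy))
print(toy_cost)   # about 0.28: the three terms are -log(0.8), -log(0.9), -log(0.6)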
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = None
db = None
dA_prev = None
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]} dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l]})$ are computed using the input $dZ^{[l]}$.Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
End of explanation
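As a hedged illustration of formulas (8)-(10) on toy shapes (not the graded code; the _toy names are new here), the gradient shapes follow directly from the matrix products, which is a useful sanity check before filling in linear_backward.
# Sketch: formulas (8)-(10) give gradients with the expected shapes.
m_toy = 4
A_prev_toy = np.random.randn(3, m_toy)   # (size of previous layer, m)
W_toy = np.random.randn(2, 3)            # (size of current layer, size of previous layer)
dZ_toy = np.random.randn(2, m_toy)       # same shape as Z
dW_toy = np.dot(dZ_toy, A_prev_toy.T) / m_toy
db_toy = np.sum(dZ_toy, axis=1, keepdims=True) / m_toy
dA_prev_toy = np.dot(W_toy.T, dZ_toy)
print(dW_toy.shape, db_toy.shape, dA_prev_toy.shape)   # (2, 3) (2, 1) (3, 4)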
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = None
dA_prev, dW, db = None
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = None
dA_prev, dW, db = None
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
End of explanation
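As a hedged reference (not the graded answer), the two branches above could be filled in as follows, using the provided relu_backward and sigmoid_backward helpers together with linear_backward:
def linear_activation_backward_sketch(dA, cache, activation):
    # cache = (linear_cache, activation_cache) stored by the forward pass
    linear_cache, activation_cache = cache
    if activation == "relu":
        dZ = relu_backward(dA, activation_cache)      # dZ = dA * g'(Z) for ReLU
    elif activation == "sigmoid":
        dZ = sigmoid_backward(dA, activation_cache)   # dZ = dA * g'(Z) for sigmoid
    # the linear part is identical for both activations
    dA_prev, dW, db = linear_backward(dZ, linear_cache)
    return dA_prev, dW, db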
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = None
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = None
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = None
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = None
dA_prev_temp, dW_temp, db_temp = None
grads["dA" + str(l + 1)] = None
grads["dW" + str(l + 1)] = None
grads["db" + str(l + 1)] = None
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
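One possible way to assemble the whole backward pass, following the indexing convention used in the skeleton above (sigmoid branch for layer L, ReLU branch for the earlier layers), is sketched below for reference only:
import numpy as np
def L_model_backward_sketch(AL, Y, caches):
    grads = {}
    L = len(caches)
    Y = Y.reshape(AL.shape)
    # derivative of the cross-entropy cost with respect to AL
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
    # layer L: LINEAR -> SIGMOID
    current_cache = caches[L - 1]
    grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = \
        linear_activation_backward(dAL, current_cache, activation="sigmoid")
    # layers L-1, ..., 1: LINEAR -> RELU, walking backwards
    for l in reversed(range(L - 1)):
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(
            grads["dA" + str(l + 2)], current_cache, activation="relu")
        grads["dA" + str(l + 1)] = dA_prev_temp
        grads["dW" + str(l + 1)] = dW_temp
        grads["db" + str(l + 1)] = db_temp
    return grads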
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
End of explanation |
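A minimal sketch of the update loop implied by formulas (16) and (17) is shown below; treat it as a reference, not the graded solution.
def update_parameters_sketch(parameters, grads, learning_rate):
    # each layer l has a weight matrix W(l) and a bias vector b(l)
    L = len(parameters) // 2
    for l in range(L):
        parameters["W" + str(l + 1)] -= learning_rate * grads["dW" + str(l + 1)]
        parameters["b" + str(l + 1)] -= learning_rate * grads["db" + str(l + 1)]
    return parameters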
9,077 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exercises
Step1: Exercise 1
Step2: b. Spearman Rank Correlation
Find the Spearman rank correlation coefficient for the relationship between x and y using the stats.rankdata function and the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference in rank of the ith pair of x and y values.
Step3: Check your results against scipy's Spearman rank function. stats.spearmanr
Step4: Exercise 2
Step5: b. Non-Monotonic Relationships
First, create a series d using the relationship $d=10c^2 - c + 2$. Then, find the Spearman rank correlation coefficient of the relationship between c and d.
Step7: Exercise 3
Step8: b. Rolling Spearman Rank Correlation
Repeat the above correlation for the first 60 days in the dataframe as opposed to just a single day. You should get a time series of Spearman rank correlations. From this we can start getting a better sense of how the factor correlates with forward returns.
What we're driving towards is known as an information coefficient. This is a very common way of measuring how predictive a model is. All of this plus much more is automated in our open source alphalens library. In order to see alphalens in action you can check out these resources
Step9: b. Rolling Spearman Rank Correlation
Plot out the rolling correlation as a time series, and compute the mean and standard deviation. | Python Code:
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import math
Explanation: Exercises: Spearman Rank Correlation
Lecture Link
This exercise notebook refers to this lecture. Please use the lecture for explanations and sample code.
https://www.quantopian.com/lectures#Spearman-Rank-Correlation
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
github.com/quantopian/research_public
End of explanation
n = 100
x = np.linspace(1, n, n)
y = x**5
#Your code goes here
corr = np.corrcoef(x, y)[1][0]
print corr
plt.plot(x, y);
Explanation: Exercise 1: Finding Correlations of Non-Linear Relationships
a. Traditional (Pearson) Correlation
Find the correlation coefficient for the relationship between x and y.
End of explanation
#Your code goes here
xrank = stats.rankdata(x, method='average')
yrank = stats.rankdata(y, method='average')
diffs = xrank - yrank
spr_corr = 1 - 6*np.sum( diffs*diffs )/( n*( n**2 - 1 ) )
print "Because the ranks of the two data sets are perfectly correlated,\
the relationship between x and y has a Spearman rank correlation coefficient of", spr_corr
Explanation: b. Spearman Rank Correlation
Find the Spearman rank correlation coefficient for the relationship between x and y using the stats.rankdata function and the formula
$$r_S = 1 - \frac{6 \sum_{i=1}^n d_i^2}{n(n^2 - 1)}$$
where $d_i$ is the difference in rank of the ith pair of x and y values.
End of explanation
# Your code goes here
stats.spearmanr(x, y)
Explanation: Check your results against scipy's Spearman rank function. stats.spearmanr
End of explanation
n = 100
a = np.random.normal(0, 1, n)
#Your code goes here
b = [0] + list(a[:(n-1)])
results = stats.spearmanr(a, b)
print "Despite the underlying relationship being a perfect correlation,\
the one-step lag led to a Spearman rank correlation coefficient of\n", results.correlation, \
", meaning the test failed to detect the strong relationship."
Explanation: Exercise 2: Limitations of Spearman Rank Correlation
a. Lagged Relationships
First, create a series b that is identical to a but lagged one step (b[i] = a[i-1]). Then, find the Spearman rank correlation coefficient of the relationship between a and b.
End of explanation
n = 100
c = np.random.normal(0, 2, n)
#Your code goes here
d = 10*c**2 - c + 2
results = stats.spearmanr(c, d)
print "Despite an exact underlying relationship of d = 10c^2 - c + 2,\
the non-monotonic nature of the relationship led to a Spearman rank Correlation coefficient of", \
results.correlation, ", meaning the test failed to detect the relationship."
plt.scatter(c, d);
Explanation: b. Non-Monotonic Relationships
First, create a series d using the relationship $d=10c^2 - c + 2$. Then, find the Spearman rank correlation coefficient of the relationship between c and d.
End of explanation
#Pipeline Setup
from quantopian.research import run_pipeline
from quantopian.pipeline import Pipeline
from quantopian.pipeline.data.builtin import USEquityPricing
from quantopian.pipeline.factors import CustomFactor, Returns, RollingLinearRegressionOfReturns
from quantopian.pipeline.classifiers.morningstar import Sector
from quantopian.pipeline.filters import QTradableStocksUS
from time import time
#MyFactor is our custom factor, based off of asset price momentum
class MyFactor(CustomFactor):
Momentum factor
inputs = [USEquityPricing.close]
window_length = 60
def compute(self, today, assets, out, close):
out[:] = close[-1]/close[0]
universe = QTradableStocksUS()
pipe = Pipeline(
columns = {
'MyFactor' : MyFactor(mask=universe),
},
screen=universe
)
start_timer = time()
results = run_pipeline(pipe, '2015-01-01', '2015-06-01')
end_timer = time()
results.fillna(value=0);
print "Time to run pipeline %.2f secs" % (end_timer - start_timer)
my_factor = results['MyFactor']
n = len(my_factor)
asset_list = results.index.levels[1].unique()
prices_df = get_pricing(asset_list, start_date='2015-01-01', end_date='2016-01-01', fields='price')
# Compute 10-day forward returns, then shift the dataframe back by 10
forward_returns_df = prices_df.pct_change(10).shift(-10)
# The first trading day is actually 2015-1-2
single_day_factor_values = my_factor['2015-1-2']
# Because prices are indexed over the total time period, while the factor values dataframe
# has a dynamic universe that excludes hard to trade stocks, each day there may be assets in
# the returns dataframe that are not present in the factor values dataframe. We have to filter down
# as a result.
single_day_forward_returns = forward_returns_df.loc['2015-1-2'][single_day_factor_values.index]
#Your code goes here
r = stats.spearmanr(single_day_factor_values,
single_day_forward_returns)
print "A Spearman rank rorrelation test yielded a coefficient of %s" %(r.correlation)
Explanation: Exercise 3: Real World Example
a. Factor and Forward Returns
Here we'll define a simple momentum factor (model). To evaluate it we'd need to look at how its predictions correlate with future returns over many days. We'll start by just evaluating the Spearman rank correlation between our factor values and forward returns on just one day.
Compute the Spearman rank correlation between factor values and 10 trading day forward returns on 2015-1-2.
For help on the pipeline API, see this tutorial: https://www.quantopian.com/tutorials/pipeline
End of explanation
rolling_corr = pd.Series(index=None, data=None)
#Your code goes here
for dt in prices_df.index[:60]:
# The first trading day is actually 2015-1-2
single_day_factor_values = my_factor[dt]
# Because prices are indexed over the total time period, while the factor values dataframe
# has a dynamic universe that excludes hard to trade stocks, each day there may be assets in
# the returns dataframe that are not present in the factor values dataframe. We have to filter down
# as a result.
single_day_forward_returns = forward_returns_df.loc[dt][single_day_factor_values.index]
rolling_corr[dt] = stats.spearmanr(single_day_factor_values,
single_day_forward_returns).correlation
Explanation: b. Rolling Spearman Rank Correlation
Repeat the above correlation for the first 60 days in the dataframe as opposed to just a single day. You should get a time series of Spearman rank correlations. From this we can start getting a better sense of how the factor correlates with forward returns.
What we're driving towards is known as an information coefficient. This is a very common way of measuring how predictive a model is. All of this plus much more is automated in our open source alphalens library. In order to see alphalens in action you can check out these resources:
A basic tutorial:
https://www.quantopian.com/tutorials/getting-started#lesson4
An in-depth lecture:
https://www.quantopian.com/lectures/factor-analysis
End of explanation
# Your code goes here
print 'Spearman rank correlation mean: %s' %(np.mean(rolling_corr))
print 'Spearman rank correlation std: %s' %(np.std(rolling_corr))
plt.plot(rolling_corr);
Explanation: b. Rolling Spearman Rank Correlation
Plot out the rolling correlation as a time series, and compute the mean and standard deviation.
End of explanation |
9,078 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
5. Impulse response functions
Impulse response functions (IRFs) are a standard tool for analyzing the short run dynamics of dynamic macroeconomic models, such as the Solow growth model, in response to an exogenous shock. The solow.impulse_response.ImpulseResponse class has several attributes and methods for generating and analyzing impulse response functions.
Step1: The solow.Model class provides access to all of the functionality of the solow.impulse_response.ImpulseResponse class through its irf attribute.
Step2: Example
Step3: Take a look at the IRF for the savings rate shock. Note that while capital and output are unaffected at t=0, both consumption and investment jump (in opposite directions!) in response to the change in the savings rate.
Step4: Example
Step5: Example
Step7: Example | Python Code:
# use tab completion to see the available attributes and methods...
solowpy.impulse_response.ImpulseResponse.
Explanation: 5. Impulse response functions
Impulse response functions (IRFs) are a standard tool for analyzing the short run dynamics of dynamic macroeconomic models, such as the Solow growth model, in response to an exogenous shock. The solow.impulse_response.ImpulseResponse class has several attributes and methods for generating and analyzing impulse response functions.
End of explanation
# use tab completion to see the available attributes and methods...
ces_model.irf.
Explanation: The solow.Model class provides access to all of the functionality of the solow.impulse_response.ImpulseResponse class through its irf attribute.
End of explanation
# 100% increase in the current savings rate...
ces_model.irf.impulse = {'s': 2.0 * ces_model.params['s']}
# in efficiency units...
ces_model.irf.kind = 'efficiency_units'
Explanation: Example: Impact of a change in the savings rate
One can analyze the impact of a doubling of the savings rate on model variables as follows.
End of explanation
# ordering of variables is t, k, y, c, i!
print(ces_model.irf.impulse_response[:25,])
Explanation: Take a look at the IRF for the savings rate shock. Note that while capital and output are unaffected at t=0, both consumption and investment jump (in opposite directions!) in response to the change in the savings rate.
End of explanation
# check the docstring to see the call signature
ces_model.irf.plot_impulse_response?
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.irf.plot_impulse_response(ax, variable='output')
plt.show()
Explanation: Example: Plotting an impulse response function
One can use a convenience method to to plot the impulse response functions for a particular variable.
End of explanation
# more complicate shocks are possible
ces_model.irf.impulse = {'s': 0.9 * ces_model.params['s'], 'g': 1.05 * ces_model.params['g']}
# in efficiency units...
ces_model.irf.kind = 'per_capita'
fig, ax = plt.subplots(1, 1, figsize=(8,6))
ces_model.irf.plot_impulse_response(ax, variable='output', log=True)
plt.show()
Explanation: Example: More complicated impulse responses are possible
Note that by defining impulses as dictionaries, one can analyze extremely general shocks. For example, suppose that an exogenous 5% increase in the growth rate of technology was accompanied by a simultaneous 10% fall in the savings rate.
End of explanation
from IPython.html.widgets import fixed, interact, FloatSliderWidget
def interactive_impulse_response(model, shock, param, variable, kind, log_scale):
Interactive impulse response plotting tool.
# specify the impulse response
model.irf.impulse = {param: shock * model.params[param]}
model.irf.kind = kind
# create the plot
fig, ax = plt.subplots(1, 1, figsize=(8,6))
model.irf.plot_impulse_response(ax, variable=variable, log=log_scale)
irf_widget = interact(interactive_impulse_response,
model=fixed(ces_model),
shock = FloatSliderWidget(min=0.1, max=5.0, step=0.1, value=0.5),
param = ces_model.params.keys(),
variable=['capital', 'output', 'consumption', 'investment'],
kind=['efficiency_units', 'per_capita', 'levels'],
log_scale=False,
)
Explanation: Example: Interactive impulse reponse functions
Using IPython widgets makes it extremely easy to analyze the various impulse response functions.
End of explanation |
9,079 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applied example of scraping the Handbook of Birds of the World to get a list of subspecies for a given bird species.
Step1: Introspection of the source HTML of the species web page reveals that the sub-species listings fall within a section (div in HTML lingo) labeled <div class="ds-ssp_comp"> in the HTML. So we'll search the 'soup' for this section, which returns a list of one object, then we extract that one object to a variable named section.
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-by-css-class
Step2: All the entries with the tag <em> are the subspecies entries.
Step3: We can loop through each subspecies found and print its name | Python Code:
#Import modules
import requests
from bs4 import BeautifulSoup
#Example URL
theURL = "https://www.hbw.com/species/brown-wood-owl-strix-leptogrammica"
#Get content of the species web page
response = requests.get(theURL)
#Convert to a "soup" object, which BS4 is designed to work with
soup = BeautifulSoup(response.text,'lxml')
Explanation: Applied example of scraping the Handbook of Birds of the World to get a list of subspecies for a given bird species.
End of explanation
#Find all sections with the CSS class 'ds-ssp_comp' and get the first (only) item found
div = soup.find_all('div',class_='ds-ssp_comp')
section = div[0]
Explanation: Introspection of the source HTML of the species web page reveals that the sub-species listings fall within a section (div in HTML lingo) labeled <div class="ds-ssp_comp"> in the HTML. So we'll search the 'soup' for this section, which returns a list of one object, then we extract that one object to a variable named section.
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#searching-by-css-class
End of explanation
#Find all lines in the section with the tag 'em'
subSpecies = section.find_all('em')
Explanation: All the entries with the tag <em> are the subspecies entries.
End of explanation
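Putting the three steps together, the whole scrape can be condensed into one small helper; this is a sketch that assumes the page keeps the structure described above (a div with class 'ds-ssp_comp' whose <em> tags hold the subspecies names).
import requests
from bs4 import BeautifulSoup

def get_subspecies(url):
    # fetch and parse the species page
    soup = BeautifulSoup(requests.get(url).text, 'lxml')
    # the subspecies listing lives in the div with CSS class 'ds-ssp_comp'
    section = soup.find_all('div', class_='ds-ssp_comp')[0]
    # every <em> tag inside that section is one subspecies name
    return [em.get_text() for em in section.find_all('em')]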
#Extract to a variable
for subSpp in subSpecies:
print (subSpp.get_text())
Explanation: We can loop through each subspecies found and print its name
End of explanation |
9,080 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Strings
In Python, strings are defined as lists of characters, so you can apply slicing and all the other operations we saw in the previous section to them.
A string can be created using double or single quotes, as follows
Step1: In this case, the + and * operators give the following results
Step2: However, strings cannot be modified, that is, you cannot assign new elements to them as with lists, and therefore they are immutable. We can verify this below
Step3: Strings have several methods that can be very useful. They can be accessed by placing a period after the name of the variable holding the string and pressing the <kbd>Tab</kbd> key. For example, if we place a period after fruta, we will see the following appear
Step4: Note
Step5: count
Step6: replace
Step7: split
Step8: It can also split a string on a given character to break it into several substrings
Step9: Problems
Problem 1
Take the variable dulce, repeat it 50 times, and separate the words with a space, so that we get something like the following, but without generating a trailing space.
'bocadillo bocadillo ...'
Step10: Problem 2
How many times does the word banano appear in the following string?
Step11: Answer
Step12: Problem 3
How many times does banano appear in the previous string, regardless of whether some of its letters are uppercase or not?
Answer
Step13: Problem 4
What does the center method produce?
Experiment with the following commands to see what they produce
Step14: Tuples
A tuple is an immutable array of different data types. In other words, it is like a list and has the same properties, but just like strings, none of its values can be modified.
Tuples are defined with parentheses ( ) instead of square brackets. An example of a tuple would be
Step15: But we cannot modify its values through new assignments
Step16: Note
Step17: Problems
Problem 1
Is it possible to compute the average of the list in the following tuple?
Step18: Problem 2
Create a tuple that has a single element
Step19: Problem 3
What effect does this operation have
Step20: given the value of tp1 defined above?
Step21: Taking this into account, explain what happens when performing this operation between the elements of a list
Step22: Problem 4
Why, in contrast, does this operation fail?
Step23: Problem 5
How do you compute the maximum of a tuple?
Step24: Dictionaries
Dictionaries are a widely used data structure in Python. We have already seen that the elements of lists, strings and tuples are indexed by numbers, that is, li[0], fruta[1] or tp[2]. Dictionaries, instead, are indexed by keys, which can be not only numbers but also strings, tuples or any other immutable data type.
The interesting thing about dictionaries is that they let us relate two different kinds of data
Step25: As we can see, dictionaries are defined with curly braces ({ }). The keys
are the elements to the left of the
Step26: or for Juan's
Step27: If someone changes their password, we can easily update our dictionary by making a new assignment, for example
Step28: Note
Step29: If we want to add the name and password of a new person, we only
need to use a new key and assign it a value, like this
Step30: To find out whether a person is already in the dictionary or not, we use the following
method
Step31: Finally, to extract all the keys and values of a dictionary we can use the following methods
Step32: Problems
Problem 1
Given the following dictionary that stores the grades of several students
Step33: compute
Step34: The average grade of the course
Answer
3.74
Step35: Converting between strings, tuples, lists and dictionaries
To convert between these data types, Python uses the following commands
Step36: list
Step37: For dictionaries, list only extracts the keys and not the values
Step38: dict | Python Code:
fruta = "banano"
dulce = 'bocadillo'
Explanation: Strings
In Python, strings are defined as lists of characters, so you can apply slicing and all the other operations we saw in the previous section to them.
A string can be created using double or single quotes, as follows:
End of explanation
fruta + dulce
fruta * 3
dulce[0]
dulce[:7]
dulce[::-1]
Explanation: In this case, the + and * operators give the following results:
| Operation | Usage | Result
| --------- | --------------- | ---------
| + | string + string | Concatenates two strings
| * | string * number | Repeats a string as many times as the number
With the two variables defined above we can, for example, perform the
following operations:
End of explanation
fruta[2] = 'z'
Explanation: However, strings cannot be modified, that is, you cannot assign new elements to them as with lists, and therefore they are immutable. We can verify this below:
End of explanation
fruta.
Explanation: Strings have several methods that can be very useful. They can be accessed by placing a period after the name of the variable holding the string and pressing the <kbd>Tab</kbd> key. For example, if we place a period after fruta, we will see the following appear:
End of explanation
fruta.upper()
Explanation: Note: None of these methods modify the original string, because, as we already said, strings are immutable.
Among these methods, we are going to look at the behaviour of the following:
upper: Converts the whole string to uppercase
End of explanation
fruta.count('a')
Explanation: count: Counts how many times a character appears in a string
End of explanation
fruta.replace('a', 'o')
fruta.replace('ban', 'en')
Explanation: replace: Replaces a character or part of a string with another character or string
End of explanation
s = "Hola, mundo! Hola mundo"
s.split()
Explanation: split: Splits a string on its spaces and generates a list of words.
End of explanation
dulce.split('d')
Explanation: It can also split a string on a given character to break it into several substrings:
End of explanation
# Write your solution here
Explanation: Problems
Problem 1
Take the variable dulce, repeat it 50 times, and separate the words with a space, so that we get something like the following, but without generating a trailing space.
'bocadillo bocadillo ...'
End of explanation
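As a hint (one possible solution, not the only one), you can build the string by joining 50 copies with a single space:
# join 50 copies of the string with one space between them; join adds no trailing space
" ".join([dulce] * 50)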
muchas_frutas = 'banAnobanAnobananobanaNobananobananobanaNobaNanobanano\
bananobananobaNanobananobananobaNanobAnanobananobananobanaNobananobanAno\
bananobananobanaNobananobananobananobananobananobananobananobananobAnAno\
bAnanobananobananobananobananobananobanANobananobananobanaNobananobanano\
bananobanaNobAnAnobananobananobananobananobananobAnAnobananobananobanano\
baNanobananobananobaNaNobananobANanobananobananobananobAnanobananobanano\
bananobananobAnanobananobaNAnobananobananobananobaNanobanaNobANanobanano\
baNanobananobananobAnanobananobananobananobaNAnobananobanANobananobAnano\
bANanobanAnobananobaNanobananobananobananobananobananobananobAnanobanano\
bananobanAnobananobananobanAnobananobananobananobanAnobananobananobaNano\
bAnanobananobAnanobaNanobananobanaNobananobananobanANobananobananobANAno\
bananobananobaNAnobanaNobAnanobanAnobananobananobanAnobaNanobananobanaNo\
banaNobANAnobananobananobanAnobananobananobanANobananobanAnobananobanano\
banaNobananobAnanobananobAnanobananobanANobananobananobanAnobanaNobanano\
bananobAnanobananobaNanobananobanANobananobananobananobaNAnobananobanAno\
bananobananobananobaNanobananobananobanAnobananobananobANanobananobanano\
bananobananobaNanobananobananobananobAnanobananobananobananobananobanano\
bananobanANobananobanaNobAnanobananobaNanobaNAnobananobananobananobanano\
bananobananobananobananobananobAnanobanaNobananobananobaNAnobananobanANo\
bananobanaNobananobananobananobananobananobaNanobananobanaNobanAnobanAno\
bananobanAno'
Explanation: Problem 2
How many times does the word banano appear in the following string?:
End of explanation
# Write your solution here
Explanation: Answer:
150
End of explanation
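One way to obtain this count (a hint, assuming muchas_frutas is defined as above) is the count method:
# count the (case-sensitive) occurrences of 'banano'
muchas_frutas.count('banano')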
# Write your solution here
Explanation: Problem 3
How many times does banano appear in the previous string, regardless of whether some of its letters are uppercase or not?
Answer:
239
End of explanation
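A possible approach (shown as a hint) is to normalize the case first and then count:
# lower-case the whole string so 'banAno' and 'banano' are counted alike
muchas_frutas.lower().count('banano')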
dulce.center(2)
dulce.center(10)
dulce.center(16)
dulce.center(30)
Explanation: Problem 4
What does the center method produce?
Experiment with the following commands to see what they produce:
End of explanation
tp = (1, 2, 3, 4, 'a')
tp[3]
tp[-1]
tp[2:]
Explanation: Tuples
A tuple is an immutable array of different data types. In other words, it is like a list and has the same properties, but just like strings, none of its values can be modified.
Tuples are defined with parentheses ( ) instead of square brackets. An example of a tuple would be:
End of explanation
tp[2] = 'b'
Explanation: But we cannot modify its values through new assignments:
End of explanation
tp1 = 'a', 'b', 2
tp1
Explanation: Note: It is possible to omit the parentheses when defining a tuple if you wish, which is a fairly widespread practice among Python programmers. For example, a valid assignment is:
End of explanation
li = (3, 18, 17, 44, 14, 12, 29, 19, 4, 6, 17, 7, 14, 6, 8, 17, 17, 21, 65,\
19, 10, 31, 92, 17, 5, 15, 3, 14, 20, 12, 29, 57, 15, 2, 17, 1, 6, 17, 2,\
71, 12, 11, 62, 14, 9, 20, 43, 19, 4, 15)
# Write your solution here
Explanation: Problems
Problem 1
Is it possible to compute the average of the list in the following tuple?
End of explanation
# Write your solution here
Explanation: Problem 2
Create a tuple that has a single element
End of explanation
x, y, z = tp1
Explanation: Problem 3
What effect does this operation have
End of explanation
# Get the values of x, y, z here
Explanation: given the value of tp1 defined above?
End of explanation
l = [-1, 6, 7, 9]
l[0], l[2] = l[2], l[0]
# Print the list l here
Explanation: Taking this into account, explain what happens when performing this operation between the elements of a list
End of explanation
u, v = tp1
Explanation: Problem 4
Why, in contrast, does this operation fail?
End of explanation
# Write your solution here
Explanation: Problem 5
How do you compute the maximum of a tuple?
End of explanation
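As a hint, the built-in max function works directly on tuples, for example with the tuple li defined above:
# max accepts any iterable, including tuples
max(li)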
codigos = {'Luis': 2257, 'Juan': 9739, 'Carlos': 5591}
Explanation: Dictionaries
Dictionaries are a widely used data structure in Python. We have already seen that the elements of lists, strings and tuples are indexed by numbers, that is, li[0], fruta[1] or tp[2]. Dictionaries, instead, are indexed by keys, which can be not only numbers but also strings, tuples or any other immutable data type.
The interesting thing about dictionaries is that they let us relate two different kinds of data: the keys with their values, which can be mutable or immutable.
For example, suppose we want to store the codes that several people are using to log in to a web service. We can do this very easily with a dictionary in which the keys are each person's name and the values are the passwords they are using.
To do this, in Python we can write something like:
End of explanation
codigos['Carlos']
Explanation: As we can see, dictionaries are defined with curly braces ({ }). The keys
are the elements to the left of the :, while the ones
on the right are the values.
As already mentioned, to extract an element from a dictionary you need to use one of its keys. In our case, the keys are the people's names. For example, to extract the code that corresponds to Carlos we must write:
End of explanation
codigos['Juan']
Explanation: or for Juan's
End of explanation
codigos['Luis'] = 1627
codigos
Explanation: If someone changes their password, we can easily update our dictionary by making a new assignment, for example:
End of explanation
codigos.pop('Juan')
codigos
Explanation: Note: Dictionaries have no default internal ordering. In the last example we can see how 'Luis' appears at the end of the dictionary, whereas in the first definition of codigos it appeared at the beginning. There is no need to worry about this.
Or if a person leaves the service, we can remove them from the dictionary
using the following command:
End of explanation
codigos['Jorge'] = 6621
codigos
Explanation: If we want to add the name and password of a new person, we only
need to use a new key and assign it a value, like this
End of explanation
'Carlos' in codigos.keys()
'José' in codigos.keys()
Explanation: To find out whether a person is already in the dictionary or not, we use the following
method:
End of explanation
codigos.keys()
codigos.values()
Explanation: Finally, to extract all the keys and values of a dictionary we can use the following methods:
End of explanation
notas = {
'Juan': [4.5, 3.7, 3.4, 5],
'Alicia': [3.5, 3.1, 4.2, 3.9],
'Germán': [2.6, 3.0, 3.9, 4.1]
}
Explanation: Problems
Problem 1
Given the following dictionary that stores the grades of several students
End of explanation
# Write your solution here
Explanation: compute:
Juan's average grade (remember that you can use sum and len to get the average).
Answer
4.15
End of explanation
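One possible solution (a hint only) uses sum and len on Juan's list of grades:
# average of Juan's grades
sum(notas['Juan']) / len(notas['Juan'])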
# Write your solution here
Explanation: The average grade of the course
Answer
3.74
End of explanation
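A possible solution (hint) is to gather every grade from every student and average them:
# flatten all grade lists into one list, then average it
all_grades = [grade for grades in notas.values() for grade in grades]
sum(all_grades) / len(all_grades)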
str(36.1)
str([1,2,3])
Explanation: Converting between strings, tuples, lists and dictionaries
To convert between these data types, Python uses the following commands:
str: Converts numbers and any other object to a string.
End of explanation
list((3, 2, 4))
list('1457')
Explanation: list: Converts tuples, dictionaries and strings to a list.
End of explanation
list({'a': 12, 'b': 5})
Explanation: For dictionaries, list only extracts the keys and not the values
End of explanation
dict([[10, 'a'], [15, 't']])
Explanation: dict: Converts a list of lists, where each one has two elements, to a dictionary.
End of explanation |
9,081 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Visualize channel over epochs as an image
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
Two images are produced, one with a good channel and one with a channel
that does not show any evoked field.
It is also demonstrated how to reorder the epochs using a 1D spectral
embedding as described in
Step1: Set parameters
Step2: Show event-related fields images | Python Code:
# Authors: Alexandre Gramfort <[email protected]>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: Visualize channel over epochs as an image
This will produce what is sometimes called an event related
potential / field (ERP/ERF) image.
Two images are produced, one with a good channel and one with a channel
that does not show any evoked field.
It is also demonstrated how to reorder the epochs using a 1D spectral
embedding as described in :footcite:GramfortEtAl2010.
End of explanation
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_filt-0-40_raw.fif'
event_fname = meg_path / 'sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.4
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
raw.info['bads'] = ['MEG 2443', 'EEG 053']
# Create epochs, here for gradiometers + EOG only for simplicity
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=('grad', 'eog'), baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
Explanation: Set parameters
End of explanation
# and order with spectral reordering
# If you don't have scikit-learn installed set order_func to None
from sklearn.manifold import spectral_embedding # noqa
from sklearn.metrics.pairwise import rbf_kernel # noqa
def order_func(times, data):
this_data = data[:, (times > 0.0) & (times < 0.350)]
this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]
return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),
n_components=1, random_state=0).ravel())
good_pick = 97 # channel with a clear evoked response
bad_pick = 98 # channel with no evoked response
# We'll also plot a sample time onset for each trial
plt_times = np.linspace(0, .2, len(epochs))
plt.close('all')
mne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,
order=order_func, vmin=-250, vmax=250,
overlay_times=plt_times, show=True)
Explanation: Show event-related fields images
End of explanation |
9,082 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 2
Step1: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
Step2: Problem set 1
Step3: Problem set 2
Step4: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output
Step5: Problem set 3
Step6: Problem set 4
Step7: BONUS | Python Code:
import pg8000
conn = pg8000.connect(database="homework2")
Explanation: Homework 2: Working with SQL (Data and Databases 2016)
This homework assignment takes the form of an IPython Notebook. There are a number of exercises below, with notebook cells that need to be completed in order to meet particular criteria. Your job is to fill in the cells as appropriate.
You'll need to download this notebook file to your computer before you can complete the assignment. To do so, follow these steps:
Make sure you're viewing this notebook in Github.
Ctrl+click (or right click) on the "Raw" button in the Github interface, and select "Save Link As..." or your browser's equivalent. Save the file in a convenient location on your own computer.
Rename the notebook file to include your own name somewhere in the filename (e.g., Homework_2_Allison_Parrish.ipynb).
Open the notebook on your computer using your locally installed version of IPython Notebook.
When you've completed the notebook to your satisfaction, e-mail the completed file to the address of the teaching assistant (as discussed in class).
Setting the scene
These problem sets address SQL, with a focus on joins and aggregates.
I've prepared a SQL version of the MovieLens data for you to use in this homework. Download this .psql file here. You'll be importing this data into your own local copy of PostgreSQL.
To import the data, follow these steps:
Launch psql.
At the prompt, type CREATE DATABASE homework2;
Connect to the database you just created by typing \c homework2
Import the .psql file you downloaded earlier by typing \i followed by the path to the .psql file.
After you run the \i command, you should see the following output:
CREATE TABLE
CREATE TABLE
CREATE TABLE
COPY 100000
COPY 1682
COPY 943
The table schemas for the data look like this:
Table "public.udata"
Column | Type | Modifiers
-----------+---------+-----------
user_id | integer |
item_id | integer |
rating | integer |
timestamp | integer |
Table "public.uuser"
Column | Type | Modifiers
------------+-----------------------+-----------
user_id | integer |
age | integer |
gender | character varying(1) |
occupation | character varying(80) |
zip_code | character varying(10) |
Table "public.uitem"
Column | Type | Modifiers
--------------------+------------------------+-----------
movie_id | integer | not null
movie_title | character varying(81) | not null
release_date | date |
video_release_date | character varying(32) |
imdb_url | character varying(134) |
unknown | integer | not null
action | integer | not null
adventure | integer | not null
animation | integer | not null
childrens | integer | not null
comedy | integer | not null
crime | integer | not null
documentary | integer | not null
drama | integer | not null
fantasy | integer | not null
film_noir | integer | not null
horror | integer | not null
musical | integer | not null
mystery | integer | not null
romance | integer | not null
scifi | integer | not null
thriller | integer | not null
war | integer | not null
western | integer | not null
Run the cell below to create a connection object. This should work whether you have pg8000 installed or psycopg2.
End of explanation
conn.rollback()
Explanation: If you get an error stating that database "homework2" does not exist, make sure that you followed the instructions above exactly. If necessary, drop the database you created (with, e.g., DROP DATABASE your_database_name) and start again.
In all of the cells below, I've provided the necessary Python scaffolding to perform the query and display the results. All you need to do is write the SQL statements.
As noted in the tutorial, if your SQL statement has a syntax error, you'll need to rollback your connection before you can fix the error and try the query again. As a convenience, I've included the following cell, which performs the rollback process. Run it whenever you hit trouble.
End of explanation
cursor = conn.cursor()
statement = "SELECT movie_title, release_date FROM uitem WHERE scifi = 1 AND horror = 1 ORDER BY release_date DESC;
"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 1: WHERE and ORDER BY
In the cell below, fill in the string assigned to the variable statement with a SQL query that finds all movies that belong to both the science fiction (scifi) and horror genres. Return these movies in reverse order by their release date. (Hint: movies are located in the uitem table. A movie's membership in a genre is indicated by a value of 1 in the uitem table column corresponding to that genre.) Run the cell to execute the query.
Expected output:
Deep Rising (1998)
Alien: Resurrection (1997)
Hellraiser: Bloodline (1996)
Robert A. Heinlein's The Puppet Masters (1994)
Body Snatchers (1993)
Army of Darkness (1993)
Body Snatchers (1993)
Alien 3 (1992)
Heavy Metal (1981)
Alien (1979)
Night of the Living Dead (1968)
Blob, The (1958)
End of explanation
cursor = conn.cursor()
statement = "SELECT count(*) from uitem WHERE musical = 1 or childrens =1;"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 2: Aggregation, GROUP BY and HAVING
In the cell below, fill in the string assigned to the statement variable with a SQL query that returns the number of movies that are either musicals or children's movies (columns musical and childrens respectively). Hint: use the count(*) aggregate.
Expected output: 157
End of explanation
cursor = conn.cursor()
statement = "SELECT uuser.occupation, count(*) FROM uuser GROUP BY occupation HAVING count(*) > 50;
"
cursor.execute(statement)
for row in cursor:
print(row[0], row[1])
Explanation: Nicely done. Now, in the cell below, fill in the indicated string with a SQL statement that returns all occupations, along with their count, from the uuser table that have more than fifty users listed for that occupation. (I.e., the occupation librarian is listed for 51 users, so it should be included in these results. There are only 12 lawyers, so lawyer should not be included in the result.)
Expected output:
administrator 79
programmer 66
librarian 51
student 196
other 105
engineer 67
educator 95
Hint: use GROUP BY and HAVING. (If you're stuck, try writing the query without the HAVING first.)
End of explanation
cursor = conn.cursor()
statement = "SELECT distinct(uitem.movie_title) FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE udata.rating = 5 AND uitem.documentary = 1 AND uitem.release_date < '1992-01-01';"
cursor.execute(statement)
for row in cursor:
print(row[0])
Explanation: Problem set 3: Joining tables
In the cell below, fill in the indicated string with a query that finds the titles of movies in the Documentary genre released before 1992 that received a rating of 5 from any user. Expected output:
Madonna: Truth or Dare (1991)
Koyaanisqatsi (1983)
Paris Is Burning (1990)
Thin Blue Line, The (1988)
Hints:
JOIN the udata and uitem tables.
Use DISTINCT() to get a list of unique movie titles (no title should be listed more than once).
The SQL expression to include in order to find movies released before 1992 is uitem.release_date < '1992-01-01'.
End of explanation
cursor = conn.cursor()
statement = "SELECT uitem.movie_title, avg(udata.rating) FROM uitem JOIN udata ON uitem.movie_id = udata.item_id WHERE uitem.horror = 1 GROUP BY uitem.movie_title HAVING count(udata.rating) >= 10 ORDER BY avg(udata.rating) limit 10;"
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: Problem set 4: Joins and aggregations... together at last
This one's tough, so prepare yourself. Go get a cup of coffee. Stretch a little bit. Deep breath. There you go.
In the cell below, fill in the indicated string with a query that produces a list of the ten lowest rated movies in the Horror genre. For the purposes of this problem, take "lowest rated" to mean "has the lowest average rating." The query should display the titles of the movies, not their ID number. (So you'll have to use a JOIN.)
Expected output:
Amityville 1992: It's About Time (1992) 1.00
Beyond Bedlam (1993) 1.00
Amityville: Dollhouse (1996) 1.00
Amityville: A New Generation (1993) 1.00
Amityville 3-D (1983) 1.17
Castle Freak (1995) 1.25
Amityville Curse, The (1990) 1.25
Children of the Corn: The Gathering (1996) 1.32
Machine, The (1994) 1.50
Body Parts (1991) 1.62
End of explanation
cursor = conn.cursor()
statement = ""
cursor.execute(statement)
for row in cursor:
print(row[0], "%0.2f" % row[1])
Explanation: BONUS: Extend the query above so that it only includes horror movies that have ten or more ratings. Fill in the query as indicated below.
Expected output:
Children of the Corn: The Gathering (1996) 1.32
Body Parts (1991) 1.62
Amityville II: The Possession (1982) 1.64
Jaws 3-D (1983) 1.94
Hellraiser: Bloodline (1996) 2.00
Tales from the Hood (1995) 2.04
Audrey Rose (1977) 2.17
Addiction, The (1995) 2.18
Halloween: The Curse of Michael Myers (1995) 2.20
Phantoms (1998) 2.23
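As a hint (not the graded answer), the bonus can be satisfied with a statement of the following shape, which filters each movie's group by its number of ratings before ordering by the average rating:
statement_sketch = """
    SELECT uitem.movie_title, avg(udata.rating)
    FROM uitem
    JOIN udata ON uitem.movie_id = udata.item_id
    WHERE uitem.horror = 1
    GROUP BY uitem.movie_title
    HAVING count(udata.rating) >= 10
    ORDER BY avg(udata.rating)
    LIMIT 10;
"""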
End of explanation |
9,083 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analysis_v3 requires
Step1: Create processing pipeline
Requires
Step2: SweepPoints
Step3: Measured-objects-sweep-points map
Step4: Ramsey
single qubit
Step5: Create pipeline
Step6: multi-qubit
Step7: TwoQubit RB
multiple files
Step8: Rabi
multi-qubit | Python Code:
from pycqed.analysis_v3.processing_pipeline import ProcessingPipeline
# [
# {'node_name1': function_name1, keys_in: keys_in_list1, **node_params1},
# {'node_name2': function_name2, keys_in: keys_in_list2, **node_params2},
# .
# .
# .
# {'node_nameN': function_nameN, keys_in: keys_in_listN, **node_paramsN}
# ]
Explanation: Analysis_v3 requires:
1. ProcessingPipeline object
2. measured object(s)
3. measured-objects-value-names map
4. SweepPoints object
5. measured-objects-sweep-points map
6. CalibrationPoints object (written by Nathan Lacroix)
Processing Pipeline
Measured object(s)
Measured-objects-value-names-map
End of explanation
# ProcessingPipeline(node_name,
# **node_params)
pp = ProcessingPipeline('average_data',
keys_in='raw', shape=(10, 3), averaging_axis=1, meas_obj_names='qb1')
pp
pp.add_node('rotate_iq', keys_in='raw', meas_obj_names='qb2', num_keys_out=1)
pp.add_node('ramsey_analysis', keys_in='previous', keys_out=None, meas_obj_names='qb2')
pp
# finalize pipeline -> requires measured-objects-value-names map
# helper function for multi-qubit experiments -> requires (virtual) qubit objects + detector functions
qubits = [qb1, qb2, qb3]
for i, qb in enumerate(qubits):
qb.acq_I_channel(2*i)
qb.acq_Q_channel(2*i + 1)
qb.update_detector_functions()
det_func = mqm.get_multiplexed_readout_detector_functions(qubits)['int_avg_det']
mqm.get_meas_obj_value_names_map(qubits, det_func)
det_func = mqm.get_multiplexed_readout_detector_functions(qubits)['int_avg_classif_det']
mqm.get_meas_obj_value_names_map(qubits, det_func)
det_func = mqm.get_multiplexed_readout_detector_functions(
qubits, det_get_values_kws={'correlated': True})['int_avg_classif_det']
mqm.get_meas_obj_value_names_map(qubits, det_func)
# let's use:
det_func = mqm.get_multiplexed_readout_detector_functions(qubits)['int_avg_det']
movnm = mqm.get_meas_obj_value_names_map(qubits, det_func)
movnm
pp
# finalize pipeline
pp(movnm)
pp
Explanation: Create processing pipeline
Requires:
- measured object(s): ['qb1', 'qb2'], 'qb3', 'TWPA', 'dummy' etc. -> completely up to the user
- measured-objects-value-names map (i.e. channel map, {meas_obj_names: [ro_channels]})
End of explanation
from pycqed.measurement.sweep_points import SweepPoints
# The SweepPoints object is a list of dictionaries of the form:
# [
# # 1st sweep dimension
# {param_name0: (values, unit, plot_label),
# param_name1: (values, unit, plot_label),
# ...
# param_nameN: (values, unit, plot_label)},
# # 2nd sweep dimension
# {param_name0: (values, unit, plot_label),
# param_name1: (values, unit, plot_label),
# ...
# param_nameN: (values, unit, plot_label)},
# .
# .
# .
# # D-th sweep dimension
# {param_name0: (values, unit, plot_label),
# param_name1: (values, unit, plot_label),
# ...
# param_nameN: (values, unit, plot_label)},
# ]
# hard sweep (first sweep dimension): pulse delays
sp = SweepPoints('delay_qb1', np.linspace(0, 1e-6, 3), 's', 'Pulse delay, $\\tau$')
sp
sp.add_sweep_dimension()
sp
# soft sweep (2nd sweep dimension): pulse amplitudes
sp.add_sweep_parameter(f'amps_qb1', np.linspace(0, 1, 3), 'V', 'Pulse amplitude, $A$')
sp
# 2D sweep for 3 qubits
# first (hard) sweep dimension: pulse delay
sp = SweepPoints()
sp.add_sweep_parameter('lengths_qb1', np.linspace(10e-9, 1e-6, 3), 's', 'Pulse delay, $\\tau$')
sp.add_sweep_parameter('lengths_qb2', np.linspace(10e-9, 1e-6, 3), 's', 'Pulse delay, $\\tau$')
sp.add_sweep_parameter('lengths_qb3', np.linspace(10e-9, 1e-6, 3), 's', 'Pulse delay, $\\tau$')
sp
# second (soft) sweep dimension: pulse amplitude
sp.add_sweep_dimension()
for qb in ['qb1', 'qb2', 'qb3']:
sp.add_sweep_parameter(f'amps_{qb}', np.linspace(0, 1, 3), 'V', 'Pulse amplitude, $A$')
sp
Explanation: SweepPoints
End of explanation
mospm = sp.get_sweep_points_map(['qb1', 'qb2', 'qb3'])
mospm
Explanation: Measured-objects-sweep-points map
End of explanation
timestamp = '20200317_231624'
reload(hlp_mod)
data_file = hlp_mod.get_data_file_from_timestamp(timestamp)
sweep_points = np.array(data_file['Experimental Data']['Experimental Metadata']['sweep_points_dict']['qb2'])
data_file.close()
# OR
sweep_points = hlp_mod.get_param_from_metadata_group('sweep_points_dict', timestamp)['qb2']
meas_object = 'qb2'
SP = SweepPoints('delays_' + meas_object, sweep_points, 's', 'Delay, $\\tau$')
meas_obj_value_names_map = {meas_object: hlp_mod.get_value_names_from_timestamp(timestamp)}
meas_obj_sweep_points_map = SP.get_sweep_points_map([meas_object])
Explanation: Ramsey
single qubit
End of explanation
# "raw" pipeline
reload(ppmod)
pp = ppmod.ProcessingPipeline()
pp.add_node('rotate_iq', keys_in='raw', meas_obj_names=[meas_object], num_keys_out=1)
pp.add_node('ramsey_analysis', keys_in='previous rotate_iq', keys_out=None, meas_obj_names=[meas_object])
pp
pp(meas_obj_value_names_map)
pp
data_dict = pla.extract_data_hdf(timestamp)
data_dict.keys()
data_dict.update(OrderedDict({
'sweep_points': SP,
'meas_obj_value_names_map': meas_obj_value_names_map,
'meas_obj_sweep_points_map': meas_obj_sweep_points_map,
'artificial_detuning_dict': {meas_object: 0.5e6},
}))
pla.process_pipeline(data_dict, processing_pipeline=pp)
data_dict.keys()
data_dict['qb2']
Explanation: Create pipeline
End of explanation
timestamp = '20191118_183801'
movnm = hlp_mod.get_param_from_metadata_group('meas_obj_value_names_map', timestamp)
reload(ppmod)
pp = ppmod.ProcessingPipeline()
pp.add_node('rotate_iq', keys_in='raw', meas_obj_names=list(movnm), num_keys_out=1)
pp.add_node('ramsey_analysis', keys_in='previous rotate_iq', keys_out=None,
meas_obj_names=list(movnm))
pp
pp(movnm)
pp
data_dict = pla.extract_data_hdf(timestamp)
data_dict.update(OrderedDict({
'artificial_detuning_dict': {meas_object: 2e6 for meas_object in movnm},
}))
pla.process_pipeline(data_dict, processing_pipeline=pp)
data_dict.keys()
data_dict['qb1']
Explanation: multi-qubit
End of explanation
t_start = '20191103_174901'
t_stop = '20191103_183000'
data_dict = pla.get_timestamps(t_start=t_start, t_stop=t_stop)
data_dict
sweep_points = hlp_mod.get_param_from_metadata_group('sweep_points', data_dict['timestamps'][-1])
ncl = sweep_points[1]['cliffords'][0]
nr_seeds_per_file = len(sweep_points[0]['nr_seeds'][0])
nr_files = len(data_dict['timestamps'])
print(ncl)
print(nr_seeds_per_file)
print(nr_files)
movnm = hlp_mod.get_param_from_metadata_group('meas_obj_value_names_map', data_dict['timestamps'][-1])
movnm
reload(ppmod)
pp = ppmod.ProcessingPipeline()
pp.add_node('average_data', keys_in='raw',
shape=(nr_files*len(ncl), nr_seeds_per_file),
meas_obj_names=list(movnm))
pp.add_node('get_std_deviation', keys_in='raw',
shape=(nr_files*len(ncl), nr_seeds_per_file),
meas_obj_names=list(movnm))
pp.add_node('average_data', keys_in=f'previous average_data',
shape=(nr_files, len(ncl)), averaging_axis=0, meas_obj_names=list(movnm))
pp.add_node('get_std_deviation', keys_in=f'previous get_std_deviation',
shape=(nr_files, len(ncl)), averaging_axis=0, meas_obj_names=list(movnm))
pp.add_node('rb_analysis', meas_obj_names=list(movnm),
keys_out=None, d=4,
keys_in=f'previous average_data1',
keys_in_std=f'previous get_std_deviation1')
pp(movnm)
pp
reload(a_tools)
a_tools.datadir = data_folder
reload_anav3()
pla.search_modules
data_dict = pla.extract_data_hdf(data_dict=data_dict, append_data=True, replace_data=False)
data_dict.keys()
pla.process_pipeline(data_dict, processing_pipeline=pp, save_processed_data=True, save_figures=False)
data_dict.keys()
data_dict['qb1']
# plot raw data
pp = ppmod.ProcessingPipeline('prepare_1d_raw_data_plot_dicts', keys_in='raw', keys_out=None,
meas_obj_names=list(movnm), sp_name='cliffords',
xvals=np.tile(np.repeat(ncl, nr_seeds_per_file), nr_files),
do_plotting=True)#, plot_params={'linestyle': ''})
pp(movnm)
pp
pla.process_pipeline(data_dict, processing_pipeline=pp, save_processed_data=True, save_figures=True)
data_dict.keys()
save_module.Save(data_dict=data_dict, save_processed_data=False, save_figures=True)
Explanation: TwoQubit RB
multiple files
End of explanation
timestamp = '20191118_181845'
movnm = hlp_mod.get_param_from_metadata_group('meas_obj_value_names_map', timestamp)
print(movnm)
reload(ppmod)
pp = ppmod.ProcessingPipeline()
pp.add_node('rotate_iq', keys_in='raw', meas_obj_names=list(movnm), num_keys_out=1)
pp.add_node('rabi_analysis', keys_in='previous rotate_iq', keys_out=None,
meas_obj_names=list(movnm))
pp
pp(movnm)
pp
reload_anav3()
pla.search_modules
data_dict = pla.extract_data_hdf(timestamp)
pla.process_pipeline(data_dict, processing_pipeline=pp)
Explanation: Rabi
multi-qubit
End of explanation |
9,084 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Tutorial
Eine minimale Einführung in Python für Studierende mit Programmiererfahrung die keinen Anspruch auf Vollständigkeit erhebt.
Ausführlichere Einführungen und Tutorials finden sich an zahlreichen Stellen im Internet. Beispielsweise
Learn Python the Hard Way
Software Carpentry
PeP et al. Toolbox Workshop
Python.org Beginners Guide
Installation
Um Jupyter (formals IPython) Notebooks interaktiv auf deinem Computer benutzen zu können, musst du zunächst ein paar Pakete installieren. Falls du Linux oder Mac OS benutzt und dich etwas auskennst, kannst du einfach deinen Paketmanager verwenden. Für diesen Kurs verwenden wir Python 3.5 mit den Paketen numpy, scipy und matplotlib. Zusätzlich installiere bitte das jupyter Paket um die interaktiven Notebooks nutzen zu können. Falls dir das zu viel Aufwand ist, gibt es den Anaconda Installer der alle nötigen Pakete mitbringt und auf allen gängigen Plattformen funktioniert. Eine ausführliche Anleitung dafür findet sich auf den Seiten des PeP et al. Toolbox Workshops.
Interaktives Notebook
Dieses interaktive Notebook enthält ausführbare Zellen. Um eine Zelle auszuführen, bringe sie in den Fokus (z.B. durch Anklicken) und drücke Strg+Enter.
Step1: Variablen, Datentypen, Operatoren
Python hat ein dynamisches Typsystem, das bedeutet dass sich der Typ einer Variablen von Zuweisung zu Zuweisung ändern kann. Die eingebauten Typen von Python sind
Der nullwertige Typ NoneType und sein einziger Wert None
Den wahrheitswertigen Typen bool und seine Werte True und False
Numerische Typen
int z.B. 42
float z.B. 3.1415
complex z.B. 2+3j
Sequenzen
string z.B. 'Hallo, Welt!'
list z.B. [1, 2, 3]
tuple z.B. (1, 'a', 3.1415)
Den Abbildungstypen dict, z.B. {'name'
Step2: Jeder dieser Typen lässt sich in einem wahrheitswertigen Kontext verwenden. In einem solchen ist beispielsweise ein leerer String oder eine leere Liste gleichbedeutend mit False.
Step3: Mit Hilfe von Operatoren lassen sich Typen verknüpfen. Für numerische Typen gibt es beispielsweise die arithmetischen Operatoren
+ Addition
- Subtraktion
* Multiplikation
/ Division
// ganzzahlige Division
% Restwertbildung
** Potenzieren
Step4: Eine vollständige Übersicht der Typen und verfügbaren Operationen findet sich in der offiziellen Dokumentation.
Konstrollstrukturen, Schleifen, Iteration
Auch in Python gibt es if-Verzweigungen und (selten benutzte) while-Schleifen
Step5: Eine for-Schleife wie in C gibt es in Python nicht. Das for-Schlüsselwort kommt immer in Begleitung seines Freundes in; die beiden sind unzertrennlich. Damit lässt sich über Sequenzen iterieren.
Step6: Das liest sich doch deutlich angenehmer als die while-Schleife, oder? Falls du mal explizit die Indices einer Sequenz brauchst, hilft die enumerate Funktion.
Step8: Funktionen
Mit print, range und enumerate haben wir bereits Beispiele für Funktionen und die Aufrufsyntax gesehen. Um eigenen Funktionen zu definieren verwenden wir das def Schlüsselwort. Außerdem geben wir der Funktion eine Beschreibung. Die Beschreibung ist zwar optional, kann aber immens zum Verständnis des Codes beitragen. Sie wird für die automatisch generierte Hilfe der Funktion verwendet. Sie sollte in Englisch sein um unseren Code portabel zu halten.
Step9: Schau dir doch mal die Hilfe zu print an und finde heraus, wie wir verhindern können, dass nach jedem Aufruf von print eine neue Zeile begonnen wird. Passe dann die folgende Zelle so an, dass alle Zahlen durch Leerzeichen getrennt in der selben Zeile erscheinen.
Step10: Funktionen können mehrere Argumente annehmen. Dabei wird zwischen Argumenten und Schlüsselwortargumenten unterschieden. Primitive Datentypen wie int können als Schlüsselwortargument vordefiniert werden, da sie "by value" übergeben werden. Komplexere Datentypen wie zum Beispiel Listen werden als Referenzen übergeben was zur Folge hat, dass ein Standardwert innerhalb der Funktion verändert werden könnte. Deswegen wird i.d.R. das Verfahren aus dem untenstehenden Beispiel verwendet.
Step11: Comprehension
Listen, bzw. Sequenzen allgemein, aus Ausdrücken generieren zu können ist eines der stärksten Features von Python. Um zum Beispiel eine Liste der Quadrate aller Zahlen in einer anderen Liste zu berechnen benötigen wir nur eine einzige Zeile Code
Step12: Auf die gleiche Art lassen sich auch Sequenzen filtern.
Step13: Das Ganze lässt sich natürlich kombinieren und verschachteln. | Python Code:
print('Hello, world!')
Explanation: Python Tutorial
A minimal introduction to Python for students with programming experience; it makes no claim to completeness.
More detailed introductions and tutorials can be found in many places on the internet, for example
Learn Python the Hard Way
Software Carpentry
PeP et al. Toolbox Workshop
Python.org Beginners Guide
Installation
To use Jupyter (formerly IPython) notebooks interactively on your own computer, you first have to install a few packages. If you use Linux or Mac OS and know your way around, you can simply use your package manager. For this course we use Python 3.5 with the packages numpy, scipy and matplotlib. Please also install the jupyter package so you can use the interactive notebooks. If that is too much effort, there is the Anaconda installer, which ships all required packages and works on all common platforms. A detailed guide can be found on the pages of the PeP et al. Toolbox Workshop.
Interactive notebook
This interactive notebook contains executable cells. To run a cell, bring it into focus (e.g. by clicking it) and press Ctrl+Enter.
End of explanation
# Dies ist ein Kommentar :)
x = 1 # x ist ein int
print(x)
x = 'Hallo, Welt!' # x ist jetzt ein string
print(x)
y = 3.1415 # y is ein float
print(y)
z = [1, 'a', 2.7182] # z ist eine (heterogene) Liste mit drei Einträgen
# Auch wenn es vom Typsystem nicht gefordert wird,
# ist es eine gute Idee, Listen nur homogen zu befüllen.
z = [1, 2, 3]
print(z)
print(z[1])
print(z[0:-1]) # Mit Hilfe von "Slices" können wir Teile von Listen addressieren.
# Die Syntax ist dabei {Anfang}:{Ende}:{Schrittweite} wobei negative
# Indices vom Ende der Liste gezählt werden.
(a, b) = (100, 'Zaphod') # Tuple können benutzt werden, um Ausdrücke zu entpacken
# und so effektiv mehrere Variablen gleichzeitig zuzuweisen
# Die Klammern können dabei weggelassen werden
a, b = 42, 'Ford'
print(a)
print(b)
x = 'Die Antwort ist {}.'.format(a) # Strings lassen sich formatieren, so können die Inhalte
# von Variablen bequem ausgegeben werden.
print(x)
Explanation: Variables, data types, operators
Python has a dynamic type system, which means the type of a variable can change from one assignment to the next. Python's built-in types are
The null type NoneType and its only value None
The boolean type bool and its values True and False
Numeric types
int, e.g. 42
float, e.g. 3.1415
complex, e.g. 2+3j
Sequences
string, e.g. 'Hallo, Welt!'
list, e.g. [1, 2, 3]
tuple, e.g. (1, 'a', 3.1415)
The mapping type dict, e.g. {'name': 'Hans', 'alter': 42}
The set type set, e.g. {1, 2, 3}
End of explanation
not []
Explanation: Each of these types can be used in a boolean (truth-value) context. In such a context an empty string or an empty list, for example, is equivalent to False.
End of explanation
a = 1.337
b = a * 5
c = b ** 10
c
Explanation: Types can be combined using operators. For numeric types, for example, there are the arithmetic operators
+ addition
- subtraction
* multiplication
/ division
// integer division
% remainder (modulo)
** exponentiation
End of explanation
name = ''
if 5 == 3:
print('Irgendwas stimmt mit dem Universum nicht.')
elif name: # Hier wird die Wahrheitswertigkeit verwendet
print('Hallo, {}!'.format(name))
else: # Setze einen Namen ein, um nicht hier zu landen
print('Nun, das ist jetzt etwas peinlich…')
i = 0
while i < 5:
print(i)
i += 1
Explanation: A complete overview of the types and the available operations can be found in the official documentation.
Control structures, loops, iteration
Python also has if branches and (rarely used) while loops
End of explanation
names = ['Klaus', 'Dieter', 'Hans']
for name in names:
print('Hello {}'.format(name))
for i in range(5): # Praktisch, um Zahlenfolgen zu erzeugen.
print(i)
Explanation: Python has no C-style for loop. The for keyword always comes together with its friend in; the two are inseparable. Together they let you iterate over sequences.
End of explanation
for index, name in enumerate(names):
print('Person {} heißt {}.'.format(index, name))
Explanation: That reads much more pleasantly than the while loop, doesn't it? If you ever explicitly need the indices of a sequence, the enumerate function helps.
End of explanation
def square(x):
This function squares its input.
x - A value of a type that implements `**` (power).
return x ** 2
print(square(4))
help(square)
Explanation: Functions
With print, range and enumerate we have already seen examples of functions and of the call syntax. To define our own functions we use the def keyword. We also give the function a docstring. The docstring is optional, but it can contribute enormously to understanding the code. It is used for the automatically generated help of the function. It should be written in English to keep our code portable.
End of explanation
for i in range(100):
print(i)
Explanation: Have a look at the help for print and find out how we can prevent a new line from being started after every call to print. Then adapt the following cell so that all numbers appear on the same line, separated by spaces.
End of explanation
greetings = {
'English': 'Hello, {}!',
'Deutsch': 'Hallo, {}!',
'Francais': 'Salut, {}!',
'Espagnol': '¡Hola, {}!'
}
def greet(name, language=None):
if language is None: # So wird ein Standardwert für Sequenztypen definiert
language = 'English'
greeting = greetings.get(language)
if greeting is None:
print("Well, this is embarassing. I don't speak {}.".format(language))
else:
print(greeting.format(name))
greet('William')
greet('Wilhelm', language='Deutsch')
greet('Guillaume', 'Francais') # Wenn die Reihenfolge stimmt, kann das Schlüsselwort
# auch weggelassen werden.
greet('Guillermo', language='Espagnol')
greet('Guglielmo', language='Italiano')
Explanation: Functions can accept several arguments. A distinction is made between positional arguments and keyword arguments. Primitive data types such as int can be given directly as keyword-argument defaults, since they are passed "by value". More complex data types such as lists are passed as references, which means a default value could be modified inside the function. That is why the pattern shown in the example below is usually used.
End of explanation
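A small added aside (not part of the original notebook) showing why the language=None pattern used in greet() above is preferred over a mutable default value:
def append_bad(item, target=[]):       # mutable default: the same list is shared between calls
    target.append(item)
    return target

def append_good(item, target=None):    # the None pattern used by greet() above
    if target is None:
        target = []
    target.append(item)
    return target

print(append_bad(1), append_bad(2))    # [1, 2] [1, 2]  (surprising)
print(append_good(1), append_good(2))  # [1] [2]        (as expected)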
[x ** 2 for x in range(10)]
Explanation: Comprehensions
Being able to generate lists, or sequences in general, from expressions is one of Python's most powerful features. For example, to compute a list of the squares of all numbers in another list we need only a single line of code
End of explanation
[x for x in range(10) if x % 2 != 0]
Explanation: Sequences can be filtered in the same way.
End of explanation
odd_squares = [x ** 2 for x in range(10) if x % 2 == 1]
print(odd_squares)
nums = [z for z in [x + y for x in range(3) for y in range(3, 6)] if z < 6]
print(nums)
Explanation: Of course, all of this can be combined and nested.
End of explanation |
9,085 | Given the following text description, write Python code to implement the functionality described.
Description:
Maximize sum of absolute difference between adjacent elements in Array with sum K
Function for maximising the sum ; Difference is 0 when only one element is present in array ; Difference is K when two elements are present in array ; Otherwise ; Driver code
| Python Code:
def maxAdjacentDifference(N , K ) :
if(N == 1 ) :
return 0 ;
if(N == 2 ) :
return K ;
return 2 * K ;
N = 6 ;
K = 11 ;
print(maxAdjacentDifference(N , K ) ) ;
|
9,086 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Converting incoming CDX files to Parquet
Quick look at file sizes
Step1: Note
Step2: Load in the unzipped file, filtering out any line that starts with a blank or has essentially no content.
Step3: Prep a dataframe from the RDD, naming columns appropriately.
Step4: Write out as Parquet. | Python Code:
!ls -lh eot2012_surt_index.cdx*
Explanation: Converting incoming CDX files to Parquet
Quick look at file sizes:
End of explanation
!gunzip eot2012_surt_index.cdx.gz
Explanation: Note: Spark can typically load *.gz files just fine, but that support comes from Hive integration, which seems to be missing here. So gunzip first.
End of explanation
eot2012 = sc.textFile("eot2012_surt_index.cdx") \
.filter(lambda line: line[0] != ' ') \
.filter(lambda line: len(line)>1) \
.map(lambda line: line.split(" "))
Explanation: Load in the unzipped file, filtering out any line that starts with a blank or has essentially no content.
End of explanation
df = sqlContext.createDataFrame(eot2012)
df = df.withColumnRenamed("_1", "surt_uri") \
.withColumnRenamed("_2", "capture_time") \
.withColumnRenamed("_3", "original_uri") \
.withColumnRenamed("_4", "mime_type") \
.withColumnRenamed("_5", "response_code") \
.withColumnRenamed("_6", "hash_sha1") \
.withColumnRenamed("_7", "redirect_url") \
.withColumnRenamed("_8", "meta_tags") \
.withColumnRenamed("_9", "length_compressed") \
.withColumnRenamed("_10", "warc_offset") \
.withColumnRenamed("_11", "warc_name") \
Explanation: Prep a dataframe from the RDD, naming columns appropriately.
End of explanation
df.write.parquet("eot2012.parquet")
!du -hs eot2012.parquet
Explanation: Write out as Parquet.
End of explanation |
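For modest file sizes the same conversion also works without Spark; the following is a rough sketch using pandas (it reuses the column names and file name from above and assumes pyarrow or fastparquet is installed for to_parquet).
import pandas as pd

cdx_columns = ["surt_uri", "capture_time", "original_uri", "mime_type",
               "response_code", "hash_sha1", "redirect_url", "meta_tags",
               "length_compressed", "warc_offset", "warc_name"]

rows = []
with open("eot2012_surt_index.cdx") as fh:
    for line in fh:
        # mirror the filters above: skip lines starting with a blank and nearly empty lines
        if line.startswith(" ") or len(line) <= 1:
            continue
        fields = line.rstrip("\n").split(" ")
        if len(fields) == len(cdx_columns):
            rows.append(fields)

df_pandas = pd.DataFrame(rows, columns=cdx_columns)
df_pandas.to_parquet("eot2012_pandas.parquet")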
9,087 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Guide To Encoding Categorical Values in Python
Supporting notebook for article on Practical Business Python.
Import the pandas, scikit-learn, numpy and category_encoder libraries.
Step1: Need to define the headers since the data does not contain any
Step2: Read in the data from the url, add headers and convert ? to nan values
Step3: Look at the data types contained in the dataframe
Step4: Create a copy of the data with only the object columns.
Step5: Check for null values in the data
Step6: Since the num_doors column contains the null values, look at what values are current options
Step7: We will fill in the doors value with the most common element - four.
Step8: Encoding values using pandas
Convert the num_cylinders and num_doors values to numbers
Step9: One approach to encoding labels is to convert the values to a pandas category
Step10: We can assign the category codes to a new column so we have a clean numeric representation
Step11: In order to do one hot encoding, use pandas get_dummies
Step12: get_dummiers has options for selecting the columns and adding prefixes to make the resulting data easier to understand.
Step13: Use np.where and the str accessor to do this in one efficient line
Step14: Encoding Values Using Scitkit-learn
Instantiate the LabelEncoder
Step15: To accomplish something similar to pandas get_dummies, use LabelBinarizer
Step16: The results are an array that needs to be converted to a DataFrame
Step17: Advanced Encoding
category_encoder library
Step18: Try out the Backward Difference Encoder on the engine_type column
Step19: Another approach is to use a polynomial encoding.
Step20: Scikit-learn pipeline
Show an example of how to incorporate the encoding strategies into a scikit-learn pipeline | Python Code:
import pandas as pd
import numpy as np
from sklearn.preprocessing import OrdinalEncoder, OneHotEncoder
from sklearn.compose import make_column_transformer
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
import category_encoders as ce
Explanation: Guide To Encoding Categorical Values in Python
Supporting notebook for article on Practical Business Python.
Import the pandas, scikit-learn, numpy and category_encoder libraries.
End of explanation
headers = ["symboling", "normalized_losses", "make", "fuel_type", "aspiration", "num_doors", "body_style",
"drive_wheels", "engine_location", "wheel_base", "length", "width", "height", "curb_weight",
"engine_type", "num_cylinders", "engine_size", "fuel_system", "bore", "stroke",
"compression_ratio", "horsepower", "peak_rpm", "city_mpg", "highway_mpg", "price"]
Explanation: Need to define the headers since the data does not contain any
End of explanation
df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data",
header=None, names=headers, na_values="?" )
df.head()
Explanation: Read in the data from the url, add headers and convert ? to nan values
End of explanation
df.dtypes
Explanation: Look at the data types contained in the dataframe
End of explanation
obj_df = df.select_dtypes(include=['object']).copy()
obj_df.head()
Explanation: Create a copy of the data with only the object columns.
End of explanation
obj_df[obj_df.isnull().any(axis=1)]
Explanation: Check for null values in the data
End of explanation
obj_df["num_doors"].value_counts()
Explanation: Since the num_doors column contains the null values, look at what values are current options
End of explanation
obj_df = obj_df.fillna({"num_doors": "four"})
obj_df[obj_df.isnull().any(axis=1)]
Explanation: We will fill in the doors value with the most common element - four.
End of explanation
obj_df["num_cylinders"].value_counts()
cleanup_nums = {"num_doors": {"four": 4, "two": 2},
"num_cylinders": {"four": 4, "six": 6, "five": 5, "eight": 8,
"two": 2, "twelve": 12, "three":3 }}
obj_df = obj_df.replace(cleanup_nums)
obj_df.head()
obj_df.dtypes
Explanation: Encoding values using pandas
Convert the num_cylinders and num_doors values to numbers
End of explanation
obj_df["body_style"].value_counts()
obj_df["body_style"] = obj_df["body_style"].astype('category')
obj_df.dtypes
Explanation: One approach to encoding labels is to convert the values to a pandas category
End of explanation
obj_df["body_style_cat"] = obj_df["body_style"].cat.codes
obj_df.head()
obj_df.dtypes
Explanation: We can assign the category codes to a new column so we have a clean numeric representation
End of explanation
pd.get_dummies(obj_df, columns=["drive_wheels"]).head()
Explanation: In order to do one hot encoding, use pandas get_dummies
End of explanation
pd.get_dummies(obj_df, columns=["body_style", "drive_wheels"], prefix=["body", "drive"]).head()
obj_df["engine_type"].value_counts()
Explanation: get_dummies has options for selecting the columns and adding prefixes to make the resulting data easier to understand.
End of explanation
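One more option worth knowing (an added aside, not from the original article): get_dummies can drop the first level of each encoded column, which avoids redundant columns when the result feeds a linear model.
pd.get_dummies(obj_df, columns=["drive_wheels"], prefix=["drive"], drop_first=True).head()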
obj_df["OHC_Code"] = np.where(obj_df["engine_type"].str.contains("ohc"), 1, 0)
obj_df[["make", "engine_type", "OHC_Code"]].head(20)
Explanation: Use np.where and the str accessor to do this in one efficient line
End of explanation
ord_enc = OrdinalEncoder()
obj_df["make_code"] = ord_enc.fit_transform(obj_df[["make"]])
obj_df[["make", "make_code"]].head(11)
Explanation: Encoding Values Using Scikit-learn
Instantiate the OrdinalEncoder
End of explanation
oe_style = OneHotEncoder()
oe_results = oe_style.fit_transform(obj_df[["body_style"]])
Explanation: To accomplish something similar to pandas get_dummies, use OneHotEncoder
End of explanation
oe_results.toarray()
pd.DataFrame(oe_results.toarray(), columns=oe_style.categories_).head()
Explanation: The results are an array that needs to be converted to a DataFrame
End of explanation
# Get a new clean dataframe
obj_df = df.select_dtypes(include=['object']).copy()
obj_df.head()
Explanation: Advanced Encoding
category_encoder library
End of explanation
# Specify the columns to encode then fit and transform
encoder = ce.BackwardDifferenceEncoder(cols=["engine_type"])
encoder.fit(obj_df, verbose=1)
encoder.fit_transform(obj_df).iloc[:,8:14].head()
Explanation: Try out the Backward Difference Encoder on the engine_type column
End of explanation
encoder = ce.polynomial.PolynomialEncoder(cols=["engine_type"])
encoder.fit_transform(obj_df, verbose=1).iloc[:,8:14].head()
Explanation: Another approach is to use a polynomial encoding.
End of explanation
# for the purposes of this analysis, only use a small subset of features
feature_cols = [
'fuel_type', 'make', 'aspiration', 'highway_mpg', 'city_mpg',
'curb_weight', 'drive_wheels'
]
# Remove the empty price rows
df_ml = df.dropna(subset=['price'])
X = df_ml[feature_cols]
y = df_ml['price']
column_trans = make_column_transformer((OneHotEncoder(handle_unknown='ignore'),
['fuel_type', 'make', 'drive_wheels']),
(OrdinalEncoder(), ['aspiration']),
remainder='passthrough')
linreg = LinearRegression()
pipe = make_pipeline(column_trans, linreg)
cross_val_score(pipe, X, y, cv=10, scoring='neg_mean_absolute_error')
# Get the average of the errors after 10 iterations
cross_val_score(pipe, X, y, cv=10, scoring='neg_mean_absolute_error').mean().round(2)
Explanation: Scikit-learn pipeline
Show an example of how to incorporate the encoding strategies into a scikit-learn pipeline
End of explanation |
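As a final sanity check (an added sketch that assumes the pipe, X and y objects defined above are still in scope), the pipeline can be fit once on all the data and used for prediction:
pipe.fit(X, y)
print(pipe.predict(X.iloc[:5]))   # predicted prices for the first five cars
print(y.iloc[:5].values)          # actual prices, for comparison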
9,088 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
What is the equivalent of R's ecdf(x)(x) function in Python, in either numpy or scipy? Is ecdf(x)(x) basically the same as: | Problem:
import numpy as np
grades = np.array((93.5,93,60.8,94.5,82,87.5,91.5,99.5,86,93.5,92.5,78,76,69,94.5,
89.5,92.8,78,65.5,98,98.5,92.3,95.5,76,91,95,61))
def ecdf_result(x):
xs = np.sort(x)
ys = np.arange(1, len(xs)+1)/float(len(xs))
return ys
result = ecdf_result(grades) |
9,089 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iterators and Generators Homework
Problem 1
Create a generator that generates the squares of numbers up to some number N.
Step1: Problem 2
Create a generator that yields "n" random numbers between a low and high number (that are inputs). Note
Step2: Problem 3
Use the iter() function to convert the string below
Step3: Problem 4
Explain a use case for a generator using a yield statement where you would not want to use a normal function with a return statement.
A generator, utilizing a yield statement, returns an iterator object. The iterator object will yield/return a value each time it is called upon to iterate through its code. So in cases where a return statement would be used to return the entirely of a list, the generator would only return the current iteration of the list, remembering its state where it was last yielded.
Extra Credit!
Can you explain what gencomp is in the code below? (Note | Python Code:
def gensquares(N):
for i in range(N):
yield i**2
for x in gensquares(10):
print x
Explanation: Iterators and Generators Homework
Problem 1
Create a generator that generates the squares of numbers up to some number N.
End of explanation
import random
random.randint(1,10)
def rand_num(low,high,n):
for i in range(n+1):
yield random.randint(low, high)
for num in rand_num(1,10,12):
print num
Explanation: Problem 2
Create a generator that yields "n" random numbers between a low and high number (that are inputs). Note: Use the random library. For example:
End of explanation
s = 'hello'
#code here
for letter in iter(s):
print letter
Explanation: Problem 3
Use the iter() function to convert the string below
End of explanation
my_list = [1,2,3,4,5]
gencomp = (item for item in my_list if item > 3)
for item in gencomp:
print item
Explanation: Problem 4
Explain a use case for a generator using a yield statement where you would not want to use a normal function with a return statement.
A generator, utilizing a yield statement, returns an iterator object. The iterator object yields a value each time it is asked to iterate through its code. So in cases where a return statement would hand back the entirety of a list at once, the generator returns only the current item, remembering the state at which it last yielded.
Extra Credit!
Can you explain what gencomp is in the code below? (Note: We never covered this in lecture! You will have to do some googling/Stack Overflowing!)
End of explanation |
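A short added illustration of the trade-off described in Problem 4: a function with return builds the whole list before handing it back, while the generator produces one value at a time and remembers where it left off.
def squares_list(N):
    result = []
    for i in range(N):
        result.append(i**2)
    return result                 # the entire list exists in memory at once

def squares_gen(N):
    for i in range(N):
        yield i**2                # one value per request, state is remembered

print(sum(squares_list(10)))      # 285
print(sum(squares_gen(10)))       # 285, without ever materializing the list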
9,090 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ritz method for a beam
November, 2018
We want to find a Ritz approximation of the deflection $w$ of a beam under applied
transverse uniform load of intensity $f$ per unit lenght and an end moment $M$.
This is described by the following boundary value problem.
$$
\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right) = f\, ,\quad
0 < x < L,\quad EI>0\, ,
$$
with
$$
w(0) = w'(0) = 0,\quad
\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right)_{x=L} = M,\quad
\left[\frac{\mathrm{d}}{\mathrm{d}x}\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right)\right]_{x=L} = 0\, .
$$
Step2: The exact solution for this problem is
$$w(x) = \left(\frac{2M + fL^2}{4EI}\right)x^2 - \frac{fL}{6EI}x^3 + \frac{f}{24EI}x^4\, .$$
Step3: Conventional formulation
We can transform the boundary value problem to
$$\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\right) = \hat{f}\, ,\quad 0 < x< L$$
with
$$u(0) = u'(0) = EI u''(L) = \left[\frac{\mathrm{d}}{\mathrm{d}x^2}(EI w'')\right]_{x=L} = 0\, ,$$
and
$$u = w - w_0, \hat{f} = f - \frac{\mathrm{d}}{\mathrm{d}x^2}(EI w_0'')$$
where $w_0$ satisfies the boundary conditions. For this case we can chose
$$w_0 = \frac{M x^2}{2EI}\, ,$$
that satisfies the boundary conditions. For this choice, we have $\hat{f} = f$.
The quadratic functional for this problem is
$$J[u] = \int\limits_0^L \left[EI\left(\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\right)^2 - fu\right]\mathrm{d}x\, ,$$
and the weak problem $B(v, u) = l(v)$, with
$$
B(v, u) = \int\limits_0^L EI\frac{\mathrm{d}^2 v}{\mathrm{d}x^2}\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\mathrm{d}x\, ,\quad
l(v) = \int\limits_0^L v f\mathrm{d}x\, .
$$
Step4: Lagrange multiplier formulation
We can write the problem as minimizing the functional
$$J(\psi, w) = \int\limits_0^L\left[\frac{EI}{2}\left(\frac{\mathrm{d} \psi}{\mathrm{d}x}\right)^2 -
f w\right]\mathrm{d}x + M\psi(L)\, ,$$
subject to
$$G(\psi, w) \equiv \psi + \frac{\mathrm{d}w}{\mathrm{d}x} = 0\, .$$
The Lagrangian is given by
$$L(\psi, w, \lambda) = \int\limits_0^L\left[\frac{EI}{2}\left(\frac{\mathrm{d} \psi}{\mathrm{d}x}\right)^2 -
f w\right]\mathrm{d}x + \int\limits_0^L \lambda\left(\psi + \frac{\mathrm{d}w}{\mathrm{d}x}\right)\mathrm{d}x + M\psi(L)\, , $$
where $\lambda$ is the Lagrange multiplier, which in this case represents the shear force.
Step5: The penalty function formulation
The augmented functional for this formulation is given by
$$P_K (\psi, w) = J(\psi, w) + \frac{K}{2}\int\limits_0^L \left(\psi + \frac{\mathrm{d}w}{\mathrm{d}x}\right)^2\mathrm{d}x\, ,$$
where $K$ is the penalty parameter.
Step6: Mixed formulation
The mixed formulation involves rewriting a given higher order equation as a pair of llower
order equations by introducing secondary dependent variables. The original equation can be
decomposed into
$$
\frac{M(x)}{EI} = \frac{\mathrm{d}^2 w}{\mathrm{d}x^2}\, ,\quad
\frac{\mathrm{d}^2M(x)}{\mathrm{d}x^2} = f\, ,\quad 0<x<L\, .
$$
The functional in this case is
$$
I(w, M) = \int\limits_0^L\left(\frac{\mathrm{d}w}{\mathrm{d}x}\frac{\mathrm{d}M}{\mathrm{d}x}
+ \frac{M^2}{2EI}+ fw\right)\mathrm{d}x
$$ | Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sympy import *
%matplotlib notebook
init_printing()
# Graphics setup
gray = '#757575'
plt.rcParams["mathtext.fontset"] = "cm"
plt.rcParams["text.color"] = gray
plt.rcParams["font.size"] = 12
plt.rcParams["xtick.color"] = gray
plt.rcParams["ytick.color"] = gray
plt.rcParams["axes.labelcolor"] = gray
plt.rcParams["axes.edgecolor"] = gray
plt.rcParams["axes.spines.right"] = False
plt.rcParams["axes.spines.top"] = False
plt.rcParams["figure.figsize"] = 4, 3
Explanation: Ritz method for a beam
November, 2018
We want to find a Ritz approximation of the deflection $w$ of a beam under applied
transverse uniform load of intensity $f$ per unit lenght and an end moment $M$.
This is described by the following boundary value problem.
$$
\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right) = f\, ,\quad
0 < x < L,\quad EI>0\, ,
$$
with
$$
w(0) = w'(0) = 0,\quad
\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right)_{x=L} = M,\quad
\left[\frac{\mathrm{d}}{\mathrm{d}x}\left(EI \frac{\mathrm{d}^2w}{\mathrm{d}x^2}\right)\right]_{x=L} = 0\, .
$$
End of explanation
x = symbols('x')
M, EI, f, L, Mb = symbols("M EI f L Mb")
w_exact = (2*M + f*L**2)/(4*EI)*x**2 - f*L/(6*EI)*x**3 + f/(24*EI)*x**4
psi_exact = -(2*M + f*L**2)/(2*EI)*x + f*L*x**2/(2*EI) - f*x**3/(6*EI)
M_exact = f/2*(x - L)**2 + Mb
lamda_exact = f*(L - x)
def plot_expr(expr, x, rango=(0, 1), ax=None, linestyle="solid"):
Plot SymPy expressions of a single variable
expr_num = lambdify(x, expr, "numpy")
x0 = rango[0]
x1 = rango[1]
x_num = np.linspace(0, 1, 101)
if ax is None:
plt.figure()
ax = plt.gca()
ax.plot(x_num, expr_num(x_num), linestyle=linestyle)
Explanation: The exact solution for this problem is
$$w(x) = \left(\frac{2M + fL^2}{4EI}\right)x^2 - \frac{fL}{6EI}x^3 + \frac{f}{24EI}x^4\, .$$
End of explanation
def quad_fun(x, u, M, EI, f, L):
F = EI/2*diff(u, x, 2)**2 - f*u
L = integrate(F, (x, 0, L))
return L
def ritz_conventional(x, M, EI, f, L, nterms):
a = symbols("a0:%i"%(nterms))
u = sum(a[k]*x**(k + 2) for k in range(nterms))
M, EI, f, L = symbols("M EI f L")
L = quad_fun(x, u, M, EI, f, L)
eqs = [L.diff(C) for C in a]
sol = solve(eqs, a)
return u.subs(sol)
w0 = M*x**2/(2*EI)
subs = {L: 1, EI:1, M:1, f: 1}
errors_conv = []
for nterms in range(1, 4):
u = ritz_conventional(x, M, EI, f, L, nterms)
w = u + w0
err = integrate((w - w_exact)**2, (x, 0, L))
norm = integrate(w_exact**2, (x, 0, L))
errors_conv.append(N(sqrt((err/norm).subs(subs))))
plt.figure(figsize=(8, 3))
ax = plt.subplot(121)
plot_expr(w_exact.subs(subs), x, ax=ax)
plot_expr(w.subs(subs), x, ax=ax, linestyle="dashed")
ax = plt.subplot(122)
plot_expr(psi_exact.subs(subs), x, ax=ax)
plot_expr(-w.diff(x).subs(subs), x, ax=ax, linestyle="dashed")
plt.legend(["Exact", "Ritz"]);
Explanation: Conventional formulation
We can transform the boundary value problem to
$$\frac{\mathrm{d}^2}{\mathrm{d}x^2}\left(EI\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\right) = \hat{f}\, ,\quad 0 < x< L$$
with
$$u(0) = u'(0) = EI u''(L) = \left[\frac{\mathrm{d}}{\mathrm{d}x^2}(EI w'')\right]_{x=L} = 0\, ,$$
and
$$u = w - w_0, \hat{f} = f - \frac{\mathrm{d}}{\mathrm{d}x^2}(EI w_0'')$$
where $w_0$ satisfies the boundary conditions. For this case we can chose
$$w_0 = \frac{M x^2}{2EI}\, ,$$
that satisfies the boundary conditions. For this choice, we have $\hat{f} = f$.
The quadratic functional for this problem is
$$J[u] = \int\limits_0^L \left[EI\left(\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\right)^2 - fu\right]\mathrm{d}x\, ,$$
and the weak problem $B(v, u) = l(v)$, with
$$
B(v, u) = \int\limits_0^L EI\frac{\mathrm{d}^2 v}{\mathrm{d}x^2}\frac{\mathrm{d}^2 u}{\mathrm{d}x^2}\mathrm{d}x\, ,\quad
l(v) = \int\limits_0^L v f\mathrm{d}x\, .
$$
End of explanation
errors_conv
def lagran(x, psi, w, lamda, M, EI, f, L):
F = EI/2*diff(psi, x)**2 - f*w
G = lamda*(psi + diff(w, x))
L = integrate(F, (x, 0, L)) + integrate(G, (x, 0, L)) + M*psi.subs(x, L)
return L
def ritz_multiplier(x, M, EI, f, L, nterms):
a = symbols("a0:%i"%(nterms))
b = symbols("b0:%i"%(nterms))
c = symbols("c0:%i"%(nterms))
var = a + b + c
psi = sum(a[k]*x**(k + 1) for k in range(nterms))
w = sum(b[k]*x**(k + 1) for k in range(nterms))
lamda = sum(c[k]*x**k for k in range(nterms))
M, EI, f, L = symbols("M EI f L")
L = lagran(x, psi, w, lamda, M, EI, f, L)
eqs = [L.diff(C) for C in var]
sol = solve(eqs, var)
return w.subs(sol), psi.subs(sol), lamda.subs(sol)
subs = {L: 1, EI:1, M:1, f: 1}
errors_mult = []
for nterms in range(1, 4):
w, psi, lamda = ritz_multiplier(x, M, EI, f, L, nterms)
err = (integrate((w - w_exact)**2, (x, 0, L)) +
integrate((psi - psi_exact)**2, (x, 0, L)) +
integrate((lamda - lamda_exact)**2, (x, 0, L)))
norm = (integrate(w_exact**2, (x, 0, L)) +
integrate(psi_exact**2, (x, 0, L)) +
integrate(lamda_exact**2, (x, 0, L)))
errors_mult.append(N(sqrt((err/norm).subs(subs))))
plt.figure(figsize=(8, 3))
ax = plt.subplot(121)
plot_expr(w_exact.subs(subs), x, ax=ax)
plot_expr(w.subs(subs), x, ax=ax, linestyle="dashed")
ax = plt.subplot(122)
plot_expr(psi_exact.subs(subs), x, ax=ax)
plot_expr(psi.subs(subs), x, ax=ax, linestyle="dashed")
plt.legend(["Exact", "Ritz with multipliers"]);
errors_mult
Explanation: Lagrange multiplier formulation
We can write the problem as minimizing the functional
$$J(\psi, w) = \int\limits_0^L\left[\frac{EI}{2}\left(\frac{\mathrm{d} \psi}{\mathrm{d}x}\right)^2 -
f w\right]\mathrm{d}x + M\psi(L)\, ,$$
subject to
$$G(\psi, w) \equiv \psi + \frac{\mathrm{d}w}{\mathrm{d}x} = 0\, .$$
The Lagrangian is given by
$$L(\psi, w, \lambda) = \int\limits_0^L\left[\frac{EI}{2}\left(\frac{\mathrm{d} \psi}{\mathrm{d}x}\right)^2 -
f w\right]\mathrm{d}x + \int\limits_0^L \lambda\left(\psi + \frac{\mathrm{d}w}{\mathrm{d}x}\right)\mathrm{d}x + M\psi(L)\, , $$
where $\lambda$ is the Lagrange multiplier, which in this case represents the shear force.
End of explanation
def augmented(x, psi, w, K, M, EI, f, L):
F = EI/2*diff(psi, x)**2 - f*w
G = (psi + diff(w, x))
P = integrate(F, (x, 0, L)) + K/2*integrate(G**2, (x, 0, L)) + M*psi.subs(x, L)
return P
def ritz_penalty(x, K, M, EI, f, L, nterms):
a = symbols("a0:%i"%(nterms))
b = symbols("b0:%i"%(nterms))
var = a + b
w = sum(a[k]*x**(k + 1) for k in range(nterms))
psi = sum(b[k]*x**(k + 1) for k in range(nterms))
M, EI, f, L = symbols("M EI f L")
P = augmented(x, psi, w, K, M, EI, f, L)
eqs = [P.diff(C) for C in var]
sol = solve(eqs, var)
return w.subs(sol), psi.subs(sol)
K = symbols("K")
errors_penalty = []
for K_val in [1, 10, 100]:
subs = {L: 1, EI:1, M:1, f: 1, K: K_val}
w, psi = ritz_penalty(x, K, M, EI, f, L, 2)
err = (integrate((w - w_exact)**2, (x, 0, L)) +
integrate((psi - psi_exact)**2, (x, 0, L)) +
integrate((lamda - lamda_exact)**2, (x, 0, L)))
norm = (integrate(w_exact**2, (x, 0, L)) +
integrate(psi_exact**2, (x, 0, L)) +
integrate(lamda_exact**2, (x, 0, L)))
errors_penalty.append(N(sqrt((err/norm).subs(subs))))
plt.figure(figsize=(8, 3))
ax = plt.subplot(121)
plot_expr(w_exact.subs(subs), x, ax=ax)
plot_expr(w.subs(subs), x, ax=ax, linestyle="dashed")
ax = plt.subplot(122)
plot_expr(psi_exact.subs(subs), x, ax=ax)
plot_expr(psi.subs(subs), x, ax=ax, linestyle="dashed")
plt.legend(["Exact", "Ritz with penalty"]);
errors_penalty
Explanation: The penalty function formulation
The augmented functional for this formulation is given by
$$P_K (\psi, w) = J(\psi, w) + \frac{K}{2}\int\limits_0^L \left(\psi + \frac{\mathrm{d}w}{\mathrm{d}x}\right)^2\mathrm{d}x\, ,$$
where $K$ is the penalty parameter.
End of explanation
def mixed_fun(x, w, M, EI, f, L):
F = diff(w, x)*diff(M, x) + M**2/(2*EI) + f*w
L = integrate(F, (x, 0, L))
return L
def ritz_mixed(x, Mb, EI, f, L, nterms):
a = symbols("a0:%i"%(nterms))
b = symbols("b0:%i"%(nterms))
var = a + b
w = sum(a[k]*x**(k + 1) for k in range(nterms))
M = Mb + sum(b[k]*(x - L)**(k + 1) for k in range(nterms))
EI, f, L = symbols("EI f L")
L = mixed_fun(x, w, M, EI, f, L)
eqs = [L.diff(C) for C in var]
sol = solve(eqs, var)
return w.subs(sol), M.subs(sol)
subs = {L: 1, EI:1, f: 1, M:1, Mb:1}
Mb = 1
errors_mix = []
for nterms in range(1, 5):
w, Ms = ritz_mixed(x, Mb, EI, f, L, nterms)
err = integrate((w - w_exact)**2, (x, 0, L))
norm = integrate(w_exact**2, (x, 0, L))
errors_mix.append(N(sqrt((err/norm).subs(subs))))
plt.figure(figsize=(8, 3))
ax = plt.subplot(121)
plot_expr(w_exact.subs(subs), x, ax=ax)
plot_expr(w.subs(subs), x, ax=ax, linestyle="dashed")
ax = plt.subplot(122)
plot_expr(M_exact.subs(subs), x, ax=ax)
plot_expr(Ms.subs(subs), x, ax=ax, linestyle="dashed")
plt.legend(["Exact", "Ritz mixed"]);
Explanation: Mixed formulation
The mixed formulation involves rewriting a given higher order equation as a pair of lower
order equations by introducing secondary dependent variables. The original equation can be
decomposed into
$$
\frac{M(x)}{EI} = \frac{\mathrm{d}^2 w}{\mathrm{d}x^2}\, ,\quad
\frac{\mathrm{d}^2M(x)}{\mathrm{d}x^2} = f\, ,\quad 0<x<L\, .
$$
The functional in this case is
$$
I(w, M) = \int\limits_0^L\left(\frac{\mathrm{d}w}{\mathrm{d}x}\frac{\mathrm{d}M}{\mathrm{d}x}
+ \frac{M^2}{2EI}+ fw\right)\mathrm{d}x
$$
End of explanation |
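A small added summary cell (a sketch that assumes the error lists errors_conv, errors_mult, errors_penalty and errors_mix computed above are all still in scope) to compare how the different formulations behave:
import pandas as pd

summary = pd.DataFrame({
    "conventional": pd.Series(errors_conv),
    "multipliers": pd.Series(errors_mult),
    "penalty (K=1,10,100)": pd.Series(errors_penalty),
    "mixed": pd.Series(errors_mix),
})
summary.index.name = "refinement level"
print(summary)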
9,091 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chapter 4 – Training Linear Models
This notebook contains all the sample code and solutions to the exercises in chapter 4.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures
Step1: Linear regression using the Normal Equation
Step2: The figure in the book actually corresponds to the following code, with a legend and axis labels
Step3: Linear regression using batch gradient descent
Step4: Stochastic Gradient Descent
Step5: Mini-batch gradient descent
Step6: Polynomial regression
Step7: Regularized models
Step8: Logistic regression
Step9: The figure in the book actually is actually a bit fancier
Step10: Exercise solutions
1. to 11.
See appendix A.
12. Batch Gradient Descent with early stopping for Softmax Regression
(without using Scikit-Learn)
Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier.
Step11: We need to add the bias term for every instance ($x_0 = 1$)
Step12: And let's set the random seed so the output of this exercise solution is reproducible
Step13: The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's train_test_split() function, but the point of this exercise is to try understand the algorithms by implementing them manually. So here is one possible implementation
Step14: The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for ay given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance
Step15: Let's test this function on the first 10 instances
Step16: Looks good, so let's create the target class probabilities matrix for the training set and the test set
Step17: Now let's implement the Softmax function. Recall that it is defined by the following equation
Step18: We are almost ready to start training. Let's define the number of inputs and outputs
Step19: Now here comes the hardest part
Step20: And that's it! The Softmax model is trained. Let's look at the model parameters
Step21: Let's make predictions for the validation set and check the accuracy score
Step22: Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of Theta since this corresponds to the bias term). Also, let's try increasing the learning rate eta.
Step23: Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out
Step24: Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant.
Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing.
Step25: Still perfect, but faster.
Now let's plot the model's predictions on the whole dataset
Step26: And now let's measure the final model's accuracy on the test set | Python Code:
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "training_linear_models"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
Explanation: Chapter 4 – Training Linear Models
This notebook contains all the sample code and solutions to the exercises in chapter 4.
Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
End of explanation
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
save_fig("generated_data_plot")
plt.show()
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta_best
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
plt.plot(X_new, y_predict, "r-")
plt.plot(X, y, "b.")
plt.axis([0, 2, 0, 15])
plt.show()
Explanation: Linear regression using the Normal Equation
End of explanation
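As an added aside, the same least-squares solution can also be obtained with np.linalg.lstsq or with the pseudoinverse, which is numerically more robust than inverting X_b.T.dot(X_b) directly (rcond=None assumes a reasonably recent NumPy):
theta_lstsq, residuals, rank, singular_values = np.linalg.lstsq(X_b, y, rcond=None)
theta_pinv = np.linalg.pinv(X_b).dot(y)
print(theta_lstsq)
print(theta_pinv)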
plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions")
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 2, 0, 15])
save_fig("linear_model_predictions")
plt.show()
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
Explanation: The figure in the book actually corresponds to the following code, with a legend and axis labels:
End of explanation
eta = 0.1
n_iterations = 1000
m = 100
theta = np.random.randn(2,1)
for iteration in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
theta
X_new_b.dot(theta)
theta_path_bgd = []
def plot_gradient_descent(theta, eta, theta_path=None):
m = len(X_b)
plt.plot(X, y, "b.")
n_iterations = 1000
for iteration in range(n_iterations):
if iteration < 10:
y_predict = X_new_b.dot(theta)
style = "b-" if iteration > 0 else "r--"
plt.plot(X_new, y_predict, style)
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
if theta_path is not None:
theta_path.append(theta)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 2, 0, 15])
plt.title(r"$\eta = {}$".format(eta), fontsize=16)
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
plt.figure(figsize=(10,4))
plt.subplot(131); plot_gradient_descent(theta, eta=0.02)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd)
plt.subplot(133); plot_gradient_descent(theta, eta=0.5)
save_fig("gradient_descent_plot")
plt.show()
Explanation: Linear regression using batch gradient descent
End of explanation
theta_path_sgd = []
m = len(X_b)
np.random.seed(42)
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparameters
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.randn(2,1) # random initialization
for epoch in range(n_epochs):
for i in range(m):
if epoch == 0 and i < 20: # not shown in the book
y_predict = X_new_b.dot(theta) # not shown
style = "b-" if i > 0 else "r--" # not shown
plt.plot(X_new, y_predict, style) # not shown
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch * m + i)
theta = theta - eta * gradients
theta_path_sgd.append(theta) # not shown
plt.plot(X, y, "b.") # not shown
plt.xlabel("$x_1$", fontsize=18) # not shown
plt.ylabel("$y$", rotation=0, fontsize=18) # not shown
plt.axis([0, 2, 0, 15]) # not shown
save_fig("sgd_plot") # not shown
plt.show() # not shown
theta
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(n_iter=50, penalty=None, eta0=0.1, random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
Explanation: Stochastic Gradient Descent
End of explanation
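An added quick experiment (the eta0 value is an arbitrary choice and the call simply mirrors the SGDRegressor usage above): with a smaller learning rate the same model converges more slowly, which is worth keeping in mind when comparing against the closed-form solution.
sgd_reg_slow = SGDRegressor(n_iter=50, penalty=None, eta0=0.01, random_state=42)
sgd_reg_slow.fit(X, y.ravel())
print(sgd_reg_slow.intercept_, sgd_reg_slow.coef_)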
theta_path_mgd = []
n_iterations = 50
minibatch_size = 20
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
t0, t1 = 10, 1000
def learning_schedule(t):
return t0 / (t + t1)
t = 0
for epoch in range(n_iterations):
shuffled_indices = np.random.permutation(m)
X_b_shuffled = X_b[shuffled_indices]
y_shuffled = y[shuffled_indices]
for i in range(0, m, minibatch_size):
t += 1
xi = X_b_shuffled[i:i+minibatch_size]
yi = y_shuffled[i:i+minibatch_size]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(t)
theta = theta - eta * gradients
theta_path_mgd.append(theta)
theta
theta_path_bgd = np.array(theta_path_bgd)
theta_path_sgd = np.array(theta_path_sgd)
theta_path_mgd = np.array(theta_path_mgd)
plt.figure(figsize=(7,4))
plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic")
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch")
plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch")
plt.legend(loc="upper left", fontsize=16)
plt.xlabel(r"$\theta_0$", fontsize=20)
plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0)
plt.axis([2.5, 4.5, 2.3, 3.9])
save_fig("gradient_descent_paths_plot")
plt.show()
Explanation: Mini-batch gradient descent
End of explanation
import numpy as np
import numpy.random as rnd
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_data_plot")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_predictions_plot")
plt.show()
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)):
polybig_features = PolynomialFeatures(degree=degree, include_bias=False)
std_scaler = StandardScaler()
lin_reg = LinearRegression()
polynomial_regression = Pipeline([
("poly_features", polybig_features),
("std_scaler", std_scaler),
("lin_reg", lin_reg),
])
polynomial_regression.fit(X, y)
y_newbig = polynomial_regression.predict(X_new)
plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width)
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("high_degree_polynomials_plot")
plt.show()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train_predict, y_train[:m]))
val_errors.append(mean_squared_error(y_val_predict, y_val))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
plt.legend(loc="upper right", fontsize=14) # not shown in the book
plt.xlabel("Training set size", fontsize=14) # not shown
plt.ylabel("RMSE", fontsize=14) # not shown
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
plt.axis([0, 80, 0, 3]) # not shown in the book
save_fig("underfitting_learning_curves_plot") # not shown
plt.show() # not shown
from sklearn.pipeline import Pipeline
polynomial_regression = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("lin_reg", LinearRegression()),
])
plot_learning_curves(polynomial_regression, X, y)
plt.axis([0, 80, 0, 3]) # not shown
save_fig("learning_curves_plot") # not shown
plt.show() # not shown
Explanation: Polynomial regression
End of explanation
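An added sketch for choosing the polynomial degree by cross-validation instead of by eye; it reuses the polynomial_regression pipeline defined above, and GridSearchCV is the only extra import assumed.
from sklearn.model_selection import GridSearchCV

param_grid = {"poly_features__degree": [1, 2, 3, 10, 20]}
degree_search = GridSearchCV(polynomial_regression, param_grid, cv=5,
                             scoring="neg_mean_squared_error")
degree_search.fit(X, y)
print(degree_search.best_params_)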
from sklearn.linear_model import Ridge
np.random.seed(42)
m = 20
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5
X_new = np.linspace(0, 3, 100).reshape(100, 1)
def plot_model(model_class, polynomial, alphas, **model_kargs):
for alpha, style in zip(alphas, ("b-", "g--", "r:")):
model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression()
if polynomial:
model = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("std_scaler", StandardScaler()),
("regul_reg", model),
])
model.fit(X, y)
y_new_regul = model.predict(X_new)
lw = 2 if alpha > 0 else 1
plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha))
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left", fontsize=15)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 3, 0, 4])
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42)
save_fig("ridge_regression_plot")
plt.show()
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
sgd_reg = SGDRegressor(penalty="l2", random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.predict([[1.5]])
ridge_reg = Ridge(alpha=1, solver="sag", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
from sklearn.linear_model import Lasso
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42)
save_fig("lasso_regression_plot")
plt.show()
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42)
elastic_net.fit(X, y)
elastic_net.predict([[1.5]])
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)
X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10)
poly_scaler = Pipeline([
("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
("std_scaler", StandardScaler()),
])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
sgd_reg = SGDRegressor(n_iter=1,
penalty=None,
eta0=0.0005,
warm_start=True,
learning_rate="constant",
random_state=42)
n_epochs = 500
train_errors, val_errors = [], []
for epoch in range(n_epochs):
sgd_reg.fit(X_train_poly_scaled, y_train)
y_train_predict = sgd_reg.predict(X_train_poly_scaled)
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
train_errors.append(mean_squared_error(y_train_predict, y_train))
val_errors.append(mean_squared_error(y_val_predict, y_val))
best_epoch = np.argmin(val_errors)
best_val_rmse = np.sqrt(val_errors[best_epoch])
plt.annotate('Best model',
xy=(best_epoch, best_val_rmse),
xytext=(best_epoch, best_val_rmse + 1),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
)
best_val_rmse -= 0.03 # just to make the graph look better
plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2)
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set")
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set")
plt.legend(loc="upper right", fontsize=14)
plt.xlabel("Epoch", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
save_fig("early_stopping_plot")
plt.show()
from sklearn.base import clone
sgd_reg = SGDRegressor(n_iter=1, warm_start=True, penalty=None,
learning_rate="constant", eta0=0.0005, random_state=42)
minimum_val_error = float("inf")
best_epoch = None
best_model = None
for epoch in range(1000):
sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
val_error = mean_squared_error(y_val_predict, y_val)
if val_error < minimum_val_error:
minimum_val_error = val_error
best_epoch = epoch
best_model = clone(sgd_reg)
best_epoch, best_model
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5
# ignoring bias term
t1s = np.linspace(t1a, t1b, 500)
t2s = np.linspace(t2a, t2b, 500)
t1, t2 = np.meshgrid(t1s, t2s)
T = np.c_[t1.ravel(), t2.ravel()]
Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]])
yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]
J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape)
N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)
N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)
t_min_idx = np.unravel_index(np.argmin(J), J.shape)
t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]
t_init = np.array([[0.25], [-1]])
def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50):
path = [theta]
for iteration in range(n_iterations):
gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta
theta = theta - eta * gradients
path.append(theta)
return np.array(path)
plt.figure(figsize=(12, 8))
for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")):
JR = J + l1 * N1 + l2 * N2**2
tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape)
t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]
levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J)
levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR)
levelsN=np.linspace(0, np.max(N), 10)
path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)
path_JR = bgd_path(t_init, Xr, yr, l1, l2)
path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0)
plt.subplot(221 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9)
plt.contour(t1, t2, N, levels=levelsN)
plt.plot(path_J[:, 0], path_J[:, 1], "w-o")
plt.plot(path_N[:, 0], path_N[:, 1], "y-^")
plt.plot(t1_min, t2_min, "rs")
plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
plt.subplot(222 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)
plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o")
plt.plot(t1r_min, t2r_min, "rs")
plt.title(title, fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
for subplot in (221, 223):
plt.subplot(subplot)
plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0)
for subplot in (223, 224):
plt.subplot(subplot)
plt.xlabel(r"$\theta_1$", fontsize=20)
save_fig("lasso_vs_ridge_plot")
plt.show()
Explanation: Regularized models
End of explanation
t = np.linspace(-10, 10, 100)
sig = 1 / (1 + np.exp(-t))
plt.figure(figsize=(9, 3))
plt.plot([-10, 10], [0, 0], "k-")
plt.plot([-10, 10], [0.5, 0.5], "k:")
plt.plot([-10, 10], [1, 1], "k:")
plt.plot([0, 0], [-1.1, 1.1], "k-")
plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$")
plt.xlabel("t")
plt.legend(loc="upper left", fontsize=20)
plt.axis([-10, 10, -0.1, 1.1])
save_fig("logistic_function_plot")
plt.show()
from sklearn import datasets
iris = datasets.load_iris()
list(iris.keys())
print(iris.DESCR)
X = iris["data"][:, 3:] # petal width
y = (iris["target"] == 2).astype(np.int) # 1 if Iris-Virginica, else 0
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(random_state=42)
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica")
Explanation: Logistic regression
End of explanation
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
decision_boundary = X_new[y_proba[:, 1] >= 0.5][0]
plt.figure(figsize=(8, 3))
plt.plot(X[y==0], y[y==0], "bs")
plt.plot(X[y==1], y[y==1], "g^")
plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica")
plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center")
plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b')
plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g')
plt.xlabel("Petal width (cm)", fontsize=14)
plt.ylabel("Probability", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 3, -0.02, 1.02])
save_fig("logistic_regression_plot")
plt.show()
decision_boundary
log_reg.predict([[1.7], [1.5]])
from sklearn.linear_model import LogisticRegression
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.int)
log_reg = LogisticRegression(C=10**10, random_state=42)
log_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(2.9, 7, 500).reshape(-1, 1),
np.linspace(0.8, 2.7, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs")
plt.plot(X[y==1, 0], X[y==1, 1], "g^")
zz = y_proba[:, 1].reshape(x0.shape)
contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)
left_right = np.array([2.9, 7])
boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.clabel(contour, inline=1, fontsize=12)
plt.plot(left_right, boundary, "k--", linewidth=3)
plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center")
plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.axis([2.9, 7, 0.8, 2.7])
save_fig("logistic_regression_contour_plot")
plt.show()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42)
softmax_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1),
np.linspace(0, 3.5, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = softmax_reg.predict_proba(X_new)
y_predict = softmax_reg.predict(X_new)
zz1 = y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica")
plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor")
plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa")
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 7, 0, 3.5])
save_fig("softmax_regression_contour_plot")
plt.show()
softmax_reg.predict([[5, 2]])
softmax_reg.predict_proba([[5, 2]])
Explanation: The figure in the book is actually a bit fancier:
End of explanation
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
Explanation: Exercise solutions
1. to 11.
See appendix A.
12. Batch Gradient Descent with early stopping for Softmax Regression
(without using Scikit-Learn)
Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier.
End of explanation
X_with_bias = np.c_[np.ones([len(X), 1]), X]
Explanation: We need to add the bias term for every instance ($x_0 = 1$):
End of explanation
np.random.seed(2042)
Explanation: And let's set the random seed so the output of this exercise solution is reproducible:
End of explanation
test_ratio = 0.2
validation_ratio = 0.2
total_size = len(X_with_bias)
test_size = int(total_size * test_ratio)
validation_size = int(total_size * validation_ratio)
train_size = total_size - test_size - validation_size
rnd_indices = np.random.permutation(total_size)
X_train = X_with_bias[rnd_indices[:train_size]]
y_train = y[rnd_indices[:train_size]]
X_valid = X_with_bias[rnd_indices[train_size:-test_size]]
y_valid = y[rnd_indices[train_size:-test_size]]
X_test = X_with_bias[rnd_indices[-test_size:]]
y_test = y[rnd_indices[-test_size:]]
Explanation: The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's train_test_split() function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation:
End of explanation
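For comparison, here is a minimal sketch of the same three-way split done with Scikit-Learn's train_test_split (two chained calls). The _alt variable names are placeholders for this illustration only, and the exact rows assigned to each set will differ from the manual split above because train_test_split shuffles with its own random state.
from sklearn.model_selection import train_test_split
# Illustrative alternative to the manual split above (not used further below).
X_tmp, X_test_alt, y_tmp, y_test_alt = train_test_split(
    X_with_bias, y, test_size=0.2, random_state=2042)
X_train_alt, X_valid_alt, y_train_alt, y_valid_alt = train_test_split(
    X_tmp, y_tmp, test_size=0.25, random_state=2042)  # 0.25 x 0.8 = 0.2 of the total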
def to_one_hot(y):
n_classes = y.max() + 1
m = len(y)
Y_one_hot = np.zeros((m, n_classes))
Y_one_hot[np.arange(m), y] = 1
return Y_one_hot
Explanation: The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance:
End of explanation
y_train[:10]
to_one_hot(y_train[:10])
Explanation: Let's test this function on the first 10 instances:
End of explanation
Y_train_one_hot = to_one_hot(y_train)
Y_valid_one_hot = to_one_hot(y_valid)
Y_test_one_hot = to_one_hot(y_test)
Explanation: Looks good, so let's create the target class probabilities matrix for the training set and the test set:
End of explanation
def softmax(logits):
exps = np.exp(logits)
exp_sums = np.sum(exps, axis=1, keepdims=True)
return exps / exp_sums
Explanation: Now let's implement the Softmax function. Recall that it is defined by the following equation:
$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$
End of explanation
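As a quick sanity check of the softmax implementation above, each row of its output should sum to 1 and the largest score should receive the largest probability. The small array below is purely illustrative.
# Sanity check on an illustrative array: rows of the output should sum to 1.
demo_logits = np.array([[1.0, 2.0, 3.0],
                        [1.0, 1.0, 1.0]])
demo_proba = softmax(demo_logits)
print(demo_proba)
print(demo_proba.sum(axis=1))  # expected: [1. 1.]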
n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term)
n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes)
Explanation: We are almost ready to start training. Let's define the number of inputs and outputs:
End of explanation
eta = 0.01
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
error = Y_proba - Y_train_one_hot
if iteration % 500 == 0:
print(iteration, loss)
gradients = 1/m * X_train.T.dot(error)
Theta = Theta - eta * gradients
Explanation: Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood.
So the equations we will need are the cost function:
$J(\mathbf{\Theta}) =
- \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$
And the equation for the gradients:
$\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$
Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$. So we will add a tiny value $\epsilon$ to $\log\left(\hat{p}_k^{(i)}\right)$ to avoid getting nan values.
End of explanation
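One concrete way to apply the shape-checking advice above is to evaluate each term once, using the Theta trained in the previous cell, and print the dimensions to confirm they line up.
# Shape check of each term in the gradient computation (reuses the trained Theta).
logits_check = X_train.dot(Theta)
Y_proba_check = softmax(logits_check)
error_check = Y_proba_check - Y_train_one_hot
gradients_check = 1/m * X_train.T.dot(error_check)
print("X_train:   ", X_train.shape)          # (m, n_inputs)
print("Theta:     ", Theta.shape)            # (n_inputs, n_outputs)
print("Y_proba:   ", Y_proba_check.shape)    # (m, n_outputs)
print("error:     ", error_check.shape)      # (m, n_outputs)
print("gradients: ", gradients_check.shape)  # (n_inputs, n_outputs)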
Theta
Explanation: And that's it! The Softmax model is trained. Let's look at the model parameters:
End of explanation
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
Explanation: Let's make predictions for the validation set and check the accuracy score:
End of explanation
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1 # regularization hyperparameter
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
error = Y_proba - Y_train_one_hot
if iteration % 500 == 0:
print(iteration, loss)
gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]]
Theta = Theta - eta * gradients
Explanation: Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of Theta since this corresponds to the bias term). Also, let's try increasing the learning rate eta.
End of explanation
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
Explanation: Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out:
End of explanation
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1 # regularization hyperparameter
best_loss = np.infty
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
error = Y_proba - Y_train_one_hot
gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_inputs]), alpha * Theta[1:]]
Theta = Theta - eta * gradients
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
if iteration % 500 == 0:
print(iteration, loss)
if loss < best_loss:
best_loss = loss
else:
print(iteration - 1, best_loss)
print(iteration, loss, "early stopping!")
break
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
Explanation: Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant.
Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing.
End of explanation
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1),
np.linspace(0, 3.5, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new]
logits = X_new_with_bias.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
zz1 = Y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica")
plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor")
plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa")
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap, linewidth=5)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 7, 0, 3.5])
plt.show()
Explanation: Still perfect, but faster.
Now let's plot the model's predictions on the whole dataset:
End of explanation
logits = X_test.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_test)
accuracy_score
Explanation: And now let's measure the final model's accuracy on the test set:
End of explanation |
9,092 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autoencoder + UMAP
This notebook extends the last notebook to train the embedding jointly on the reconstruction loss, and UMAP loss, resulting in slightly better reconstructions, and a slightly modified UMAP embedding.
load data
Step1: define the encoder network
Step2: create parametric umap model
Step3: plot reconstructions
Step4: plot results
Step5: plotting loss | Python Code:
from tensorflow.keras.datasets import mnist
(train_images, Y_train), (test_images, Y_test) = mnist.load_data()
train_images = train_images.reshape((train_images.shape[0], -1))/255.
test_images = test_images.reshape((test_images.shape[0], -1))/255.
Explanation: Autoencoder + UMAP
This notebook extends the last notebook to train the embedding jointly on the reconstruction loss, and UMAP loss, resulting in slightly better reconstructions, and a slightly modified UMAP embedding.
load data
End of explanation
import tensorflow as tf
dims = (28,28, 1)
n_components = 2
encoder = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=dims),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation="relu", padding="same"
),
tf.keras.layers.Conv2D(
filters=128, kernel_size=3, strides=(2, 2), activation="relu", padding="same"
),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=512, activation="relu"),
tf.keras.layers.Dense(units=512, activation="relu"),
tf.keras.layers.Dense(units=n_components),
])
encoder.summary()
decoder = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(n_components)),
tf.keras.layers.Dense(units=512, activation="relu"),
tf.keras.layers.Dense(units=7 * 7 * 256, activation="relu"),
tf.keras.layers.Reshape(target_shape=(7, 7, 256)),
tf.keras.layers.Conv2DTranspose(
filters=128, kernel_size=3, strides=(2, 2), padding="SAME", activation="relu"
),
tf.keras.layers.Conv2DTranspose(
filters=64, kernel_size=3, strides=(2, 2), padding="SAME", activation="relu"
),
tf.keras.layers.Conv2DTranspose(
filters=1, kernel_size=3, strides=(1, 1), padding="SAME", activation="sigmoid"
)
])
decoder.summary()
Explanation: define the encoder network
End of explanation
from umap.parametric_umap import ParametricUMAP
embedder = ParametricUMAP(
encoder=encoder,
decoder=decoder,
dims=dims,
n_training_epochs=1,
n_components=n_components,
parametric_reconstruction= True,
autoencoder_loss = True,
reconstruction_validation=test_images,
verbose=True,
)
embedding = embedder.fit_transform(train_images)
Explanation: create parametric umap model
End of explanation
import numpy as np
import matplotlib.pyplot as plt  # needed for the plotting calls below
test_images_recon = embedder.inverse_transform(embedder.transform(test_images))
nex = 10
fig, axs = plt.subplots(ncols=10, nrows=2, figsize=(nex, 2))
for i in range(nex):
axs[0, i].matshow(np.squeeze(test_images[i].reshape(28, 28, 1)), cmap=plt.cm.Greys)
axs[1, i].matshow(
tf.nn.sigmoid(np.squeeze(test_images_recon[i].reshape(28, 28, 1))),
cmap=plt.cm.Greys,
)
for ax in axs.flatten():
ax.axis("off")
Explanation: plot reconstructions
End of explanation
embedding = embedder.embedding_
import matplotlib.pyplot as plt
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
embedding[:, 0],
embedding[:, 1],
c=Y_train.astype(int),
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("UMAP in Tensorflow embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
Explanation: plot results
End of explanation
embedder._history.keys()
fig, axs = plt.subplots(ncols=2, figsize=(10,5))
ax = axs[0]
ax.plot(embedder._history['loss'])
ax.set_ylabel('Cross Entropy')
ax.set_xlabel('Epoch')
ax = axs[1]
ax.plot(embedder._history['reconstruction_loss'], label='train')
ax.plot(embedder._history['val_reconstruction_loss'], label='valid')
ax.legend()
ax.set_ylabel('Cross Entropy')
ax.set_xlabel('Epoch')
Explanation: plotting loss
End of explanation |
9,093 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image embeddings in BigQuery for image similarity and clustering tasks
This notebook shows how to do use a pre-trained embedding as a vector representation of an image in Google Cloud Storage.
Given this embedding, we can load it as a BQ-ML model and then carry out document similarity or clustering.
This notebook accompanies the following Medium blog post
Step1: Embedding model for images
We're going to use the EfficientNets model trained on ImageNet. It is compact and trained on a large variety of real-world images.
Step2: The model on TensorFlow Hub expects images of a certain size, and provided as normalized arrays.
So, we'll define a serving function that carries out the necessary reading and preprocessing of the images.
Step3: Loading model into BigQuery
Since we saved the model in SavedModel format into GCS it is straightforward to load it into BigQuery
Let's load the model into a BigQuery dataset named advdata (create it if necessary)
Step4: From the BigQuery web console, click on "schema" tab for the newly loaded model. You will see that the input is a string called filename and the output is called output_0. The model is computationally expensive. | Python Code:
BUCKET='ai-analytics-solutions-kfpdemo' # CHANGE to a bucket you own
Explanation: Image embeddings in BigQuery for image similarity and clustering tasks
This notebook shows how to use a pre-trained embedding as a vector representation of an image in Google Cloud Storage.
Given this embedding, we can load it as a BQ-ML model and then carry out document similarity or clustering.
This notebook accompanies the following Medium blog post:
End of explanation
import tensorflow as tf
import tensorflow_hub as tfhub
import os
model = tf.keras.Sequential()
model.add(tf.keras.Input(shape=[None,None,3]))
model.add(tfhub.KerasLayer("https://tfhub.dev/google/efficientnet/b4/feature-vector/1", name='image_embeddings'))
model.summary()
Explanation: Embedding model for images
We're going to use the EfficientNets model trained on ImageNet. It is compact and trained on a large variety of real-world images.
End of explanation
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.string)])
def serve(filename):
img = tf.io.read_file(filename[0])
img = tf.io.decode_image(img, channels=3)
img = tf.cast(img, tf.float32) / 255.0
#img = tf.image.resize(img, [380, 380])
return model(img)
path='gs://{}/effnet_image_embedding'.format(BUCKET)
tf.saved_model.save(model, path, signatures={'serving_default': serve})
!saved_model_cli show --all --dir gs://$BUCKET/effnet_image_embedding
Explanation: The model on TensorFlow Hub expects images of a certain size, provided as normalized arrays.
So, we'll define a serving function that carries out the necessary reading and preprocessing of the images.
End of explanation
%%bigquery
CREATE OR REPLACE MODEL advdata.effnet_image_embed
OPTIONS(model_type='tensorflow', model_path='gs://ai-analytics-solutions-kfpdemo/effnet_image_embedding/*')
Explanation: Loading model into BigQuery
Since we saved the model in SavedModel format into GCS it is straightforward to load it into BigQuery
Let's load the model into a BigQuery dataset named advdata (create it if necessary)
End of explanation
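The CREATE MODEL statement above assumes the advdata dataset already exists. If it does not, one minimal way to create it first (a sketch using the Python client and the project's default location) is:
# Create the advdata dataset if it doesn't exist yet (sketch; uses the
# default project configured in this environment).
from google.cloud import bigquery
bq_client = bigquery.Client()
bq_client.create_dataset("advdata", exists_ok=True)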
%%bigquery
SELECT output_0 FROM
ML.PREDICT(MODEL advdata.effnet_image_embed,(
SELECT 'gs://gcs-public-data--met/634108/0.jpg' AS filename))
Explanation: From the BigQuery web console, click on "schema" tab for the newly loaded model. You will see that the input is a string called filename and the output is called output_0. The model is computationally expensive.
End of explanation |
9,094 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Forte Tutorial 1.02
Step2: First we will run psi4 using the function forte.utils.psi4_scf
Step3: Reading options
Step4: Setting the molecular orbital spaces
Step5: Building a ForteIntegral object to read integrals from psi4
In Forte there are two classes responsible for handling integrals
Step6: Creating determinants
Objects that represent determinants are represented by the class Determinant. Here we create an empty determinant and print it by invoking the str function. This function prints the entire determinant (which has fixed size), and so if we are working with only a few orbitals we can specify how many we want to print
Step7: We modify the determinant by applying to it a creation operator $\hat{a}^\dagger_1$ that adds one electron in the spin orbital $\phi_{i,\alpha}$ using the function (create_alfa_bit). This function returns the corresponding sign
Step8: Here we create an electron in orbital 2
Step9: Similarly, we can remove (annihilate) an electron with the command destroy_alfa_bit (destroy_beta_bit for the beta case)
Step10: Creating the HF determinant
Next we do some bookeeping to find out the occupation of the Hartree-Fock determinant using the occupation returned to us by psi4
Step11: We can now compute the energy of the determinant as $\langle \Phi | \hat{H} | \Phi \rangle$ using the slater_rules function in the ActiveSpaceIntegrals class
Step12: Creating the FCI determinant basis
Next we enumerate the FCI determinants. Here we use symmetry information and generate only those determinants that have the desired symmetry. We do it in a wasteful way, because we simply generate all the combinations of alpha/beta electrons and then check for the symmetry of the determinant.
Step13: Diagonalize the Hamiltonian in the FCI space
In the last step, we diagonalize the Hamiltonian in the FCI determinant basis. We use the function slater_rules from the ActiveSpaceIntegrals class, which implements Slater rules to compute the matrix elements $\langle \Phi_I | \hat{H} | \Phi_J \rangle$. | Python Code:
import psi4
import forte
import forte.utils
Explanation: Forte Tutorial 1.02: Forte's determinant class
In this tutorial we are going to explore how to create a simple FCI code using forte's Python API.
Import modules
Here we import forte.utils to access functions to directly run an SCF computation in psi4.
End of explanation
# setup xyz geometry
geom = """
O
H 1 1.0
H 1 1.0 2 180.0
"""
(E_scf, wfn) = forte.utils.psi4_scf(geom,basis='sto-3g',reference='rhf')
print(f'SCF Energy = {E_scf}')
Explanation: First we will run psi4 using the function forte.utils.psi4_scf
End of explanation
from forte import forte_options
options = psi4.core.get_options() # options = psi4 option object
options.set_current_module('FORTE') # read options labeled 'FORTE'
forte_options.get_options_from_psi4(options)
Explanation: Reading options
End of explanation
# Setup forte and prepare the active space integral class
mos_spaces = {'FROZEN_DOCC' : [1,0,0,0,0,0,0,0], # freeze the O 1s orbital
'RESTRICTED_DOCC' : [1,0,0,0,0,1,0,0]}
nmopi = wfn.nmopi()
point_group = wfn.molecule().point_group().symbol()
mo_space_info = forte.make_mo_space_info_from_map(nmopi,point_group,mos_spaces,[])
mo_space_info.size('ACTIVE')
Explanation: Setting the molecular orbital spaces
End of explanation
ints = forte.make_ints_from_psi4(wfn, forte_options, mo_space_info)
print(f'Number of molecular orbitals: {ints.nmo()}')
print(f'Number of correlated molecular orbitals: {ints.ncmo()}')
# the space that defines the active orbitals. We select only the 'ACTIVE' part
active_space = 'ACTIVE'
# the space(s) with non-active doubly occupied orbitals
core_spaces = ['RESTRICTED_DOCC']
as_ints = forte.make_active_space_ints(mo_space_info, ints, active_space, core_spaces)
print(f'Frozen-core energy = {as_ints.frozen_core_energy()}')
print(f'Nuclear repulsion energy = {as_ints.nuclear_repulsion_energy()}')
print(f'Scalar energy = {as_ints.scalar_energy()}')
Explanation: Building a ForteIntegral object to read integrals from psi4
In Forte there are two classes responsible for handling integrals:
- ForteIntegral: reads the integrals from psi4 and stores them in varios formats (conventional, density fitting, Cholesky, ...).
- ActiveSpaceIntegrals: stores a copy of all integrals and it is used by active space methods. This class only stores a subset of the integrals and includes an effective potential due to non-active doubly occupied orbitals.
We will first build the ForteIntegral object via the function make_forte_integrals
End of explanation
d = forte.Determinant()
print(f'Determinant: {d}')
nact = mo_space_info.size('ACTIVE')
print(f'Determinant: {d.str(nact)}')
Explanation: Creating determinants
Determinants are represented by the Determinant class. Here we create an empty determinant and print it by invoking the str function. This function prints the entire determinant (which has fixed size), and so if we are working with only a few orbitals we can specify how many we want to print
End of explanation
sign = d.create_alfa_bit(1)
print(f'Determinant: {d.str(nact)}, sign = {sign}')
Explanation: We modify the determinant by applying to it a creation operator $\hat{a}^\dagger_1$ that adds one electron in the spin orbital $\phi_{i,\alpha}$ using the function (create_alfa_bit). This function returns the corresponding sign
End of explanation
sign = d.create_alfa_bit(2)
print(f'Determinant: {d.str(nact)}, sign = {sign}')
Explanation: Here we create an electron in orbital 2
End of explanation
sign = d.destroy_alfa_bit(2)
print(f'Determinant: {d.str(nact)}, sign = {sign}')
Explanation: Similarly, we can remove (annihilate) an electron with the command destroy_alfa_bit (destroy_beta_bit for the beta case)
End of explanation
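The beta-spin case works in exactly the same way; a short illustration using set_beta_bit (also used further below) and the destroy_beta_bit method mentioned above:
# Same operations for the beta spin case
d.set_beta_bit(1, True)
print(f'Determinant: {d.str(nact)}')
sign = d.destroy_beta_bit(1)
print(f'Determinant: {d.str(nact)}, sign = {sign}')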
nirrep = mo_space_info.nirrep()
nactpi = mo_space_info.dimension('ACTIVE').to_tuple()
# compute the number of alpha electrons per irrep
nact_aelpi = wfn.nalphapi() - mo_space_info.dimension('FROZEN_DOCC') - mo_space_info.dimension('RESTRICTED_DOCC')
nact_aelpi = nact_aelpi.to_tuple()
# compute the number of beta electrons per irrep
nact_belpi = wfn.nbetapi() - mo_space_info.dimension('FROZEN_DOCC') - mo_space_info.dimension('RESTRICTED_DOCC')
nact_belpi = nact_belpi.to_tuple()
print(f'Number of alpha electrons per irrep: {nact_aelpi}')
print(f'Number of beta electrons per irrep: {nact_belpi}')
print(f'Number of active orbitals per irrep: {nactpi}')
ref = forte.Determinant()
# we loop over each irrep and fill the occupied orbitals
irrep_start = [sum(nactpi[:h]) for h in range(nirrep)]
for h in range(nirrep):
for i in range(nact_aelpi[h]): ref.set_alfa_bit(irrep_start[h] + i, True)
for i in range(nact_belpi[h]): ref.set_beta_bit(irrep_start[h] + i, True)
print(f'Reference determinant: {ref.str(nact)}')
Explanation: Creating the HF determinant
Next we do some bookkeeping to find out the occupation of the Hartree-Fock determinant using the occupation returned to us by psi4
End of explanation
as_ints.slater_rules(ref,ref) + as_ints.scalar_energy() + as_ints.nuclear_repulsion_energy()
Explanation: We can now compute the energy of the determinant as $\langle \Phi | \hat{H} | \Phi \rangle$ using the slater_rules function in the ActiveSpaceIntegrals class
End of explanation
import itertools
import functools
dets = []
orbs = range(nact)
# get the symmetry of each active orbital
act_sym = mo_space_info.symmetry('ACTIVE')
nact_ael = sum(nact_aelpi)
nact_bel = sum(nact_belpi)
print(f'Number of alpha electrons: {nact_ael}')
print(f'Number of beta electrons: {nact_bel}')
# specify the target symmetry
sym = 0
# generate all the alpha strings
for astr in itertools.combinations(orbs, nact_ael):
# compute the symmetry of the alpha string
asym = functools.reduce(lambda i, j: act_sym[i] ^ act_sym[j], astr)
# generate all the beta strings
for bstr in itertools.combinations(orbs, nact_bel):
# compute the symmetry of the beta string
bsym = functools.reduce(lambda i, j: act_sym[i] ^ act_sym[j], bstr)
# if the determinant has the correct symmetry save it
if (asym ^ bsym) == sym:
d = forte.Determinant()
for i in astr: d.set_alfa_bit(i, True)
for i in bstr: d.set_beta_bit(i, True)
dets.append(d)
print(f'==> List of FCI determinants <==')
for d in dets:
print(f'{d.str(4)}')
Explanation: Creating the FCI determinant basis
Next we enumerate the FCI determinants. Here we use symmetry information and generate only those determinants that have the desired symmetry. We do it in a wasteful way, because we simply generate all the combinations of alpha/beta electrons and then check for the symmetry of the determinant.
End of explanation
import numpy as np
ndets = len(dets)
H = np.ndarray((ndets,ndets))
for I, detI in enumerate(dets):
for J, detJ in enumerate(dets):
H[I][J] = as_ints.slater_rules(detI,detJ)
# or we could use the fancier loop below, which avoids computing half of the matrix elements
# for I, detI in enumerate(dets):
# H[I][I] = as_ints.slater_rules(detI,detI) # diagonal term
# for J, detJ in enumerate(dets[:I]):
# HIJ = as_ints.slater_rules(detI,detJ) # off-diagonal term (only upper half)
# H[I][J] = H[J][I] = HIJ
print(H)
evals, evecs = np.linalg.eigh(H)
psi4_fci = -74.846380133240530
print(f'FCI Energy = {evals[0] + as_ints.scalar_energy() + as_ints.nuclear_repulsion_energy()}')
print(f'FCI Energy Error = {evals[0] + as_ints.scalar_energy() + as_ints.nuclear_repulsion_energy()- psi4_fci}')
index_hf = dets.index(ref)
print(f'Index of the HF determinant in the FCI vector {index_hf}')
Explanation: Diagonalize the Hamiltonian in the FCI space
In the last step, we diagonalize the Hamiltonian in the FCI determinant basis. We use the function slater_rules from the ActiveSpaceIntegrals class, which implements Slater rules to compute the matrix elements $\langle \Phi_I | \hat{H} | \Phi_J \rangle$.
End of explanation |
9,095 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introducing the Keras Sequential API
Learning Objectives
1. Build a DNN model using the Keras Sequential API
1. Learn how to use feature columns in a Keras model
1. Learn how to train a model with Keras
1. Learn how to save/load, and deploy a Keras model on GCP
1. Learn how to deploy and make predictions with the Keras model
Introduction
The Keras sequential API allows you to create Tensorflow models layer-by-layer. This is useful for building most kinds of machine learning models but it does not allow you to create models that share layers, re-use layers or have multiple inputs or outputs.
In this lab, we'll see how to build a simple deep neural network model using the Keras sequential api and feature columns. Once we have trained our model, we will deploy it using AI Platform and see how to call our model for online prediciton.
Start by importing the necessary libraries for this lab.
Step1: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
Step2: Use tf.data to read the CSV files
We wrote these functions for reading data from the csv files above in the previous notebook.
Step3: Build a simple keras DNN model
We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want to a sneak peak browse the official TensorFlow feature columns guide.
In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column()
We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop.
Step4: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
Step5: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments
Step6: Train the model
To train your model, Keras provides two functions that can be used
Step7: There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training.
Step8: High-level model evaluation
Once we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
Step9: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
Step10: Making predictions with our model
To make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted.
Step11: Export and deploy our model
Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
Step12: Deploy our model to AI Platform
Finally, we will deploy our trained model to AI Platform and see how we can make online predicitons. | Python Code:
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
%matplotlib inline
Explanation: Introducing the Keras Sequential API
Learning Objectives
1. Build a DNN model using the Keras Sequential API
1. Learn how to use feature columns in a Keras model
1. Learn how to train a model with Keras
1. Learn how to save/load, and deploy a Keras model on GCP
1. Learn how to deploy and make predictions with the Keras model
Introduction
The Keras sequential API allows you to create Tensorflow models layer-by-layer. This is useful for building most kinds of machine learning models but it does not allow you to create models that share layers, re-use layers or have multiple inputs or outputs.
In this lab, we'll see how to build a simple deep neural network model using the Keras sequential api and feature columns. Once we have trained our model, we will deploy it using AI Platform and see how to call our model for online prediction.
Start by importing the necessary libraries for this lab.
End of explanation
!ls -l ../data/*.csv
!head ../data/taxi*.csv
Explanation: Load raw data
We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into ../data.
End of explanation
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
Explanation: Use tf.data to read the CSV files
We wrote these functions for reading data from the csv files above in the previous notebook.
End of explanation
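A quick way to verify the input pipeline is to pull one small batch and inspect the (features, label) structure it yields; this reuses the taxi-train files referenced later in this notebook.
# Peek at a single small batch to check the (features, label) structure.
tempds = create_dataset("../data/taxi-train*", batch_size=2)
for feature_batch, label_batch in tempds.take(1):
    print(feature_batch.keys())
    print(label_batch)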
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
# Create input layer of feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname) for colname in INPUT_COLS
}
Explanation: Build a simple keras DNN model
We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow feature columns guide.
In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use tf.feature_column.numeric_column()
We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop.
End of explanation
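For illustration only (none of this is used in this lab's model), the kinds of feature engineering mentioned above map onto other tf.feature_column constructors; the bucket boundaries below are hypothetical values chosen just for the sketch.
# Illustration only -- not used further in this lab.
lat_buckets = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column("pickup_latitude"),
    boundaries=[40.5, 40.7, 40.9])  # hypothetical boundaries
lon_buckets = tf.feature_column.bucketized_column(
    tf.feature_column.numeric_column("pickup_longitude"),
    boundaries=[-74.1, -73.9, -73.7])  # hypothetical boundaries
loc_cross = tf.feature_column.crossed_column(
    [lat_buckets, lon_buckets], hash_bucket_size=100)
loc_embedding = tf.feature_column.embedding_column(loc_cross, dimension=4)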
# Build a keras DNN model using Sequential API
model = Sequential(
[
DenseFeatures(feature_columns=feature_columns.values()),
Dense(units=32, activation="relu", name="h1"),
Dense(units=8, activation="relu", name="h2"),
Dense(units=1, activation="linear", name="output"),
]
)
Explanation: Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model.
End of explanation
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
Explanation: Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:
An optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class.
A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the Losses class (such as categorical_crossentropy or mse), or it can be a custom objective function.
A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.
We will add an additional custom metric called rmse to our list of metrics which will return the root mean square error.
End of explanation
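Because the optimizer argument also accepts an Optimizer instance (as noted above), an equivalent compile call that sets the learning rate explicitly would look like this sketch; 0.001 is simply Adam's default value.
# Equivalent compile call passing an Optimizer instance instead of the "adam" string.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="mse",
    metrics=[rmse, "mse"],
)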
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=TRAIN_BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=1000, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
Explanation: Train the model
To train your model, Keras provides two functions that can be used:
1. .fit() for training a model for a fixed number of epochs (iterations on a dataset).
2. .train_on_batch() runs a single gradient update on a single batch of data.
The .fit() function works for various formats of data such as NumPy arrays, lists of Tensors, tf.data datasets and Python generators. The .train_on_batch() method is for more fine-grained control over training and accepts only a single batch of data.
Our create_dataset function above generates batches of training examples, so we can use .fit.
We start by setting up some parameters for our training job and create the data generators for the training and validation data.
We refer you to the blog post ML Design Pattern #3: Virtual Epochs for further details on why we express the training in terms of NUM_TRAIN_EXAMPLES and NUM_EVALS and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
End of explanation
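For reference, the finer-grained .train_on_batch() route mentioned above would look roughly like the sketch below. Each call applies one real gradient update, so this performs a few (harmless) extra updates before the main .fit() call that follows.
# Sketch of the .train_on_batch() alternative: one manual gradient update per batch.
for feature_batch, label_batch in trainds.take(3):
    batch_metrics = model.train_on_batch(feature_batch, label_batch)
print(batch_metrics)  # [loss, rmse, mse] for the last batch seen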
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(
x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)],
)
Explanation: There are various arguments you can set when calling the .fit method. Here x specifies the input data which in our case is a tf.data dataset returning a tuple of (inputs, targets). The steps_per_epoch parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the callback argument we specify a Tensorboard callback so we can inspect Tensorboard after training.
End of explanation
model.summary()
Explanation: High-level model evaluation
Once we've run data through the model, we can call .summary() on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
End of explanation
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ["loss", "val_loss"]
pd.DataFrame(history.history)[LOSS_COLS].plot()
Explanation: Running .fit (or .fit_generator) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
End of explanation
model.predict(
x={
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
},
steps=1,
)
Explanation: Making predictions with our model
To make predictions with our trained model, we can call the predict method, passing to it a dictionary of values. The steps parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set steps=1 (setting steps=None would also work). Note, however, that if x is a tf.data dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted.
End of explanation
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
Explanation: Export and deploy our model
Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
End of explanation
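One quick way to confirm the export is usable, independently of the CLI inspection above, is to load the SavedModel back into Python and look at its default serving signature; a minimal sketch:
# Reload the exported SavedModel and inspect its serving signature.
loaded_model = tf.saved_model.load(EXPORT_PATH)
serving_fn = loaded_model.signatures["serving_default"]
print(serving_fn.structured_input_signature)
print(serving_fn.structured_outputs)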
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-east1"
MODEL_NAME = f"taxifare_{TIMESTAMP}"
VERSION_NAME = "dnn"
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["MODEL_NAME"] = MODEL_NAME
os.environ["VERSION_NAME"] = VERSION_NAME
%%bash
gcloud config set project $PROJECT
gcloud config set ai_platform/region $REGION
%%bash
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
%%bash
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --region=$REGION $MODEL_NAME
!echo "Creating $MODEL_NAME:$VERSION_NAME"
!gcloud ai-platform versions create $VERSION_NAME \
--model=$MODEL_NAME \
--framework=tensorflow \
--python-version=3.7 \
--runtime-version=2.3 \
--origin=$EXPORT_PATH \
--staging-bucket=gs://$BUCKET
%%writefile input.json
{"pickup_longitude": -73.982683, "pickup_latitude": 40.742104,"dropoff_longitude": -73.983766,"dropoff_latitude": 40.755174,"passenger_count": 3.0}
%%bash
gcloud ai-platform predict \
--model $MODEL_NAME \
--json-instances input.json \
--version $VERSION_NAME
Explanation: Deploy our model to AI Platform
Finally, we will deploy our trained model to AI Platform and see how we can make online predictions.
End of explanation |
9,096 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1> Machine Learning using tf.estimator </h1>
In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
Step1: Read data created in the previous chapter.
Step2: <h2> Input function to read from Pandas Dataframe into tf.constant </h2>
Step3: Create feature columns for estimator
Step4: <h3> Linear Regression with tf.Estimator framework </h3>
Step5: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
Step6: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
Step7: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
<h3> Deep Neural Network regression </h3>
Step13: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model.
<h2> Benchmark dataset </h2>
Let's do this on the benchmark dataset. | Python Code:
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.6
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
Explanation: <h1> Machine Learning using tf.estimator </h1>
In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
End of explanation
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
Explanation: Read data created in the previous chapter.
End of explanation
def make_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x = df,
y = df[LABEL],
batch_size = 128,
num_epochs = num_epochs,
shuffle = True,
queue_capacity = 1000,
num_threads = 1
)
Explanation: <h2> Input function to read from Pandas Dataframe into tf.constant </h2>
End of explanation
def make_feature_cols():
input_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]
return input_columns
Explanation: Create feature columns for estimator
End of explanation
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 10))
Explanation: <h3> Linear Regression with tf.Estimator framework </h3>
End of explanation
def print_rmse(model, name, df):
metrics = model.evaluate(input_fn = make_input_fn(df, 1))
print('RMSE on {} dataset = {}'.format(name, np.sqrt(metrics['average_loss'])))
print_rmse(model, 'validation', df_valid)
Explanation: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
End of explanation
import itertools
# Read saved model and use it for prediction
model = tf.estimator.LinearRegressor(
feature_columns = make_feature_cols(), model_dir = OUTDIR)
preds_iter = model.predict(input_fn = make_input_fn(df_valid, 1))
print([pred['predictions'][0] for pred in list(itertools.islice(preds_iter, 5))])
Explanation: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
End of explanation
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
model = tf.estimator.DNNRegressor(hidden_units = [32, 8, 2],
feature_columns = make_feature_cols(), model_dir = OUTDIR)
model.train(input_fn = make_input_fn(df_train, num_epochs = 100));
print_rmse(model, 'validation', df_valid)
Explanation: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
<h3> Deep Neural Network regression </h3>
End of explanation
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
Creates a query with the proper splits.
Args:
phase: int, 1=train, 2=valid.
EVERY_N: int, take an example EVERY_N rows.
Returns:
Query string with the proper splits.
base_query =
WITH daynames AS
(SELECT ['Sun', 'Mon', 'Tues', 'Wed', 'Thurs', 'Fri', 'Sat'] AS daysofweek)
SELECT
(tolls_amount + fare_amount) AS fare_amount,
daysofweek[ORDINAL(EXTRACT(DAYOFWEEK FROM pickup_datetime))] AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count AS passengers,
'notneeded' AS key
FROM
`nyc-tlc.yellow.trips`, daynames
WHERE
trip_distance > 0 AND fare_amount > 0
if EVERY_N is None:
if phase < 2:
# training
query = {0} AND ABS(MOD(FARM_FINGERPRINT(CAST
(pickup_datetime AS STRING), 4)) < 2.format(base_query)
else:
query = {0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING), 4)) = {1}.format(base_query, phase)
else:
query = {0} AND ABS(MOD(FARM_FINGERPRINT(CAST(
pickup_datetime AS STRING)), {1})) = {2}.format(
base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, 'benchmark', df)
Explanation: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model.
<h2> Benchmark dataset </h2>
Let's do this on the benchmark dataset.
End of explanation |
9,097 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Test datasets
http
Step1: General guides to Bayesian regression
http | Python Code:
import pandas as pd
import statsmodels.api as sm
# Normal response variable
stackloss_conversion = sm.datasets.get_rdataset("stackloss", "datasets")
#print (stackloss_conversion.__doc__)
# Lognormal response variable
engel_food = sm.datasets.engel.load_pandas()
#print (engel_food.data)
# Binary response variable
titanic_survival = sm.datasets.get_rdataset("Titanic", "datasets")
#print (titanic_survival.__doc__)
# Continuous 0-1 response variable
duncan_prestige = sm.datasets.get_rdataset("Duncan", "car")
#print (duncan_prestige.__doc__)
# Categorical response variable
iris_flowers = sm.datasets.get_rdataset("iris")
#print (iris_flowers.__doc__)
Explanation: Test datasets
http://statsmodels.sourceforge.net/0.6.0/datasets/index.html
End of explanation
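Before moving on to Bayesian models, one of these datasets can be fit with an ordinary least-squares baseline as a sanity check. The sketch below is not part of the original notebook; the column names are taken from the R stackloss dataset and may differ slightly depending on the statsmodels version.
import statsmodels.api as sm

stackloss_df = stackloss_conversion.data  # pandas DataFrame from get_rdataset
y = stackloss_df['stack.loss']            # response: stack loss
X = sm.add_constant(stackloss_df[['Air.Flow', 'Water.Temp', 'Acid.Conc.']])
ols_fit = sm.OLS(y, X).fit()              # frequentist baseline for later comparison
print(ols_fit.summary())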
# Showing plots inline, rather than in a new window
%matplotlib inline
# Modules
from pymc3 import *
import numpy as np
from ggplot import *
# Generating data
size = 200
true_intercept = 1
true_slope = 2
x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=.5, size=size)
# Plotting data
sim_data = pd.DataFrame({"x" : x, "y" : y})
sim_plot = ggplot(sim_data, aes(x="x", y="y")) + geom_point() +\
geom_abline(intercept=true_intercept, slope=true_slope)
print(sim_plot)
Explanation: General guides to Bayesian regression
http://twiecki.github.io/blog/2015/11/10/mcmc-sampling/
PyMC
https://github.com/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers
http://conference.scipy.org/scipy2014/schedule/presentation/1662/
End of explanation |
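The links above cover the theory; as a concrete next step, a minimal PyMC3 model for the simulated data might look like the sketch below. This is an illustration, not code taken from the guides: explicit weakly informative priors on the intercept, slope, and noise scale, with NUTS sampling.
import pymc3 as pm

# A minimal Bayesian linear regression for the simulated (x, y) data above.
with pm.Model() as linear_model:
    intercept = pm.Normal('intercept', mu=0, sd=10)  # weakly informative priors
    slope = pm.Normal('slope', mu=0, sd=10)
    sigma = pm.HalfNormal('sigma', sd=1)             # observation noise scale
    mu = intercept + slope * x
    pm.Normal('y_obs', mu=mu, sd=sigma, observed=y)  # likelihood
    trace = pm.sample(2000, tune=1000)               # NUTS by default

pm.traceplot(trace)  # posteriors should concentrate near intercept=1, slope=2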
9,098 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Building a Regression Model for a Financial Dataset
In this notebook, you will build a simple linear regression model to predict the closing AAPL stock price. The lab objectives are
Step1: Note
Step2: Pull Data from BigQuery
In this section we'll use a magic function to query a BigQuery table and then store the output in a Pandas dataframe. A magic function is just an alias to perform a system command. To see documentation on the "bigquery" magic function execute the following cell
Step3: View the first five rows of the query's output. Note that the object df containing the query output is a Pandas Dataframe.
Step4: Visualize data
The simplest plot you can make is to show the closing stock price as a time series. Pandas DataFrames have built in plotting funtionality based on Matplotlib.
Step5: You can also embed the trend_3_day variable into the time series above.
Step6: Build a Regression Model in Scikit-Learn
In this section you'll train a linear regression model to predict AAPL closing prices when given the previous day's closing price day_prev_close and the three day trend trend_3_day. A training set and test set are created by sequentially splitting the data after 2000 rows.
Step7: The model's predictions are more or less in line with the truth. However, the utility of the model depends on the business context (i.e. you won't be making any money with this model). It's fair to question whether the variable trend_3_day even adds to the performance of the model | Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
!pip install --user google-cloud-bigquery==1.25.0
Explanation: Building a Regression Model for a Financial Dataset
In this notebook, you will build a simple linear regression model to predict the closing AAPL stock price. The lab objectives are:
* Pull data from BigQuery into a Pandas dataframe
* Use Matplotlib to visualize data
* Use Scikit-Learn to build a regression model
End of explanation
%%bash
bq mk -d ai4f
bq load --autodetect --source_format=CSV ai4f.AAPL10Y gs://cloud-training/ai4f/AAPL10Y.csv
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn import linear_model
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
plt.rc('figure', figsize=(12, 8.0))
Explanation: Note: Restart your kernel to use updated packages.
Kindly ignore the deprecation warnings and incompatibility errors related to google-cloud-storage.
End of explanation
%%bigquery df
WITH
raw AS (
SELECT
date,
close,
LAG(close, 1) OVER(ORDER BY date) AS min_1_close,
LAG(close, 2) OVER(ORDER BY date) AS min_2_close,
LAG(close, 3) OVER(ORDER BY date) AS min_3_close,
LAG(close, 4) OVER(ORDER BY date) AS min_4_close
FROM
`ai4f.AAPL10Y`
ORDER BY
date DESC ),
raw_plus_trend AS (
SELECT
date,
close,
min_1_close,
IF (min_1_close - min_2_close > 0, 1, -1) AS min_1_trend,
IF (min_2_close - min_3_close > 0, 1, -1) AS min_2_trend,
IF (min_3_close - min_4_close > 0, 1, -1) AS min_3_trend
FROM
raw ),
train_data AS (
SELECT
date,
close,
min_1_close AS day_prev_close,
IF (min_1_trend + min_2_trend + min_3_trend > 0, 1, -1) AS trend_3_day
FROM
raw_plus_trend
ORDER BY
date ASC )
SELECT
*
FROM
train_data
Explanation: Pull Data from BigQuery
In this section we'll use a magic function to query a BigQuery table and then store the output in a Pandas dataframe. A magic function is just an alias to perform a system command. To see documentation on the "bigquery" magic function execute the following cell:
The query below selects everything you'll need to build a regression model to predict the closing price of AAPL stock. The model will be very simple for the purposes of demonstrating BQML functionality. The only features you'll use as input into the model are the previous day's closing price and a three day trend value. The trend value can only take on two values, either -1 or +1. If the AAPL stock price has increased over any two of the previous three days then the trend will be +1. Otherwise, the trend value will be -1.
Note, the features you'll need can be generated from the raw table ai4f.AAPL10Y using Pandas functions. However, it's better to take advantage of the serverless-ness of BigQuery to do the data pre-processing rather than applying the necessary transformations locally.
End of explanation
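For reference, a rough Pandas equivalent of that feature engineering might look like the sketch below. It assumes the raw table has already been pulled into a DataFrame named raw_df with date and close columns; it is not how the lab actually computes the features, and ties (zero day-over-day change) are handled slightly differently than in the SQL.
# Hypothetical local version of the BigQuery feature engineering above.
raw_df = raw_df.sort_values('date').reset_index(drop=True)
raw_df['day_prev_close'] = raw_df['close'].shift(1)

# Sign of each of the three previous day-over-day moves (+1 up, -1 down).
daily_move = np.sign(raw_df['close'].diff())
trend_votes = daily_move.shift(1) + daily_move.shift(2) + daily_move.shift(3)
raw_df['trend_3_day'] = np.where(trend_votes > 0, 1, -1)

features_df = raw_df.dropna().reset_index(drop=True)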
print(type(df))
df.dropna(inplace=True)
df.head()
Explanation: View the first five rows of the query's output. Note that the object df containing the query output is a Pandas Dataframe.
End of explanation
df.plot(x='date', y='close');
Explanation: Visualize data
The simplest plot you can make is to show the closing stock price as a time series. Pandas DataFrames have built-in plotting functionality based on Matplotlib.
End of explanation
start_date = '2018-06-01'
end_date = '2018-07-31'
plt.plot(
'date', 'close', 'k--',
data = (
df.loc[pd.to_datetime(df.date).between(start_date, end_date)]
)
)
plt.scatter(
'date', 'close', color='b', label='pos trend',
data = (
df.loc[df.trend_3_day == 1 & pd.to_datetime(df.date).between(start_date, end_date)]
)
)
plt.scatter(
'date', 'close', color='r', label='neg trend',
data = (
df.loc[(df.trend_3_day == -1) & pd.to_datetime(df.date).between(start_date, end_date)]
)
)
plt.legend()
plt.xticks(rotation = 90);
df.shape
Explanation: You can also embed the trend_3_day variable into the time series above.
End of explanation
features = ['day_prev_close', 'trend_3_day']
target = 'close'
X_train, X_test = df.loc[:2000, features], df.loc[2000:, features]
y_train, y_test = df.loc[:2000, target], df.loc[2000:, target]
# Create linear regression object
regr = linear_model.LinearRegression(fit_intercept=False)
# Train the model using the training set
regr.fit(X_train, y_train)
# Make predictions using the testing set
y_pred = regr.predict(X_test)
# The mean squared error
print('Root Mean Squared Error: {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, y_pred))))
# Explained variance score: 1 is perfect prediction
print('Variance Score: {0:.2f}'.format(r2_score(y_test, y_pred)))
plt.scatter(y_test, y_pred)
plt.plot([140, 240], [140, 240], 'r--', label='perfect fit')
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.legend();
Explanation: Build a Regression Model in Scikit-Learn
In this section you'll train a linear regression model to predict AAPL closing prices when given the previous day's closing price day_prev_close and the three day trend trend_3_day. A training set and test set are created by sequentially splitting the data after 2000 rows.
End of explanation
print('Root Mean Squared Error: {0:.2f}'.format(np.sqrt(mean_squared_error(y_test, X_test.day_prev_close))))
Explanation: The model's predictions are more or less in line with the truth. However, the utility of the model depends on the business context (i.e. you won't be making any money with this model). It's fair to question whether the variable trend_3_day even adds to the performance of the model:
End of explanation |
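One quick way to check the contribution of trend_3_day, beyond the naive previous-close baseline above, is to refit the same linear model on day_prev_close alone and compare the validation RMSE. A sketch, reusing the variables defined earlier:
# Refit using only the previous day's close, then compare RMSE with the
# two-feature model above. If the scores are nearly identical, trend_3_day
# adds little predictive value for this setup.
regr_single = linear_model.LinearRegression(fit_intercept=False)
regr_single.fit(X_train[['day_prev_close']], y_train)
y_pred_single = regr_single.predict(X_test[['day_prev_close']])
print('RMSE (day_prev_close only): {0:.2f}'.format(
    np.sqrt(mean_squared_error(y_test, y_pred_single))))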
9,099 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dense Sentiment Classifier
In this notebook, we build a dense neural net to classify IMDB movie reviews by their sentiment.
Load dependencies
Step1: Set hyperparameters
Step2: Load data
For a given data set
Step3: Restoring words from index
Step4: Preprocess data
Step5: Design neural network architecture
Step6: Configure model
Step7: Train!
Step8: Evaluate | Python Code:
import keras
from keras.datasets import imdb
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense, Flatten, Dropout
from keras.layers import Embedding # new!
from keras.callbacks import ModelCheckpoint # new!
import os # new!
from sklearn.metrics import roc_auc_score, roc_curve # new!
import pandas as pd
import matplotlib.pyplot as plt # new!
%matplotlib inline
Explanation: Dense Sentiment Classifier
In this notebook, we build a dense neural net to classify IMDB movie reviews by their sentiment.
Load dependencies
End of explanation
# output directory name:
output_dir = 'model_output/dense'
# training:
epochs = 4
batch_size = 128
# vector-space embedding:
n_dim = 64
n_unique_words = 5000 # as per Maas et al. (2011); may not be optimal
n_words_to_skip = 50 # ditto
max_review_length = 100
pad_type = trunc_type = 'pre'
# neural network architecture:
n_dense = 64
dropout = 0.5
Explanation: Set hyperparameters
End of explanation
(x_train, y_train), (x_valid, y_valid) = imdb.load_data(num_words=n_unique_words, skip_top=n_words_to_skip)
x_train[0:6] # 0 reserved for padding; 1 would be starting character; 2 is unknown; 3 is most common word, etc.
for x in x_train[0:6]:
    print(len(x))
y_train[0:6]
len(x_train), len(x_valid)
Explanation: Load data
For a given data set:
the Keras text utilities here quickly preprocess natural language and convert it into an index
the keras.preprocessing.text.Tokenizer class may do everything you need in one line:
tokenize into words or characters
num_words: maximum unique tokens
filter out punctuation
lower case
convert words to an integer index
End of explanation
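The IMDB data used here is already indexed, so the Tokenizer isn't needed below, but on raw text the one-liner workflow described above might look like this sketch (the example sentences are made up):
from keras.preprocessing.text import Tokenizer

raw_reviews = ['A surprisingly moving film!', 'Flat characters; a dull, dull script.']
tokenizer = Tokenizer(num_words=5000)       # keep the 5000 most frequent tokens
tokenizer.fit_on_texts(raw_reviews)         # lowercases and strips punctuation by default
sequences = tokenizer.texts_to_sequences(raw_reviews)  # lists of integer word indices
print(sequences)
print(tokenizer.word_index)                 # word -> integer index mapping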
word_index = keras.datasets.imdb.get_word_index()
word_index = {k:(v+3) for k,v in word_index.items()}
word_index["PAD"] = 0
word_index["START"] = 1
word_index["UNK"] = 2
word_index
index_word = {v:k for k,v in word_index.items()}
x_train[0]
' '.join(index_word[id] for id in x_train[0])
(all_x_train,_),(all_x_valid,_) = imdb.load_data()
' '.join(index_word[id] for id in all_x_train[0])
Explanation: Restoring words from index
End of explanation
x_train = pad_sequences(x_train, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_valid = pad_sequences(x_valid, maxlen=max_review_length, padding=pad_type, truncating=trunc_type, value=0)
x_train[0:6]
for x in x_train[0:6]:
    print(len(x))
' '.join(index_word[id] for id in x_train[0])
' '.join(index_word[id] for id in x_train[5])
Explanation: Preprocess data
End of explanation
# The original cell was left as a placeholder ("# CODE HERE"). The architecture
# below is one model consistent with the layer parameter counts computed in the
# following cells: Embedding -> Flatten -> Dense -> Dropout -> sigmoid output.
model = Sequential()
model.add(Embedding(n_unique_words, n_dim, input_length=max_review_length))
model.add(Flatten())
model.add(Dense(n_dense, activation='relu'))
model.add(Dropout(dropout))
model.add(Dense(1, activation='sigmoid'))
model.summary() # so many parameters!
# embedding layer dimensions and parameters:
n_dim, n_unique_words, n_dim*n_unique_words
# ...flatten:
max_review_length, n_dim, n_dim*max_review_length
# ...dense:
n_dense, n_dim*max_review_length*n_dense + n_dense # weights + biases
# ...and output:
n_dense + 1
Explanation: Design neural network architecture
End of explanation
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
modelcheckpoint = ModelCheckpoint(filepath=output_dir+"/weights.{epoch:02d}.hdf5")
if not os.path.exists(output_dir):
os.makedirs(output_dir)
Explanation: Configure model
End of explanation
# 84.7% validation accuracy in epoch 2
model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_valid, y_valid), callbacks=[modelcheckpoint])
Explanation: Train!
End of explanation
model.load_weights(output_dir+"/weights.01.hdf5") # zero-indexed
y_hat = model.predict_proba(x_valid)
len(y_hat)
y_hat[0]
plt.hist(y_hat)
_ = plt.axvline(x=0.5, color='orange')
pct_auc = roc_auc_score(y_valid, y_hat)*100.0
"{:0.2f}".format(pct_auc)
float_y_hat = []
for y in y_hat:
    float_y_hat.append(y[0])
ydf = pd.DataFrame(list(zip(float_y_hat, y_valid)), columns=['y_hat', 'y'])
ydf.head(10)
' '.join(index_word[id] for id in all_x_valid[0])
' '.join(index_word[id] for id in all_x_valid[6])
ydf[(ydf.y == 0) & (ydf.y_hat > 0.9)].head(10)
' '.join(index_word[id] for id in all_x_valid[489])
ydf[(ydf.y == 1) & (ydf.y_hat < 0.1)].head(10)
' '.join(index_word[id] for id in all_x_valid[927])
Explanation: Evaluate
End of explanation |