Unnamed: 0 (int64) | text_prompt (string) | code_prompt (string)
---|---|---
13,900 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Point Particles
Step1: While this would work for defining a single molecule or very small system, this would not be efficient for large systems. Instead, the clone and translate operator can be used to facilitate automation. Below, we simply define a single prototype particle (lj_proto), which we then copy and translate about the system.
Note, mBuild provides two different translate operations, "translate" and "translate_to". "translate" moves a particle by adding the vector to the original position, whereas "translate_to" moves a particle to the specified location in space. Note, "translate_to" maintains the internal spatial relationships of a collection of particles by first shifting the center of mass of the collection of particles to the origin, then translating to the specified location. Since the lj_proto particle in this example starts at the origin, these two commands produce identical behavior.
Step2: To simplify this process, mBuild provides several built-in patterning tools, where, for example, Grid3DPattern can be used to perform this same operation. Grid3DPattern generates a set of points, from 0 to 1, which get stored in the variable "pattern". We need only loop over the points in pattern, cloning, translating, and adding to the system. Note, because Grid3DPattern defines points between 0 and 1, they must be scaled based on the desired system size, i.e., pattern.scale(2).
Step3: Larger systems can therefore be easily generated by toggling the values given to Grid3DPattern. Other patterns can also be generated using the same basic code, such as a 2D grid pattern
Step4: Points on a sphere can be generated using SpherePattern. Points on a disk can be generated using DiskPattern, etc.
Note, to show both simultaneously, we shift the x-coordinate of Particles in the sphere by -1 (i.e., pos[0]-=1.0) and +1 for the disk (i.e., pos[0]+=1.0).
Step5: We can also take advantage of the hierarchical nature of mBuild to accomplish the same task more cleanly. Below we create a component that corresponds to the sphere (class SphereLJ), and one that corresponds to the disk (class DiskLJ), and then instantiate and shift each of these individually in the MonoLJ component.
Step6: Again, since mBuild is hierarchical, the pattern functions can be used to generate large systems of any arbitrary component. For example, we can replicate the SphereLJ component on a regular array.
Step7: Several functions exist for rotating compounds. For example, the spin command allows a compound to be rotated, in place, about a specific axis (i.e., it considers the origin for the rotation to lie at the compound's center of mass).
Step8: Configurations can be dumped to file using the save command; this takes advantage of MDTraj and supports a range of file formats (see http://MDTraj.org). | Python Code:
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_particle1 = mb.Particle(name='LJ', pos=[0, 0, 0])
self.add(lj_particle1)
lj_particle2 = mb.Particle(name='LJ', pos=[1, 0, 0])
self.add(lj_particle2)
lj_particle3 = mb.Particle(name='LJ', pos=[0, 1, 0])
self.add(lj_particle3)
lj_particle4 = mb.Particle(name='LJ', pos=[0, 0, 1])
self.add(lj_particle4)
lj_particle5 = mb.Particle(name='LJ', pos=[1, 0, 1])
self.add(lj_particle5)
lj_particle6 = mb.Particle(name='LJ', pos=[1, 1, 0])
self.add(lj_particle6)
lj_particle7 = mb.Particle(name='LJ', pos=[0, 1, 1])
self.add(lj_particle7)
lj_particle8 = mb.Particle(name='LJ', pos=[1, 1, 1])
self.add(lj_particle8)
monoLJ = MonoLJ()
monoLJ.visualize()
Explanation: Point Particles: Basic system initialization
Note: mBuild expects all distance units to be in nanometers.
This tutorial focuses on the usage of basic system initialization operations, as applied to simple point particle systems (i.e., generic Lennard-Jones particles rather than specific atoms).
The code below defines several point particles in a cubic arrangement. Note, the color and radius associated with a Particle name can be set and passed to the visualize command. Colors are passed in hex format (see http://www.color-hex.com/color/bfbfbf).
End of explanation
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
for i in range(0,2):
for j in range(0,2):
for k in range(0,2):
lj_particle = mb.clone(lj_proto)
pos = [i,j,k]
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
Explanation: While this would work for defining a single molecule or very small system, this would not be efficient for large systems. Instead, the clone and translate operator can be used to facilitate automation. Below, we simply define a single prototype particle (lj_proto), which we then copy and translate about the system.
Note, mBuild provides two different translate operations, "translate" and "translate_to". "translate" moves a particle by adding the vector to the original position, whereas "translate_to" moves a particle to the specified location in space. Note, "translate_to" maintains the internal spatial relationships of a collection of particles by first shifting the center of mass of the collection of particles to the origin, then translating to the specified location. Since the lj_proto particle in this example starts at the origin, these two commands produce identical behavior.
End of explanation
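To make the difference concrete, here is a minimal sketch (assuming the same mBuild version and helper functions used above; the .pos attribute is assumed to expose the particle coordinates) for a particle that does not start at the origin:
import mbuild as mb

p1 = mb.Particle(name='LJ', pos=[1.0, 0.0, 0.0])
p2 = mb.clone(p1)

mb.translate(p1, [1.0, 0.0, 0.0])     # shift by a vector: ends up at [2, 0, 0]
mb.translate_to(p2, [1.0, 0.0, 0.0])  # move to a location: stays at [1, 0, 0]

print(p1.pos, p2.pos)
For a particle that already sits at the origin, as in the cell above, both calls give the same result, which is why the example simply uses translate.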
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid3DPattern(2, 2, 2)
pattern.scale(2)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
Explanation: To simplify this process, mBuild provides several built-in patterning tools, where, for example, Grid3DPattern can be used to perform this same operation. Grid3DPattern generates a set of points, from 0 to 1, which get stored in the variable "pattern". We need only loop over the points in pattern, cloning, translating, and adding to the system. Note, because Grid3DPattern defines points between 0 and 1, they must be scaled based on the desired system size, i.e., pattern.scale(2).
End of explanation
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid2DPattern(5, 5)
pattern.scale(5)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
Explanation: Larger systems can therefore be easily generated by toggling the values given to Grid3DPattern. Other patterns can also be generated using the same basic code, such as a 2D grid pattern:
End of explanation
import mbuild as mb
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(200)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
pos[0]-=1.0
mb.translate(lj_particle, pos)
self.add(lj_particle)
pattern_disk = mb.DiskPattern(200)
pattern_disk.scale(0.5)
for pos in pattern_disk:
lj_particle = mb.clone(lj_proto)
pos[0]+=1.0
mb.translate(lj_particle, pos)
self.add(lj_particle)
monoLJ = MonoLJ()
monoLJ.visualize()
Explanation: Points on a sphere can be generated using SpherePattern. Points on a disk can be generated using DiskPattern, etc.
Note, to show both simultaneously, we shift the x-coordinate of Particles in the sphere by -1 (i.e., pos[0]-=1.0) and +1 for the disk (i.e., pos[0]+=1.0).
End of explanation
import mbuild as mb
class SphereLJ(mb.Compound):
def __init__(self):
super(SphereLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(200)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class DiskLJ(mb.Compound):
def __init__(self):
super(DiskLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_disk = mb.DiskPattern(200)
pattern_disk.scale(0.5)
for pos in pattern_disk:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
sphere = SphereLJ();
pos=[-1, 0, 0]
mb.translate(sphere, pos)
self.add(sphere)
disk = DiskLJ();
pos=[1, 0, 0]
mb.translate(disk, pos)
self.add(disk)
monoLJ = MonoLJ()
monoLJ.visualize()
Explanation: We can also take advantage of the hierarchical nature of mBuild to accomplish the same task more cleanly. Below we create a component that corresponds to the sphere (class SphereLJ), and one that corresponds to the disk (class DiskLJ), and then instantiate and shift each of these individually in the MonoLJ component.
End of explanation
import mbuild as mb
class SphereLJ(mb.Compound):
def __init__(self):
super(SphereLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern_sphere = mb.SpherePattern(13)
pattern_sphere.scale(0.5)
for pos in pattern_sphere:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
sphere = SphereLJ();
pattern = mb.Grid3DPattern(3, 3, 3)
pattern.scale(10)
for pos in pattern:
lj_sphere = mb.clone(sphere)
mb.translate_to(lj_sphere, pos)
#shift the particle so the center of mass
#of the system is at the origin
mb.translate(lj_sphere, [-5,-5,-5])
self.add(lj_sphere)
monoLJ = MonoLJ()
monoLJ.visualize()
Explanation: Again, since mBuild is hierarchical, the pattern functions can be used to generate large systems of any arbitrary component. For example, we can replicate the SphereLJ component on a regular array.
End of explanation
import mbuild as mb
import random
from numpy import pi
class CubeLJ(mb.Compound):
def __init__(self):
super(CubeLJ, self).__init__()
lj_proto = mb.Particle(name='LJ', pos=[0, 0, 0])
pattern = mb.Grid3DPattern(2, 2, 2)
pattern.scale(1)
for pos in pattern:
lj_particle = mb.clone(lj_proto)
mb.translate(lj_particle, pos)
self.add(lj_particle)
class MonoLJ(mb.Compound):
def __init__(self):
super(MonoLJ, self).__init__()
cube_proto = CubeLJ();
pattern = mb.Grid3DPattern(3, 3, 3)
pattern.scale(10)
rnd = random.Random()
rnd.seed(123)
for pos in pattern:
lj_cube = mb.clone(cube_proto)
mb.translate_to(lj_cube, pos)
#shift the particle so the center of mass
#of the system is at the origin
mb.translate(lj_cube, [-5,-5,-5])
mb.spin(lj_cube, rnd.uniform(0, 2 * pi), [1, 0, 0])
mb.spin(lj_cube, rnd.uniform(0, 2 * pi), [0, 1, 0])
mb.spin(lj_cube, rnd.uniform(0, 2 * pi), [0, 0, 1])
self.add(lj_cube)
monoLJ = MonoLJ()
monoLJ.visualize()
Explanation: Several functions exist for rotating compounds. For example, the spin command allows a compound to be rotated, in place, about a specific axis (i.e., it considers the origin for the rotation to lie at the compound's center of mass).
End of explanation
#save as xyz file
monoLJ.save('output.xyz')
#save as mol2
monoLJ.save('output.mol2')
Explanation: Configurations can be dumped to file using the save command; this takes advantage of MDTraj and supports a range of file formats (see http://MDTraj.org).
End of explanation |
13,901 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
.. _tut_io_export_pandas
Step1: Export DataFrame
Step2: Explore Pandas MultiIndex | Python Code:
# Author: Denis Engemann <[email protected]>
#
# License: BSD (3-clause)
import mne
import matplotlib.pyplot as plt
import numpy as np
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname)
# For simplicity we will only consider the first 10 epochs
events = mne.read_events(event_fname)[:10]
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = -0.2, 0.5
baseline = (None, 0)
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(auditory_l=1, auditory_r=2, visual_l=3, visual_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=True, reject=reject)
Explanation: .. _tut_io_export_pandas:
Export epochs to Pandas DataFrame
In this example the pandas exporter will be used to produce a DataFrame
object. After exploring some basic features a split-apply-combine
work flow will be conducted to examine the latencies of the response
maxima across epochs and conditions.
Note. Equivalent methods are available for raw and evoked data objects.
Short Pandas Primer
Pandas Data Frames
~~~~~~~~~~~~~~~~~~
A data frame can be thought of as a combination of matrix, list and dict:
It knows about linear algebra and element-wise operations but is size mutable
and allows for labeled access to its data. In addition, the pandas data frame
class provides many useful methods for restructuring, reshaping and visualizing
data. As most methods return data frame instances, operations can be chained
with ease; this allows one to write efficient one-liners. Technically, a DataFrame
can be seen as a high-level container for numpy arrays and hence switching
back and forth between numpy arrays and DataFrames is very easy.
Taken together, these features qualify data frames for interoperation with
databases and for interactive data exploration / analysis.
Additionally, pandas interfaces with the R statistical computing language that
covers a huge amount of statistical functionality.
Export Options
~~~~~~~~~~~~~~
The pandas exporter comes with a few options worth commenting on.
Pandas DataFrame objects use a so called hierarchical index. This can be
thought of as an array of unique tuples, in our case, representing the higher
dimensional MEG data in a 2D data table. The column names are the channel names
from the epoch object. The channels can be accessed like entries of a
dictionary:
df['MEG 2333']
Epochs and time slices can be accessed with the .ix method:
epochs_df.ix[(1, 2), 'MEG 2333']
However, it is also possible to include this index as regular categorical data
columns which yields a long table format typically used for repeated measure
designs. To take control of this feature, on export, you can specify which
of the three dimensions 'condition', 'epoch' and 'time' is passed to the Pandas
index using the index parameter. Note that this decision is reversible at any
time, as demonstrated below.
Similarly, for convenience, it is possible to scale the times, e.g. from
seconds to milliseconds.
Some Instance Methods
~~~~~~~~~~~~~~~~~~~~~
Most numpy methods and many ufuncs can be found as instance methods, e.g.
mean, median, var, std, mul, max, argmax, etc.
Below is an incomplete listing of additional useful data frame instance methods:
apply : apply function to data.
Any kind of custom function can be applied to the data. In combination with
lambda this can be very useful.
describe : quickly generate summary stats
Very useful for exploring data.
groupby : generate subgroups and initialize a 'split-apply-combine' operation.
Creates a group object. Subsequently, methods like apply, agg, or transform
can be used to manipulate the underlying data separately but
simultaneously. Finally, reset_index can be used to combine the results
back into a data frame.
plot : wrapper around plt.plot
However it comes with some special options. For examples see below.
shape : shape attribute
gets the dimensions of the data frame.
values :
return underlying numpy array.
to_records :
export data as numpy record array.
to_dict :
export data as dict of arrays.
Reference
~~~~~~~~~
More information and additional introductory materials can be found at the
pandas doc sites: http://pandas.pydata.org/pandas-docs/stable/
End of explanation
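As a quick, self-contained illustration of the two export styles described above (synthetic data rather than MNE output, and using .loc, the modern equivalent of the .ix access shown above):
import numpy as np
import pandas as pd

rng = np.random.RandomState(0)
index = pd.MultiIndex.from_product([[0, 1], [0, 100, 200]], names=['epoch', 'time'])
df_demo = pd.DataFrame(rng.randn(6, 2), index=index, columns=['MEG 0113', 'MEG 0112'])

print(df_demo.loc[(1, 100), 'MEG 0113'])  # label-based access on the hierarchical index
print(df_demo.reset_index().head())       # long format: 'epoch' and 'time' become regular columns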
# The following parameters will scale the channels and times to make plotting
# friendly. The info columns 'epoch' and 'time' will be used as a hierarchical
# index, whereas the condition is treated as categorical data. Note that
# this is optional. By passing None you could also print out all nesting
# factors in a long table style commonly used for analyzing repeated measure
# designs.
index, scale_time, scalings = ['epoch', 'time'], 1e3, dict(grad=1e13)
df = epochs.to_data_frame(picks=None, scalings=scalings, scale_time=scale_time,
index=index)
# Create MEG channel selector and drop EOG channel.
meg_chs = [c for c in df.columns if 'MEG' in c]
df.pop('EOG 061') # this works just like with a list.
Explanation: Export DataFrame
End of explanation
# Pandas is using a MultiIndex or hierarchical index to handle higher
# dimensionality while at the same time representing data in a flat 2d manner.
print(df.index.names, df.index.levels)
# Inspecting the index object unveils that 'epoch', 'time' are used
# for subsetting data. We can take advantage of that by using the
# .ix attribute, where in this case the first position indexes the MultiIndex
# and the second the columns, that is, channels.
# Plot some channels across the first three epochs
xticks, sel = np.arange(3, 600, 120), meg_chs[:15]
df.ix[:3, sel].plot(xticks=xticks)
mne.viz.tight_layout()
# slice the time starting at t0 in epoch 2 and ending 500ms after
# the base line in epoch 3. Note that the second part of the tuple
# represents time in milliseconds from stimulus onset.
df.ix[(1, 0):(3, 500), sel].plot(xticks=xticks)
mne.viz.tight_layout()
# Note: For convenience the index was converted from floating point values
# to integer values. To restore the original values you can e.g. say
# df['times'] = np.tile(epochs.times, len(epochs))
# We now reset the index of the DataFrame to expose some Pandas
# pivoting functionality. To simplify the groupby operation we
# drop the indices to treat epoch and time as categorical factors.
df = df.reset_index()
# The ensuing DataFrame then is split into subsets reflecting a crossing
# between condition and trial number. The idea is that we can broadcast
# operations into each cell simultaneously.
factors = ['condition', 'epoch']
sel = factors + ['MEG 1332', 'MEG 1342']
grouped = df[sel].groupby(factors)
# To make the plot labels more readable let's edit the values of 'condition'.
df.condition = df.condition.apply(lambda name: name + ' ')
# Now we compare the mean of two channels response across conditions.
grouped.mean().plot(kind='bar', stacked=True, title='Mean MEG Response',
color=['steelblue', 'orange'])
mne.viz.tight_layout()
# We can even accomplish more complicated tasks in a few lines calling
# apply method and passing a function. Assume we wanted to know the time
# slice of the maximum response for each condition.
max_latency = grouped[sel[2]].apply(lambda x: df.time[x.argmax()])
print(max_latency)
# Then make the plot labels more readable let's edit the values of 'condition'.
df.condition = df.condition.apply(lambda name: name + ' ')
plt.figure()
max_latency.plot(kind='barh', title='Latency of Maximum Response',
color=['steelblue'])
mne.viz.tight_layout()
# Finally, we will again remove the index to create a proper data table that
# can be used with statistical packages like statsmodels or R.
final_df = max_latency.reset_index()
final_df = final_df.rename(columns={0: sel[2]})  # as the index is oblivious of names.
# The index is now written into regular columns so it can be used as factor.
print(final_df)
plt.show()
# To save as csv file, uncomment the next line.
# final_df.to_csv('my_epochs.csv')
# Note. Data Frames can be easily concatenated, e.g., across subjects.
# E.g. say:
#
# import pandas as pd
# group = pd.concat([df_1, df_2])
# group['subject'] = np.r_[np.ones(len(df_1)), np.ones(len(df_2)) + 1]
Explanation: Explore Pandas MultiIndex
End of explanation |
13,902 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Filtering Lab
Course NSC-2006, year 2015
Quantitative Methods in Neuroscience
Pierre Bellec, Yassine Ben Haj Ali
Objectives
Step1: Section 1
Step2: Plot noyau and signal in time, using the plot command. Use the correct acquisition times, and label the axes (xlabel, ylabel). How is signal generated? Do you recognize the process used? Is the signal periodic? If so, what is its period? Can the answer be found in the code?
2. Plot the frequency content of signal with the Analyse_Frequence_Puissance command.
Use the ylim command to adjust the limits of the y axis so that the signal can be observed clearly. Note that the y axis (power) is on a log scale (dB). What are the main frequencies contained in the signal? Was this expected?
3. Repeat questions 1.1 and 1.2 with the so-called white noise generated below.
Why does this noise carry that name?
Step3: 4. Respiratory noise.
Repeat questions 1.1 and 1.2 with the so-called respiratory noise generated below. Is this a reasonable simulation of respiration-related variations? Why?
Step4: 5. Baseline.
Repeat questions 1.1 and 1.2 with a baseline drift, as generated below.
Step5: 6. Mixing signals.
We will now mix our different signals, as indicated below. Plot the three mixtures in time and in frequency, superimposed on the noise-free signal of interest (the variable signal). Can you recognize the contribution of each source in the frequency-domain mixture? Do the frequency powers always add up?
Step6: Section 2
2.2 Repeat question 2.1 with a larger kernel.
Comment qualitatively on the quality of the denoising. Compare the two filters quantitatively with the following residual error measure
Step7: 2.3 Response of the Butterworth filters.
These filters are available in functions that you already used in the lab on the Fourier transform
Step8: Plot the kernel in time and in frequency. What is the cutoff frequency of the filter?
2.4. Applying the Butterworth filter.
The example below filters the signal with a low-pass filter with a cutoff frequency of 0.1 | Python Code:
%matplotlib inline
from pymatbridge import Octave
octave = Octave()
octave.start()
%load_ext pymatbridge
Explanation: Introduction to Filtering Lab
Course NSC-2006, year 2015
Quantitative Methods in Neuroscience
Pierre Bellec, Yassine Ben Haj Ali
Objectives:
This lab is meant to introduce you to the filtering of temporal signals with Matlab. We will work with a simulated signal that contains several sources, one of interest and others that are noise.
- We will first become familiar with the different signal sources, in time and in frequency.
- We will then look for a filter that removes the noise without strongly altering the signal.
- Finally, we will evaluate the impact of a loss of temporal resolution on our ability to denoise the signal, related to the phenomenon of frequency aliasing.
To complete this lab, you need to retrieve the
following resource on Studium:
labo8_filtrage.zip: this archive contains several codes and data sets. Please unzip the archive and copy the files into your Matlab working directory.
Many parts of the lab consist of modifying code written for another question. It is therefore strongly recommended to open a new file in the Matlab editor and to run the code from the editor, so that paragraphs of code can be copied quickly. Do not take into account, and do not execute, this part of the code:
End of explanation
%%matlab
%% Definition of the signal of interest
% frequency of the signal
freq = 1;
% create off/on blocks of 15 seconds
bloc = repmat([zeros(1,15*freq) ones(1,15*freq)],[1 10]);
% acquisition times
ech = (0:(1/freq):(length(bloc)/freq)-(1/freq));
% this parameter sets the peak of the hemodynamic response
pic = 5;
% hemodynamic response kernel
noyau = [linspace(0,1,(pic*freq)+1) linspace(1,-0.3,(pic*freq)/2) linspace(-0.3,0,(pic*freq)/2)];
noyau = [zeros(1,length(noyau)-1) noyau];
% normalization of the kernel
noyau = noyau/sum(abs(noyau));
% convolution of the blocks with the kernel
signal = conv(bloc,noyau,'same');
% set the mean of the response to zero
signal = signal - mean(signal);
Explanation: Section 1: Examples of signals, in time and in frequency
1. Let us start by generating a signal of interest:
End of explanation
%%matlab
%% definition of the white noise
bruit = 0.05*randn(size(signal));
Explanation: Plot noyau and signal in time, using the plot command. Use the correct acquisition times, and label the axes (xlabel, ylabel). How is signal generated? Do you recognize the process used? Is the signal periodic? If so, what is its period? Can the answer be found in the code?
2. Plot the frequency content of signal with the Analyse_Frequence_Puissance command.
Use the ylim command to adjust the limits of the y axis so that the signal can be observed clearly. Note that the y axis (power) is on a log scale (dB). What are the main frequencies contained in the signal? Was this expected?
3. Repeat questions 1.1 and 1.2 with the so-called white noise generated below.
Why does this noise carry that name?
End of explanation
%%matlab
%% definition of the respiration signal
% respiration frequency
freq_resp = 0.3;
% a simple (cosine) model of respiration-related fluctuations
resp = cos(2*pi*freq_resp*ech/freq);
% frequency of the slow modulation of the respiratory amplitude
freq_mod = 0.01;
% modulation of the amplitude of the respiration-related signal
resp = resp.*(ones(size(resp))-0.1*cos(2*pi*freq_mod*ech/freq));
% force a zero mean and a maximum amplitude of 0.1
resp = 0.1*(resp-mean(resp));
Explanation: 4. Respiratory noise.
Repeat questions 1.1 and 1.2 with the so-called respiratory noise generated below. Is this a reasonable simulation of respiration-related variations? Why?
End of explanation
%%matlab
%% definition of the baseline
base = 0.1*(ech-mean(ech))/mean(ech);
Explanation: 5. Baseline.
Repeat questions 1.1 and 1.2 with a baseline drift, as generated below.
End of explanation
%%matlab
%% Mixtures of signals
y_sr = signal + resp;
y_srb = signal + resp + bruit;
y_srbb = signal + resp + bruit + base;
Explanation: 6. Mixing signals.
We will now mix our different signals, as indicated below. Plot the three mixtures in time and in frequency, superimposed on the noise-free signal of interest (the variable signal). Can you recognize the contribution of each source in the frequency-domain mixture? Do the frequency powers always add up?
End of explanation
%%matlab
%% definition of a moving-average kernel
% window size for the moving average, in number of time samples
taille = ceil(3*freq);
% the kernel, defined over a window identical to the previous signals
noyau = [zeros(1,(length(signal)-taille)/2) ones(1,taille) zeros(1,(length(signal)-taille)/2)];
% normalization of the kernel
noyau = noyau/sum(abs(noyau));
%% convolution with the kernel (filtering)
y_f = conv(y_sr,noyau,'same');
Explanation: Section 2: Filter optimization
2.1. We will start by applying a moving-average filter to the simplest signal (y_sr).
To do so, we create a kernel and apply a convolution, as indicated below. Plot the kernel in frequency (with Analyse_Frequence_Puissance) and comment on the frequency-domain impact of the convolution. Make a second plot showing the signal of interest superimposed on the filtered signal.
End of explanation
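For readers following along in Python rather than Octave, here is a minimal NumPy sketch of the same moving-average filtering step (the signal below is a toy stand-in, not the lab's y_sr; the variable names simply mirror the Octave code above):
import numpy as np

freq = 1
t = np.arange(0, 300, 1.0 / freq)
y_sr = np.sign(np.sin(2 * np.pi * t / 30.0)) + 0.1 * np.cos(2 * np.pi * 0.3 * t)  # toy blocks plus a "respiration" cosine

taille = int(np.ceil(3 * freq))               # window size, in samples
noyau = np.ones(taille) / taille              # normalized moving-average kernel
y_f = np.convolve(y_sr, noyau, mode='same')   # filtered signal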
%%matlab
err = sqrt(mean((signal-y_f).^2))
Explanation: 2.2 Repeat question 2.1 with a larger kernel.
Comment qualitatively on the quality of the denoising. Compare the two filters quantitatively with the following residual error measure:
End of explanation
%%matlab
%% Definition of a unit finite impulse
impulsion = zeros(size(signal));
impulsion(round(length(impulsion)/2))=1;
noyau = FiltrePasseHaut(impulsion,freq,0.1);
Explanation: 2.3 Response of the Butterworth filters.
These filters are available in functions that you already used in the lab on the Fourier transform:
- FiltrePasseHaut.m: removes low frequencies (high-pass).
- FiltrePasseBas.m: removes high frequencies (low-pass).
The Butterworth filter does not explicitly use a convolution kernel. But since it is a linear time-invariant system, the kernel can always be recovered by looking at the response to a unit finite impulse, defined as follows:
End of explanation
%%matlab
y = y_sr;
y_f = FiltrePasseBas(y,freq,0.1);
Explanation: Plot the kernel in time and in frequency. What is the cutoff frequency of the filter?
2.4. Applying the Butterworth filter.
The example below filters the signal with a low-pass filter, with a cutoff frequency of 0.1:
End of explanation |
13,903 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
Caveat for the interpretation of "significant" clusters
Step1: Set parameters
Step2: Read epochs for the channel of interest
Step3: Find the FieldTrip neighbor definition to setup sensor connectivity
Step4: Compute permutation statistic
How does it work? We use clustering to bind together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test statistic, we sum the F-values within each
cluster. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size. For more background read
Step5: Note. The same functions work with source estimate. The only differences
are the origin of the data, the size, and the connectivity definition.
It can be used for single trials or for groups of subjects.
Visualize clusters | Python Code:
# Authors: Denis Engemann <[email protected]>
# Jona Sassenhagen <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mne.viz import plot_topomap
import mne
from mne.stats import spatio_temporal_cluster_test
from mne.datasets import sample
from mne.channels import find_ch_connectivity
from mne.viz import plot_compare_evokeds
print(__doc__)
Explanation: Spatiotemporal permutation F-test on full sensor data
Tests for differential evoked responses in at least
one condition using a permutation clustering test.
The FieldTrip neighbor templates will be used to determine
the adjacency between sensors. This serves as a spatial prior
to the clustering. Spatiotemporal clusters will then
be visualized using custom matplotlib code.
Caveat for the interpretation of "significant" clusters: see
the FieldTrip website_.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id = {'Aud/L': 1, 'Aud/R': 2, 'Vis/L': 3, 'Vis/R': 4}
tmin = -0.2
tmax = 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.filter(1, 30, fir_design='firwin')
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
picks = mne.pick_types(raw.info, meg='mag', eog=True)
reject = dict(mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=None, reject=reject, preload=True)
epochs.drop_channels(['EOG 061'])
epochs.equalize_event_counts(event_id)
X = [epochs[k].get_data() for k in event_id] # as 3D matrix
X = [np.transpose(x, (0, 2, 1)) for x in X] # transpose for clustering
Explanation: Read epochs for the channel of interest
End of explanation
connectivity, ch_names = find_ch_connectivity(epochs.info, ch_type='mag')
print(type(connectivity)) # it's a sparse matrix!
plt.imshow(connectivity.toarray(), cmap='gray', origin='lower',
interpolation='nearest')
plt.xlabel('{} Magnetometers'.format(len(ch_names)))
plt.ylabel('{} Magnetometers'.format(len(ch_names)))
plt.title('Between-sensor adjacency')
Explanation: Find the FieldTrip neighbor definition to setup sensor connectivity
End of explanation
# set cluster threshold
threshold = 50.0 # very high, but the test is quite sensitive on this data
# set family-wise p-value
p_accept = 0.01
cluster_stats = spatio_temporal_cluster_test(X, n_permutations=1000,
threshold=threshold, tail=1,
n_jobs=1, buffer_size=None,
connectivity=connectivity)
T_obs, clusters, p_values, _ = cluster_stats
good_cluster_inds = np.where(p_values < p_accept)[0]
Explanation: Compute permutation statistic
How does it work? We use clustering to bind together features which are
similar. Our features are the magnetic fields measured over our sensor
array at different times. This reduces the multiple comparison problem.
To compute the actual test statistic, we sum the F-values within each
cluster. We end up with one statistic for each cluster.
Then we generate a distribution from the data by shuffling our conditions
between our samples and recomputing our clusters and the test statistics.
We test for the significance of a given cluster by computing the probability
of observing a cluster of that size. For more background read:
Maris/Oostenveld (2007), "Nonparametric statistical testing of EEG- and
MEG-data" Journal of Neuroscience Methods, Vol. 164, No. 1., pp. 177-190.
doi:10.1016/j.jneumeth.2007.03.024
End of explanation
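To make the last step concrete, here is a small illustrative sketch (not part of the MNE example; the numbers are made up) of how a cluster-level permutation p-value is obtained once cluster statistics are available:
import numpy as np

rng = np.random.RandomState(42)
observed_cluster_stat = 75.0                              # e.g. the sum of F-values inside one observed cluster
null_max_cluster_stats = rng.gamma(2.0, 10.0, size=1000)  # stand-in for the permutation null distribution

# p-value: fraction of permutations whose largest cluster statistic is at least as
# large as the observed one (the +1 terms are a common finite-sample correction).
p_val = (np.sum(null_max_cluster_stats >= observed_cluster_stat) + 1.0) / (len(null_max_cluster_stats) + 1.0)
print(p_val)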
# configure variables for visualization
colors = {"Aud": "crimson", "Vis": 'steelblue'}
linestyles = {"L": '-', "R": '--'}
# get sensor positions via layout
pos = mne.find_layout(epochs.info).pos
# organize data for plotting
evokeds = {cond: epochs[cond].average() for cond in event_id}
# loop over clusters
for i_clu, clu_idx in enumerate(good_cluster_inds):
# unpack cluster information, get unique indices
time_inds, space_inds = np.squeeze(clusters[clu_idx])
ch_inds = np.unique(space_inds)
time_inds = np.unique(time_inds)
# get topography for F stat
f_map = T_obs[time_inds, ...].mean(axis=0)
# get signals at the sensors contributing to the cluster
sig_times = epochs.times[time_inds]
# create spatial mask
mask = np.zeros((f_map.shape[0], 1), dtype=bool)
mask[ch_inds, :] = True
# initialize figure
fig, ax_topo = plt.subplots(1, 1, figsize=(10, 3))
# plot average test statistic and mark significant sensors
image, _ = plot_topomap(f_map, pos, mask=mask, axes=ax_topo, cmap='Reds',
vmin=np.min, vmax=np.max, show=False)
# create additional axes (for ERF and colorbar)
divider = make_axes_locatable(ax_topo)
# add axes for colorbar
ax_colorbar = divider.append_axes('right', size='5%', pad=0.05)
plt.colorbar(image, cax=ax_colorbar)
ax_topo.set_xlabel(
'Averaged F-map ({:0.3f} - {:0.3f} s)'.format(*sig_times[[0, -1]]))
# add new axis for time courses and plot time courses
ax_signals = divider.append_axes('right', size='300%', pad=1.2)
title = 'Cluster #{0}, {1} sensor'.format(i_clu + 1, len(ch_inds))
if len(ch_inds) > 1:
title += "s (mean)"
plot_compare_evokeds(evokeds, title=title, picks=ch_inds, axes=ax_signals,
colors=colors, linestyles=linestyles, show=False,
split_legend=True, truncate_yaxis='max_ticks')
# plot temporal cluster extent
ymin, ymax = ax_signals.get_ylim()
ax_signals.fill_betweenx((ymin, ymax), sig_times[0], sig_times[-1],
color='orange', alpha=0.3)
# clean up viz
mne.viz.tight_layout(fig=fig)
fig.subplots_adjust(bottom=.05)
plt.show()
Explanation: Note. The same functions work with source estimate. The only differences
are the origin of the data, the size, and the connectivity definition.
It can be used for single trials or for groups of subjects.
Visualize clusters
End of explanation |
13,904 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Word Frequencies
Can we identify different types of text documents based on the frequency of their words? Can we identify different authors, styles, or disciplines like medical versus information technology?
We can start with counting the occurrence of words in a document. To do this, words should be converted to one case (e.g. lower case), and all punctuation characters should be eliminated.
Our program reads a (plain) text file, isolates individual words, and computes their frequencies in the document.
The following steps outline the process
Step1: For example
Step2: Load everything at once
Step3: Note
Step4: Read everything at once...
Step5: Pull text from Hadoop File System (HDFS)
We're usually interested in fairly big data sets which we keep on the Hadoop File System. All Hadoop and Spark functions can uncompress text files on the fly. Therefore they are stored in a compressed format (.gz).
Step6: In order to read the text files within an entire directory we have to first get the list, and then iterate through it.
Step7: Clean up text
We need to know about some string operations
In particular how to change to lower case and replace special characters.
Step8: Lists and Tuples
Review list operations, such as appending elements, concatenating lists, etc. Python also provides a structure for tuples, which are quite useful.
Step9: Dictionaries
Dictionaries serve as associative arrays that bind keys to values. These can be used to keep track of the individual words; looking a value up by its key is fast, although going the other way (finding keys from values) can be time consuming.
Step10: Sorting
Here's an example for sorting a list of tuples. | Python Code:
from urllib.request import urlopen
# from urllib.request import *
# in order to get the help text, we should import the whole subpackage.
import urllib.request
help(urllib.request)
help(urlopen)
Explanation: Word Frequencies
Can we identify different types of text documents based on the frequency of their words? Can we identify different authors, styles, or disciplines like medical versus information technology?
We can start with counting the occurrence of words in a document. To do this, words should be converted to one case (e.g. lower case), and all punctuation characters should be eliminated.
Our program reads a (plain) text file, isolates individual words, and computes their frequencies in the document.
The following steps outline the process:
1. load text data
2. clean up text, convert characters, and transform to a list of words
3. count the occurrence of words
Load Text Data
The following shows how to load data from a web-site, local file system, and the Hadoop File System.
Pull text documents from the web
Instead of saving documents on the local file system, we can also load them directly from the Web. The mechanism of loading from a URL is quite different from opening a local file. Fortunately, libraries like urllib make this operation fairly easy.
End of explanation
with urlopen('http://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt') as src:
txt = src.readlines()
for t in txt[244:250]:
print(t.decode())
Explanation: For example: load the collection of Shakespeare's work and print a couple of rows. (The first 244 lines of this particular document are copyright information, and should be skipped.)
End of explanation
data = urlopen('http://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt').read().decode()
data[0:100]
Explanation: Load everything at once:
End of explanation
with open('textfiles/shakespeare.txt', 'r') as src:
txt = src.readlines()
for t in txt[0:10]:
print(t) ## Note: we don't need to decode the string
Explanation: Note: there is a difference between read and readlines. While read loads the entire content into a single string of bytes, readlines returns a list of lines, so we can iterate over sections of the input stream that are separated by the new-line character(s).
Pull text from local files
Alternatively, we may just read from a local file.
End of explanation
txt = open('textfiles/shakespeare.txt', 'r').read()
txt[0:100]
Explanation: Read everything at once...
End of explanation
import zlib
from hdfs import InsecureClient
client = InsecureClient('http://backend-0-0:50070')
with client.read('/user/pmolnar/data/20news/20news-bydate-test/talk.politics.mideast/77239.gz') as reader:
txt = zlib.decompress(reader.read(), 16+zlib.MAX_WBITS).decode()
txt[0:100]
txt.split('\n')
Explanation: Pull text from Hadoop File System (HDFS)
We're usually interested in fairly big data sets which we keep on the Hadoop File System. All Hadoop and Spark functions can uncompress text files on the fly. Therefore they are stored in a compressed format (.gz).
End of explanation
dir_list = client.list('/user/pmolnar/data/20news/20news-bydate-test/talk.politics.mideast/')
dir_list[0:10]
text_docs = []
for f in dir_list:
with client.read('/user/pmolnar/data/20news/20news-bydate-test/talk.politics.mideast/%s' % f) as reader:
txt = zlib.decompress(reader.read(), 16+zlib.MAX_WBITS).decode()
text_docs.append(txt)
print("Read %d text files." % len(text_docs))
text_docs[1:3]
Explanation: In order to read the text files within an entire directory we have to first get the list, and then iterate through it.
End of explanation
import string
help(string)
txt = open("textfiles/shakespeare.txt").read()
txt[0:100]
txt = txt.lower()
for c in '.;!\'" ':
txt = txt.replace(c, '\n')
txt[0:100]
word_list = txt.split('\n')
word_list[0:10]
Explanation: Clean up text
We need to know about some string operations
In particular how to change to lower case and replace special characters.
End of explanation
help(list)
help(tuple)
# Example
a = []
a.append('a')
a.append('z')
a += ['b', 'x', 'c']
a.sort()
a[0:2]
Explanation: Lists and Tuples
Review list operations, such as appending elements, concatenating lists, etc. Python also provides a structure for tuples, which are quite useful.
End of explanation
help(dict)
f = { 'one': 1, 'two': 2}
f['a'] = 0
f
f['one']
f.keys()
f.values()
Ω = 17           # Python 3 identifiers may use Unicode letters
Δ                # NameError: Δ is not defined
'a' in f.keys()  # membership test on the keys
f['b']           # KeyError: 'b' is not a key of f
Explanation: Dictionaries
Dictionaries serve as associative arrays that bind keys to values. These can be used to keep track of the individual words; looking a value up by its key is fast, although going the other way (finding keys from values) can be time consuming.
End of explanation
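A small sketch of the bookkeeping described above: counting words with a plain dict (using .get to avoid the KeyError seen above) and, equivalently, with collections.Counter; the words list here is just a stand-in for the word list built from the text.
from collections import Counter

words = ['the', 'king', 'and', 'the', 'queen', 'and', 'the', 'court']

counts = {}
for w in words:
    counts[w] = counts.get(w, 0) + 1

print(counts)
print(Counter(words).most_common(2))  # Counter does the same bookkeeping for us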
l2 = [3,4,1,45,7,234,123]
l2.sort()
l2
l = [(3,'a'), (9, 'z'), (1, 'y'), (1, 'b'), (5, 'd'), (7, 'x')]
l
def take_first(x):
return x[0]
l.sort(key=take_first)
l
l.sort(key=lambda x: x[0], reverse=True)
l
sorted(l, key=lambda x: x[0], reverse=True)
l
l3 = [10, 110, 12, 1203]
l3.sort(key=lambda x: str(x))
l3
help(sorted)
# curl http://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt | tail -n +245 | tr 'A-Z' 'a-z'| tr ' .?:,;' '\n' | sort | uniq -c | sort -rn | more
txt = open('textfiles/shakespeare.txt', 'r').read()
txt[0:100]
txt2 = txt.replace(',', '\n').replace('.', '\n').replace('?', '\n').replace('!', '\n').replace('\'', '\n').replace('"', '\n').lower()
txt2[0:100]
wordlist = txt2.split()
wordlist.sort()
results = []
current_word = wordlist[0]
current_counter = 1
for w in wordlist[1:]:
if w!=current_word:
results.append((current_word, current_counter))
current_word = w
current_counter = 1
else:
current_counter += 1
results.append((current_word, current_counter))
results.sort(key=lambda x: x[1], reverse=True)
results[0:10]
results[0:10]
results.sort(key=lambda x: x[1], reverse=True)
results[0:10]
wordlist = txt2.split()
reshash = {}
for w in wordlist:
if w in reshash.keys():
reshash[w] += 1
else:
reshash[w] = 1
results = [(k, reshash[k]) for k in reshash.keys()]
results.sort(key=lambda x: x[1], reverse=True)
results[0:10]
Explanation: Sorting
Here's an example for sorting a list of tuples.
End of explanation |
13,905 |
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
Sklearn Decision Tree Regressor - Training a Decision Tree Regression Model
| Python Code::
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_squared_error, mean_absolute_error, max_error, explained_variance_score, mean_absolute_percentage_error
# initialise & fit Decision Tree Regressor
model = DecisionTreeRegressor(criterion='squared_error',
max_depth=None,
min_samples_split=2,
min_samples_leaf=1,
random_state=101)
model.fit(X_train, y_train)
# create dictionary that contains feature importance
feature_importance= dict(zip(X_train.columns, model.feature_importances_))
print('Feature Importance',feature_importance)
# make prediction for test data & evaluate performance
y_pred = model.predict(X_test)
print('RMSE:',mean_squared_error(y_test, y_pred, squared = False))
print('MAE:',mean_absolute_error(y_test, y_pred))
print('MAPE:',mean_absolute_percentage_error(y_test, y_pred))
print('Max Error:',max_error(y_test, y_pred))
print('Explained Variance Score:',explained_variance_score(y_test, y_pred))
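The block above assumes that X_train, X_test, y_train and y_test already exist. A minimal, purely illustrative way to construct them (synthetic data; any real feature table would be split the same way) so the code runs end to end:
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

# Synthetic regression data; a DataFrame is used so that X_train.columns exists above.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=101)
X = pd.DataFrame(X, columns=['feature_%d' % i for i in range(5)])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=101)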
|
13,906 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Principal Component Analysis
by Rene Zhang and Max Margenot
Part of the Quantopian Lecture Series
Step1: We will introduce PCA with an image processing example. A grayscale digital image can be represented by a matrix, whose $(i,j)^{th}$ entry corresponds to the measurement of gray
scale at the $(i,j)^{th}$ pixel. The following gray-scale image has $200 \times 200$ pixels, though it can be changed on the fly. We store it in a matrix $\mathbf{X}$. The number of rows of the $\mathbf{X}$ is $200$, and the number of columns of $\mathbf{X}$ is $200$.
Step2: We start with a simple checkerboard pattern, add some random normal noise, and add a gradient.
Step3: Set each row as a variable, with observations in the columns. Denote the covariance matrix of $\mathbf{X}$ as $\mathbf{C}$, where the size of $\mathbf{C}$ is $m \times m$. $\mathbf{C}$ is a matrix whose $(i,j)^{th}$ entry is the covariance between the $i^{th}$ row and $j^{th}$ row of the matrix $\mathbf{X}$.
Step4: Performing principal component analysis decomposes the matrix $\mathbf{C}$ into
Step5: The function LA.eigh lists the eigenvalues from small to large in $P$. Let us change the order first to list them from largest to smallest and make sure that $\mathbf{L}\mathbf{P}\mathbf{L}^{\top}==\mathbf{C}$.
Step6: Here we plot all of the eigenvalues
Step7: The $i^{th}$ principal component is given as $i^{th}$ row of $\mathbf{V}$,
$$\mathbf{V} =\mathbf{L}^{\top} \mathbf{X}.$$
Step8: If we multiply both sides on the left by $\mathbf{L}$, we get the following
Step9: The proportion of total variance due to the $i^{th}$ principal component is given by the ratio $\frac{\lambda_i}{\lambda_1 + \lambda_2 + \dots + \lambda_m}.$ The sum of the proportions of total variance should be $1$. As we defined, $\lambda_i$ is the $i^{th}$ entry of $\mathbf{P}$,
$$\sum_{i}\frac{P_i}{\text{trace}(P)} = 1$$
Where the trace$(P)$ is the sum of the diagonal of $P$.
Step10: Recall the number of principal components is denoted as $k$. Let $k$ be $10, 20, 30, 60$ as examples and take a look at the corresponding approximated images.
Step11: The number of variables in $X$ is $200$. When reducing the dimension to $k=60$, which uses half of the principal components, the approximated image is close to the original one.
Moving forward, we do not have to do PCA by hand. Luckily, scikit-learn has an implementation that we can use. Next, let us show an example in quantitative finance using sklearn.
PCA on a Portfolio
Construct a portfolio with 10 stocks, IBM, MSFT, FB, T, INTC, ABX, NEM, AU, AEM, GFI. 5 of them are technology related and 5 of them are gold mining companies.
In this case, there are 10 variables (companies), and each column is a variable.
Step12: Notice that the grand bulk of the variance of the returns of these assets can be explained by the first two principal components.
Now we collect the first two principal components and plot their contributions.
Step13: From these principal components we can construct "statistical risk factors", similar to more conventional common risk factors. These should give us an idea of how much of the portfolio's returns comes from some unobservable statistical feature.
Step14: The factor returns here are an analogue to the principal component matrix $\mathbf{V}$ in the image processing example.
Step15: The factor exposures are an analogue to the eigenvector matrix $\mathbf{L}$ in the image processing example. | Python Code:
from numpy import linalg as LA
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Principal Component Analysis
by Rene Zhang and Max Margenot
Part of the Quantopian Lecture Series:
www.quantopian.com/lectures
https://github.com/quantopian/research_public
Applications in many fields, such as image processing, bioinformatics, and quantitative finance, involve large-scale data. Both the size and complexity of this data can make the computations required for analysis practically infeasible. Principal Component Analysis (PCA) is a classical method for dimension reduction. It uses the first several principal components, statistical features that explain most of the variation of a $m \times n$ data matrix $\mathbf{X}$, to describe the large-scale data matrix $\mathbf{X}$ economically.
End of explanation
def generate_test_image(m,n):
X = np.zeros((m,n))
# generate a rectangle
X[25:80,25:80] = 1
# generate a triangle
for i in range(25, 80, 1):
X[i+80:160, 100+i-1] = 2
# generate a circle
for i in range(0,200,1):
for j in range(0,200,1):
if ((i - 135)*(i - 135) +(j - 53)*(j - 53) <= 900):
X[i, j] = 3
return X
X = generate_test_image(200,200)
Explanation: We will introduce PCA with an image processing example. A grayscale digital image can be represented by a matrix, whose $(i,j)^{th}$ entry corresponds to the measurement of gray
scale at the $(i,j)^{th}$ pixel. The following gray-scale image has $200 \times 200$ pixels, though it can be changed on the fly. We store it in a matrix $\mathbf{X}$. The number of rows of the $\mathbf{X}$ is $200$, and the number of columns of $\mathbf{X}$ is $200$.
End of explanation
imgplot = plt.imshow(X, cmap='gray')
plt.title('Original Test Image');
m = X.shape[0] # num of rows
n = X.shape[1] # num of columns
Explanation: We start with a simple checkerboard pattern, add some random normal noise, and add a gradient.
End of explanation
X = np.asarray(X, dtype=np.float64)
C = np.cov(X)
np.linalg.matrix_rank(C)
Explanation: Set each row as a variable, with observations in the columns. Denote the covariance matrix of $\mathbf{X}$ as $\mathbf{C}$, where the size of $\mathbf{C}$ is $m \times m$. $\mathbf{C}$ is a matrix whose $(i,j)^{th}$ entry is the covariance between the $i^{th}$ row and $j^{th}$ row of the matrix $\mathbf{X}$.
End of explanation
P, L = LA.eigh(C)
Explanation: Performing principal component analysis decomposes the matrix $\mathbf{C}$ into:
$$\mathbf{C} = \mathbf{L}\mathbf{P}\mathbf{L}^{\top},$$
where $\mathbf{P}$ is a diagonal matrix $\mathbf{P}=\text{diag}(\lambda_1,\lambda_2,\dots,\lambda_m)$, with $\lambda_1 \geq \lambda_2 \geq \dots \geq \lambda_m \geq 0$ being the eigenvalues of matrix $\mathbf{C}$. The matrix $\mathbf{L}$ is an orthogonal matrix, consisting of the eigenvectors of matrix $\mathbf{C}$.
End of explanation
P = P[::-1]
L = L[:,::-1]
np.allclose(L.dot(np.diag(P)).dot(L.T), C)
Explanation: The function LA.eigh lists the eigenvalues from small to large in $P$. Let us change the order first to list them from largest to smallest and make sure that $\mathbf{L}\mathbf{P}\mathbf{L}^{\top}==\mathbf{C}$.
End of explanation
plt.semilogy(P, '-o')
plt.xlim([1, P.shape[0]])
plt.xlabel('eigenvalue index')
plt.ylabel('eigenvalue in a log scale')
plt.title('Eigenvalues of Covariance Matrix');
Explanation: Here we plot all of the eigenvalues:
End of explanation
V = L.T.dot(X)
V.shape
Explanation: The $i^{th}$ principal component is given as $i^{th}$ row of $\mathbf{V}$,
$$\mathbf{V} =\mathbf{L}^{\top} \mathbf{X}.$$
End of explanation
k = 200
X_tilde = L[:,0:k-1].dot(L[:,0:k-1].T).dot(X)
np.allclose(X_tilde, X)
plt.imshow(X_tilde, cmap='gray')
plt.title('Approximated Image with full rank');
Explanation: If we multiply both sides on the left by $\mathbf{L}$, we get the following:
$$\mathbf{L}\mathbf{L}^{\top} \mathbf{X}= \mathbf{L}\mathbf{V}.$$
The matrix $\mathbf{L}$ is the set of eigenvectors from a covariance matrix , so $\mathbf{L}\mathbf{L}^{\top} = \mathbf{I}$ and $\mathbf{L}\mathbf{L}^{\top}\mathbf{X} = \mathbf{X}$. The relationship among matrices of $\mathbf{X}$, $\mathbf{L}$, and $\mathbf{V}$ can be expressed as
$$\mathbf{X} = \mathbf{L}\mathbf{V}.$$
To approximate $\mathbf{X}$, we use $k$ eigenvectors that have largest eigenvalues:
$$\mathbf{X} \approx \mathbf{L[:, 1:k]}\mathbf{L[:, 1:k]}^{\top} \mathbf{X}.$$
Denote the approximated $\mathbf{X}$ as $\tilde{\mathbf{X}} = \mathbf{L[:, 1:k]}\mathbf{L[:, 1:k]}^{\top} \mathbf{X}$. When $k = m $, the $\tilde{\mathbf{X}}$ should be same as $\mathbf{X}$.
End of explanation
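A quick numerical sanity check of this claim, on a small random matrix separate from the image above: the reconstruction error decreases as more eigenvectors are kept, and vanishes at k = m.
import numpy as np
from numpy import linalg as LA

rng = np.random.RandomState(0)
X_demo = rng.randn(20, 50)                       # 20 variables, 50 observations
P_demo, L_demo = LA.eigh(np.cov(X_demo))
P_demo, L_demo = P_demo[::-1], L_demo[:, ::-1]   # largest eigenvalues first, as above

for k in (2, 5, 10, 20):
    X_approx = L_demo[:, :k].dot(L_demo[:, :k].T).dot(X_demo)
    print(k, LA.norm(X_demo - X_approx))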
(P/P.sum()).sum()
plt.plot((P/P.sum()).cumsum(), '-o')
plt.title('Cumulative Sum of the Proportion of Total Variance')
plt.xlabel('index')
plt.ylabel('Proportion');
Explanation: The proportion of total variance due to the $i^{th}$ principal component is given by the ratio $\frac{\lambda_i}{\lambda_1 + \lambda_2 + \dots + \lambda_m}.$ The sum of the proportions of total variance should be $1$. As we defined, $\lambda_i$ is the $i^{th}$ entry of $\mathbf{P}$,
$$\sum_{i}\frac{P_i}{\text{trace}(P)} = 1$$
Where the trace$(P)$ is the sum of the diagonal of $P$.
End of explanation
X_tilde_10 = L[:,0:10-1].dot(V[0:10-1,:])
X_tilde_20 = L[:,0:20-1].dot(V[0:20-1,:])
X_tilde_30 = L[:,0:30-1].dot(V[0:30-1,:])
X_tilde_60 = L[:,0:60-1].dot(V[0:60-1,:])
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, figsize=(12, 12))
ax1.imshow(X_tilde_10, cmap='gray')
ax1.set(title='Approximated Image with k = 10')
ax2.imshow(X_tilde_20, cmap='gray')
ax2.set(title='Approximated Image with k = 20')
ax3.imshow(X_tilde_30, cmap='gray')
ax3.set(title='Approximated Image with k = 30')
ax4.imshow(X_tilde_60, cmap='gray')
ax4.set(title='Approximated Image with k = 60');
Explanation: Recall the number of principal components is denoted as $k$. Let $k$ be $10, 20, 30, 60$ as examples and take a look at the corresponding approximated images.
End of explanation
symbol = ['IBM','MSFT', 'FB', 'T', 'INTC', 'ABX','NEM', 'AU', 'AEM', 'GFI']
start = "2015-09-01"
end = "2016-11-01"
portfolio_returns = get_pricing(symbol, start_date=start, end_date=end, fields="price").pct_change()[1:]
from sklearn.decomposition import PCA
num_pc = 2
X = np.asarray(portfolio_returns)
[n,m] = X.shape
print 'The number of timestamps is {}.'.format(n)
print 'The number of stocks is {}.'.format(m)
pca = PCA(n_components=num_pc) # number of principal components
pca.fit(X)
percentage = pca.explained_variance_ratio_
percentage_cum = np.cumsum(percentage)
print '{0:.2f}% of the variance is explained by the first 2 PCs'.format(percentage_cum[-1]*100)
pca_components = pca.components_
Explanation: The number of variables in $X$ is $200$. When reducing the dimension to $k=60$, which uses half of the principal components, the approximated image is close to the original one.
Moving forward, we do not have to do PCA by hand. Luckily, scikit-learn has an implementation that we can use. Next, let us show an example in quantitative finance using sklearn.
PCA on a Portfolio
Construct a portfolio with 10 stocks, IBM, MSFT, FB, T, INTC, ABX, NEM, AU, AEM, GFI. 5 of them are technology related and 5 of them are gold mining companies.
In this case, there are 10 variables (companies), and each column is a variable.
End of explanation
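Before plotting, a quick cross-check (a sketch, assuming X, num_pc, and the fitted pca object defined above): the explained-variance ratios reported by sklearn should agree, up to floating-point error, with the eigenvalue proportions of the sample covariance matrix computed directly, as in the by-hand approach used for the image.
# Eigenvalues of the 10 x 10 sample covariance matrix, sorted in descending order.
eigenvalues = np.linalg.eigvalsh(np.cov(X.T))[::-1]
manual_ratio = eigenvalues / eigenvalues.sum()
# Should match pca.explained_variance_ratio_ (the `percentage` array above).
np.allclose(manual_ratio[:num_pc], pca.explained_variance_ratio_)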
x = np.arange(1,len(percentage)+1,1)
plt.subplot(1, 2, 1)
plt.bar(x, percentage*100, align = "center")
plt.title('Contribution of principal components',fontsize = 16)
plt.xlabel('principal components',fontsize = 16)
plt.ylabel('percentage',fontsize = 16)
plt.xticks(x,fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim([0, num_pc+1])
plt.subplot(1, 2, 2)
plt.plot(x, percentage_cum*100,'ro-')
plt.xlabel('principal components',fontsize = 16)
plt.ylabel('percentage',fontsize = 16)
plt.title('Cumulative contribution of principal components',fontsize = 16)
plt.xticks(x,fontsize = 16)
plt.yticks(fontsize = 16)
plt.xlim([1, num_pc])
plt.ylim([50,100]);
Explanation: Notice that the bulk of the variance in the returns of these assets can be explained by the first two principal components.
Now we collect the first two principal components and plot their contributions.
End of explanation
factor_returns = X.dot(pca_components.T)
factor_returns = pd.DataFrame(columns=["factor 1", "factor 2"],
index=portfolio_returns.index,
data=factor_returns)
factor_returns.head()
Explanation: From these principal components we can construct "statistical risk factors", similar to more conventional common risk factors. These should give us an idea of how much of the portfolio's return comes from unobservable statistical features.
End of explanation
factor_exposures = pd.DataFrame(index=["factor 1", "factor 2"],
columns=portfolio_returns.columns,
data = pca.components_).T
factor_exposures
Explanation: The factor returns here are an analogue to the principal component matrix $\mathbf{V}$ in the image processing example.
End of explanation
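As a tie-back to the image example, a short sketch (assuming the factor_returns and factor_exposures DataFrames built above): multiplying the factor returns by the transposed exposures yields a rank-2 approximation of the original return matrix, the portfolio analogue of the truncated image reconstruction (ignoring the small mean returns that sklearn would normally subtract).
# Rank-2 approximation of the portfolio returns from the two statistical factors,
# analogous to L[:, :k].dot(V[:k, :]) in the image example.
returns_rank2 = pd.DataFrame(factor_returns.values.dot(factor_exposures.values.T),
                             index=portfolio_returns.index,
                             columns=portfolio_returns.columns)
returns_rank2.head()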
labels = factor_exposures.index
data = factor_exposures.values
plt.subplots_adjust(bottom = 0.1)
plt.scatter(
data[:, 0], data[:, 1], marker='o', s=300, c='m',
cmap=plt.get_cmap('Spectral'))
plt.title('Scatter Plot of Coefficients of PC1 and PC2')
plt.xlabel('factor exposure of PC1')
plt.ylabel('factor exposure of PC2')
for label, x, y in zip(labels, data[:, 0], data[:, 1]):
plt.annotate(
label,
xy=(x, y), xytext=(-20, 20),
textcoords='offset points', ha='right', va='bottom',
bbox=dict(boxstyle='round,pad=0.5', fc='yellow', alpha=0.5),
arrowprops=dict(arrowstyle = '->', connectionstyle='arc3,rad=0')
);
Explanation: The factor exposures are an analogue to the eigenvector matrix $\mathbf{L}$ in the image processing example.
End of explanation |
13,907 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'miroc-es2h', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: MIROC
Source ID: MIROC-ES2H
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:40
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeors and water vapor from updrafts.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
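For BOOLEAN properties such as this one, the cell's own comments indicate that the value is passed as a bare Python literal rather than a quoted string; a purely hypothetical entry might be:
# Hypothetical example entry only
DOC.set_value(True)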
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
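For a FLOAT property such as this one, the call takes a number directly; for example (the value shown is only a plausible placeholder, not a documented model setting):
# Hypothetical example entry only
DOC.set_value(1361.0)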
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
Solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
13,908 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1"><a href="#Data-Wrangling-with-Pandas"><span class="toc-item-num">1 </span>Data Wrangling with Pandas</a></div><div class="lev2"><a href="#Date/Time-data-handling"><span class="toc-item-num">1.1 </span>Date/Time data handling</a></div><div class="lev2"><a href="#Merging-and-joining-DataFrame-objects"><span class="toc-item-num">1.2 </span>Merging and joining DataFrame objects</a></div><div class="lev2"><a href="#Concatenation"><span class="toc-item-num">1.3 </span>Concatenation</a></div><div class="lev2"><a href="#Exercise-1"><span class="toc-item-num">1.4 </span>Exercise 1</a></div><div class="lev2"><a href="#Reshaping-DataFrame-objects"><span class="toc-item-num">1.5 </span>Reshaping DataFrame objects</a></div><div class="lev2"><a href="#Pivoting"><span class="toc-item-num">1.6 </span>Pivoting</a></div><div class="lev2"><a href="#Data-transformation"><span class="toc-item-num">1.7 </span>Data transformation</a></div><div class="lev3"><a href="#Dealing-with-duplicates"><span class="toc-item-num">1.7.1 </span>Dealing with duplicates</a></div><div class="lev3"><a href="#Value-replacement"><span class="toc-item-num">1.7.2 </span>Value replacement</a></div><div class="lev3"><a href="#Inidcator-variables"><span class="toc-item-num">1.7.3 </span>Inidcator variables</a></div><div class="lev2"><a href="#Categorical-Data"><span class="toc-item-num">1.8 </span>Categorical Data</a></div><div class="lev3"><a href="#Discretization"><span class="toc-item-num">1.8.1 </span>Discretization</a></div><div class="lev3"><a href="#Permutation-and-sampling"><span class="toc-item-num">1.8.2 </span>Permutation and sampling</a></div><div class="lev2"><a href="#Data-aggregation-and-GroupBy-operations"><span class="toc-item-num">1.9 </span>Data aggregation and GroupBy operations</a></div><div class="lev3"><a href="#Apply"><span class="toc-item-num">1.9.1 </span>Apply</a></div><div class="lev2"><a href="#Exercise-2"><span class="toc-item-num">1.10 </span>Exercise 2</a></div><div class="lev2"><a href="#References"><span class="toc-item-num">1.11 </span>References</a></div>
# Data Wrangling with Pandas
Now that we have been exposed to the basic functionality of Pandas, let's explore some more advanced features that will be useful when addressing more complex data management tasks.
As most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.
Step1: Date/Time data handling
Date and time data are inherently problematic. There are an unequal number of days in every month, an unequal number of days in a year (due to leap years), and time zones that vary over space. Yet information about time is essential in many analyses, particularly in the case of time series analysis.
The datetime built-in library handles temporal information down to the microsecond.
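For instance, a minimal sketch of constructing and inspecting datetime objects (the values are arbitrary):
from datetime import datetime
now = datetime.now()                      # current local date and time
someday = datetime(2015, 6, 1, 14, 30)    # year, month, day, hour, minute
print(now.year, someday.hour, someday.minute)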
Step2: In addition to datetime there are simpler objects for date and time information only, respectively.
Step3: Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times
Step4: In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed.
The International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information.
For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc.
Step5: For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram
Step6: Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful
Step7: We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime.
Step8: Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information.
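As a sketch of strptime itself (the date string and format code below are made up, not taken from the AIS data):
from datetime import datetime
datetime.strptime('2009-02-28 16:32:35', '%Y-%m-%d %H:%M:%S')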
Step9: The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically.
Step10: We can convert all the dates in a particular column by using the apply method.
Step11: As a convenience, Pandas has a to_datetime method that will parse and convert an entire Series of formatted strings into datetime objects.
Step12: Pandas also has a custom NA value for missing datetime objects, NaT.
Step13: Also, if to_datetime() has problems parsing any particular date/time format, you can pass the spec in using the format= argument.
The read_* functions now have an optional parse_dates argument that tries to convert any columns passed to it into datetime format upon import
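Assuming a CSV with st_time and end_time columns (the file name and column names here are assumptions, not taken from the dataset), the two approaches might look like:
import pandas as pd
segments = pd.read_csv('transit_segments.csv', parse_dates=['st_time', 'end_time'])
# or, after a plain import, convert a single column:
segments['st_time'] = pd.to_datetime(segments['st_time'])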
Step14: Columns of the datetime type have an accessor to easily extract properties of the data type. This will return a Series, with the same row index as the DataFrame. For example
Step15: This can be used to easily filter rows by particular temporal attributes
Step16: In addition, time zone information can be applied
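Sketches of the .dt accessor, attribute-based filtering, and time zone handling (the st_time column name is an assumption):
segments['st_time'].dt.month.head()                     # calendar month of each timestamp
segments[segments['st_time'].dt.year == 2009].head()    # keep only rows from one year
segments['st_time'].dt.tz_localize('UTC').dt.tz_convert('US/Eastern').head()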
Step17: Merging and joining DataFrame objects
Now that we have the vessel transit information as we need it, we may want a little more information regarding the vessels themselves. In the data/AIS folder there is a second table that contains information about each of the ships that traveled the segments in the segments table.
Step18: The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.
In Pandas, we can combine tables according to the value of one or more keys that are used to identify rows, much like an index. Using a trivial example
Step19: Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables.
Notice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables.
Step20: The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform left and right joins to include all rows of the left or right table (i.e. the first or second argument to merge), respectively, but not necessarily the other.
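A minimal sketch of the join types on two toy tables (df1 and df2 here are stand-ins, not necessarily the notebook's exact tables):
import pandas as pd
df1 = pd.DataFrame({'id': [1, 2, 3], 'score': [10, 20, 30]})
df2 = pd.DataFrame({'id': [1, 2, 4], 'grade': ['A', 'B', 'C']})
pd.merge(df1, df2)                 # inner join on the shared 'id' key
pd.merge(df1, df2, how='outer')    # union of keys, NaN where a side is missing
pd.merge(df1, df2, how='left')     # keep every row of df1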
Looking at the two datasets that we wish to merge
Step21: we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other.
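The call might then look like this (a sketch; it assumes mmsi is the index of vessels and a column of segments):
segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')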
Step22: In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.
Notice that the mmsi field that was an index on the vessels table is no longer an index on the merged table.
Here, we used the merge function to perform the merge; we could also have used the merge method for either of the tables
Step23: Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append suffixes _x and _y to the columns to uniquely identify them.
Step24: This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right tables, respectively.
Concatenation
A common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the existing rows or columns, respectively. In NumPy, this is done either with concatenate or the convenience "functions" c_ and r_
Step25: Notice that c_ and r_ are not really functions at all, since they perform some sort of indexing operation rather than being called. They are actually class instances, but they are here behaving mostly like functions. Don't think about this too hard; just know that they are there.
This operation is also called binding or stacking.
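A quick sketch of the NumPy versions:
import numpy as np
np.concatenate([np.arange(3), np.arange(4)])    # 1-D concatenation
np.r_[np.arange(3), np.arange(4)]               # row-bind via r_
np.c_[np.arange(3), np.ones(3)]                 # column-bind into a (3, 2) array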
With Pandas' indexed data structures, there are additional considerations as the overlap in index values between two data structures affects how they are concatenated.
Let's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. We will use the first column of each dataset as the index.
Step26: Let's give the index and columns meaningful labels
Step27: The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.
Step28: If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated
Step29: However, the index is no longer unique, due to overlap between the two DataFrames.
Step30: Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames.
Step31: If we are only interested in taxa that are included in both DataFrames, we can specify a join='inner' argument.
Step32: If we wanted to use the second table to fill values absent from the first table, we could use combine_first.
Step33: We can also create a hierarchical index based on keys identifying the original tables.
Step34: Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict, resulting in a "wide" format table.
Step35: If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.
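Sketches of the concat variants described above (mb1 and mb2 stand in for the two microbiome DataFrames):
pd.concat([mb1, mb2], axis=0)                         # stack rows; index values may repeat
pd.concat([mb1, mb2], axis=1, join='inner')           # align columns on the shared index only
pd.concat([mb1, mb2], keys=['patient1', 'patient2'])  # hierarchical index built from keys
pd.concat([mb1, mb2], ignore_index=True)              # behave like numpy.concatenate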
Exercise 1
In the data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single DataFrame, adding the identifying information from the metadata spreadsheet as columns in the combined DataFrame.
Step36: Reshaping DataFrame objects
In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.
This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable
Step37: This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways
Step38: To complement this, unstack pivots from rows back to columns.
Step39: For this dataset, it makes sense to create a hierarchical index based on the patient and observation
Step40: If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
Step41: A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking
Step42: To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one
or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to
the row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments.
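For example (a sketch; the id_vars listed are assumptions about the wide table's columns):
pd.melt(cdystonia_wide, id_vars=['patient', 'site', 'treat', 'age', 'sex'],
        var_name='obs', value_name='twstrs')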
Step43: This illustrates the two formats for longitudinal data
Step44: If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table
Step45: A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.
Step46: For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.
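Minimal sketches of both calls (column names are assumptions about the dystonia table):
cdystonia.pivot_table(index=['site', 'treat'], columns='week',
                      values='twstrs', aggfunc='max')
pd.crosstab(cdystonia.sex, cdystonia.treat)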
Step47: Data transformation
There are a slew of additional operations for DataFrames that we would collectively refer to as "transformations" which include tasks such as removing duplicate values, replacing values, and grouping values.
Dealing with duplicates
We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name
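A sketch of the two relevant methods (the column name is an assumption):
vessels.duplicated(subset='names')        # boolean mask marking repeated names
vessels.drop_duplicates(subset='names')   # keep only the first occurrence of each name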
Step48: Value replacement
Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including them in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset
Step49: A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.
Step50: Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
An example where replacement is useful is dealing with zeros in certain transformations. For example, if we try to take the log of a set of values
Step51: In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace.
Step52: We can also perform the same replacement that we used map for with replace
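Sketches of both approaches (the treatment labels shown are assumed):
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
cdystonia['treat'].map(treatment_map)         # dict-based recoding
cdystonia['treat'].replace(treatment_map)     # the same recoding via replace
pd.Series([0.5, 0.0, 2.0]).replace(0, 1e-6)   # swap zeros before taking logs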
Step53: Indicator variables
For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.
Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, let's filter out the 5 most common types of ships
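The call itself is short (vessels5 is an assumed name for the filtered table):
pd.get_dummies(vessels5['type'], prefix='type').head()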
Step54: Categorical Data
Pandas provides a convenient dtype for representing categorical (factor) data, called category.
For example, the treat column in the cervical dystonia dataset represents three treatment levels in a clinical trial, and is imported by default as an object type, since it is a mixture of string characters.
Step55: We can convert this to a category type either by the Categorical constructor, or casting the column using astype
Step56: By default the Categorical type represents an unordered categorical.
Step57: However, an ordering can be imposed. The order is lexical by default, but will assume the order of the listed categories to be the desired order.
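Sketches of both conversions (the category labels are assumed):
cdystonia['treat'].astype('category')
pd.Categorical(cdystonia['treat'],
               categories=['Placebo', '5000U', '10000U'], ordered=True)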
Step58: The important difference between the category type and the object type is that category is represented by an underlying array of integers, which is then mapped to character labels.
Step59: Notice that these are 8-bit integers, which are essentially single bytes of data, making memory usage lower.
There is also a performance benefit. Consider an operation such as calculating the total segment lengths for each ship in the segments table (this is also a preview of pandas' groupby operation!)
Step60: Hence, we get a considerable speedup simply by using the appropriate dtype for our data.
Discretization
Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!
Let's say we want to bin the ages of the cervical dystonia patients into a smaller number of groups
Step61: Let's transform these data into decades, beginning with individuals in their 20's and ending with those in their 80's
Step62: The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False
Step63: Since the data are now ordinal, rather than numeric, we can give them labels
Step64: A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default
Step65: Alternatively, one can specify custom quantiles to act as cut points
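Sketches of cut and qcut on an age column (the column name, bin edges, and quantiles follow the description but are assumptions):
pd.cut(cdystonia['age'], range(20, 100, 10))                      # decade bins, right-closed
pd.cut(cdystonia['age'], range(20, 100, 10), right=False,
       labels=['20s', '30s', '40s', '50s', '60s', '70s', '80s'])  # left-closed, labelled
pd.qcut(cdystonia['age'], 4)                                      # quartiles
pd.qcut(cdystonia['age'], [0, 0.1, 0.5, 0.9, 1.0])                # custom quantiles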
Step66: Note that you can easily combine discretization with the generation of indicator variables shown above
Step67: Permutation and sampling
For some data analysis tasks, such as simulation, we need to be able to randomly reorder our data, or draw random values from it. Calling NumPy's permutation function with the length of the sequence you want to permute generates an array with a permuted sequence of integers, which can be used to re-order the sequence.
Step68: Using this sequence as an argument to the take method results in a reordered DataFrame
Step69: Compare this ordering with the original
Step70: For random sampling, DataFrame and Series objects have a sample method that can be used to draw samples, with or without replacement
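Sketches of both operations (the DataFrame name is assumed):
import numpy as np
new_order = np.random.permutation(len(segments))
segments.take(new_order).head()            # rows in a random order
segments.sample(n=10)                      # sample without replacement
segments.sample(frac=0.1, replace=True)    # bootstrap-style sample with replacement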
Step71: Data aggregation and GroupBy operations
One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example
Step72: This grouped dataset is hard to visualize
Step73: However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups
Step74: A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
For example, we may want to aggregate our data with some function.
<div align="right">*(figure taken from "Python for Data Analysis", p.251)*</div>
We can aggregate in Pandas using the aggregate (or agg, for short) method
Step75: Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to aggregate string variables, these columns are simply ignored by the method.
Some aggregation functions are so common that Pandas has a convenience method for them, such as mean
Step76: The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation
Step77: If we wish, we can easily aggregate according to multiple keys
Step78: Alternately, we can transform the data, using a function of our choice with the transform method
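A few sketches of these patterns (column names are assumptions about the dystonia table):
grouped = cdystonia.groupby('patient')
grouped['twstrs'].agg('mean')                           # aggregate a single column
grouped[['twstrs']].mean().add_suffix('_mean')          # label the aggregated column
cdystonia.groupby(['treat', 'week'])['twstrs'].mean()   # multiple grouping keys
normalize = lambda x: (x - x.mean()) / x.std()
grouped['twstrs'].transform(normalize)                  # result keeps the original shape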
Step79: It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns
Step80: If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed
Step81: By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way
Step82: It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index
Step83: Apply
We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine the results into a DataFrame.
The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.
Step84: To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship
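A sketch of such a helper and its use with apply (the mmsi and seg_length column names are assumptions):
def top(df, column, n=5):
    # sort descending by the given column and keep the first n rows
    return df.sort_values(by=column, ascending=False)[:n]

segments_merged.groupby('mmsi').apply(top, 'seg_length', 3)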
Step85: Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.
Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index.
Step86: Using the string methods split and join we can create an index that just uses the first three classifications
Step87: However, since there are multiple taxonomic units with the same class, our index is no longer unique
Step88: We can re-establish a unique index by summing all rows with the same class, using groupby
Step89: Exercise 2
Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.
Step90: Women and children first?
Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings.
Use the groupby method to calculate the proportion of passengers that survived by sex.
Calculate the same proportion, but by class and sex.
Create age categories | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context('notebook')
Explanation: Table of Contents
<p><div class="lev1"><a href="#Data-Wrangling-with-Pandas"><span class="toc-item-num">1 </span>Data Wrangling with Pandas</a></div><div class="lev2"><a href="#Date/Time-data-handling"><span class="toc-item-num">1.1 </span>Date/Time data handling</a></div><div class="lev2"><a href="#Merging-and-joining-DataFrame-objects"><span class="toc-item-num">1.2 </span>Merging and joining DataFrame objects</a></div><div class="lev2"><a href="#Concatenation"><span class="toc-item-num">1.3 </span>Concatenation</a></div><div class="lev2"><a href="#Exercise-1"><span class="toc-item-num">1.4 </span>Exercise 1</a></div><div class="lev2"><a href="#Reshaping-DataFrame-objects"><span class="toc-item-num">1.5 </span>Reshaping DataFrame objects</a></div><div class="lev2"><a href="#Pivoting"><span class="toc-item-num">1.6 </span>Pivoting</a></div><div class="lev2"><a href="#Data-transformation"><span class="toc-item-num">1.7 </span>Data transformation</a></div><div class="lev3"><a href="#Dealing-with-duplicates"><span class="toc-item-num">1.7.1 </span>Dealing with duplicates</a></div><div class="lev3"><a href="#Value-replacement"><span class="toc-item-num">1.7.2 </span>Value replacement</a></div><div class="lev3"><a href="#Inidcator-variables"><span class="toc-item-num">1.7.3 </span>Inidcator variables</a></div><div class="lev2"><a href="#Categorical-Data"><span class="toc-item-num">1.8 </span>Categorical Data</a></div><div class="lev3"><a href="#Discretization"><span class="toc-item-num">1.8.1 </span>Discretization</a></div><div class="lev3"><a href="#Permutation-and-sampling"><span class="toc-item-num">1.8.2 </span>Permutation and sampling</a></div><div class="lev2"><a href="#Data-aggregation-and-GroupBy-operations"><span class="toc-item-num">1.9 </span>Data aggregation and GroupBy operations</a></div><div class="lev3"><a href="#Apply"><span class="toc-item-num">1.9.1 </span>Apply</a></div><div class="lev2"><a href="#Exercise-2"><span class="toc-item-num">1.10 </span>Exercise 2</a></div><div class="lev2"><a href="#References"><span class="toc-item-num">1.11 </span>References</a></div>
# Data Wrangling with Pandas
Now that we have been exposed to the basic functionality of Pandas, lets explore some more advanced features that will be useful when addressing more complex data management tasks.
As most statisticians/data analysts will admit, often the lion's share of the time spent implementing an analysis is devoted to preparing the data itself, rather than to coding or running a particular model that uses the data. This is where Pandas and Python's standard library are beneficial, providing high-level, flexible, and efficient tools for manipulating your data as needed.
End of explanation
from datetime import datetime
now = datetime.now()
now
now.day
now.weekday()
Explanation: Date/Time data handling
Date and time data are inherently problematic. Months have unequal numbers of days, years have an unequal number of days (due to leap years), and time zones vary over space. Yet information about time is essential in many analyses, particularly in the case of time series analysis.
The built-in datetime library handles temporal information down to the microsecond.
End of explanation
from datetime import date, time
time(3, 24)
date(1970, 9, 3)
Explanation: In addition to datetime there are simpler objects for date and time information only, respectively.
End of explanation
my_age = now - datetime(1970, 1, 1)
my_age
print(type(my_age))
my_age.days/365
Explanation: Having a custom data type for dates and times is convenient because we can perform operations on them easily. For example, we may want to calculate the difference between two times:
End of explanation
segments = pd.read_csv("Data/AIS/transit_segments.csv")
segments.head()
Explanation: In this section, we will manipulate data collected from ocean-going vessels on the eastern seaboard. Vessel operations are monitored using the Automatic Identification System (AIS), a safety at sea navigation technology which vessels are required to maintain and that uses transponders to transmit very high frequency (VHF) radio signals containing static information including ship name, call sign, and country of origin, as well as dynamic information unique to a particular voyage such as vessel location, heading, and speed.
The International Maritime Organization’s (IMO) International Convention for the Safety of Life at Sea requires functioning AIS capabilities on all vessels 300 gross tons or greater and the US Coast Guard requires AIS on nearly all vessels sailing in U.S. waters. The Coast Guard has established a national network of AIS receivers that provides coverage of nearly all U.S. waters. AIS signals are transmitted several times each minute and the network is capable of handling thousands of reports per minute and updates as often as every two seconds. Therefore, a typical voyage in our study might include the transmission of hundreds or thousands of AIS encoded signals. This provides a rich source of spatial data that includes both spatial and temporal information.
For our purposes, we will use summarized data that describes the transit of a given vessel through a particular administrative area. The data includes the start and end time of the transit segment, as well as information about the speed of the vessel, how far it travelled, etc.
End of explanation
segments.seg_length.hist(bins=500)
Explanation: For example, we might be interested in the distribution of transit lengths, so we can plot them as a histogram:
End of explanation
segments.seg_length.apply(np.log).hist(bins=500)
Explanation: Though most of the transits appear to be short, there are a few longer distances that make the plot difficult to read. This is where a transformation is useful:
End of explanation
segments.st_time.dtype
Explanation: We can see that although there are date/time fields in the dataset, they are not in any specialized format, such as datetime.
End of explanation
datetime.strptime(segments.st_time.iloc[0], '%m/%d/%y %H:%M')
Explanation: Our first order of business will be to convert these data to datetime. The strptime method parses a string representation of a date and/or time field, according to the expected format of this information.
End of explanation
from dateutil.parser import parse
parse(segments.st_time.iloc[0])
Explanation: The dateutil package includes a parser that attempts to detect the format of the date strings, and convert them automatically.
End of explanation
segments.st_time.apply(lambda d: datetime.strptime(d, '%m/%d/%y %H:%M'))
Explanation: We can convert all the dates in a particular column by using the apply method.
End of explanation
pd.to_datetime(segments.st_time[:10])
Explanation: As a convenience, Pandas has a to_datetime method that will parse and convert an entire Series of formatted strings into datetime objects.
End of explanation
pd.to_datetime([None])
Explanation: Pandas also has a custom NA value for missing datetime objects, NaT.
End of explanation
segments = pd.read_csv("Data/AIS/transit_segments.csv", parse_dates=['st_time', 'end_time'])
segments.dtypes
Explanation: Also, if to_datetime() has problems parsing any particular date/time format, you can pass the spec in using the format= argument.
The read_* functions now have an optional parse_dates argument that try to convert any columns passed to it into datetime format upon import:
End of explanation
segments.st_time.dt.month.head()
segments.st_time.dt.hour.head()
Explanation: Columns of the datetime type have an accessor to easily extract properties of the data type. This will return a Series, with the same row index as the DataFrame. For example:
End of explanation
segments[segments.st_time.dt.month==2].head()
Explanation: This can be used to easily filter rows by particular temporal attributes:
End of explanation
segments.st_time.dt.tz_localize('UTC').head()
segments.st_time.dt.tz_localize('UTC').dt.tz_convert('US/Eastern').head()
Explanation: In addition, time zone information can be applied:
End of explanation
vessels = pd.read_csv("Data/AIS/vessel_information.csv", index_col='mmsi')
vessels.head()
[v for v in vessels.type.unique() if v.find('/')==-1]
vessels.type.value_counts()
Explanation: Merging and joining DataFrame objects
Now that we have the vessel transit information as we need it, we may want a little more information regarding the vessels themselves. In the data/AIS folder there is a second table that contains information about each of the ships that traveled the segments in the segments table.
End of explanation
df1 = pd.DataFrame(dict(id=range(4), age=np.random.randint(18, 31, size=4)))
df2 = pd.DataFrame(dict(id=list(range(3))+list(range(3)),
score=np.random.random(size=6)))
df1
df2
pd.merge(df1, df2)
Explanation: The challenge, however, is that several ships have travelled multiple segments, so there is not a one-to-one relationship between the rows of the two tables. The table of vessel information has a one-to-many relationship with the segments.
In Pandas, we can combine tables according to the value of one or more keys that are used to identify rows, much like an index. Using a trivial example:
End of explanation
pd.merge(df1, df2, how='outer')
Explanation: Notice that without any information about which column to use as a key, Pandas did the right thing and used the id column in both tables. Unless specified otherwise, merge will use any common column names as keys for merging the tables.
Notice also that id=3 from df1 was omitted from the merged table. This is because, by default, merge performs an inner join on the tables, meaning that the merged table represents an intersection of the two tables.
End of explanation
segments.head(1)
vessels.head(1)
Explanation: The outer join above yields the union of the two tables, so all rows are represented, with missing values inserted as appropriate. One can also perform right and left joins to include all rows of the right or left table (i.e. first or second argument to merge), but not necessarily the other.
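For instance (a minimal sketch using the toy frames above), a left join keeps every row of df1 and fills unmatched rows with NaN:
pd.merge(df1, df2, how='left')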
Looking at the two datasets that we wish to merge:
End of explanation
segments_merged = pd.merge(vessels, segments, left_index=True, right_on='mmsi')
segments_merged.head()
Explanation: we see that there is a mmsi value (a vessel identifier) in each table, but it is used as an index for the vessels table. In this case, we have to specify to join on the index for this table, and on the mmsi column for the other.
End of explanation
vessels.merge(segments, left_index=True, right_on='mmsi').head()
Explanation: In this case, the default inner join is suitable; we are not interested in observations from either table that do not have corresponding entries in the other.
Notice that mmsi field that was an index on the vessels table is no longer an index on the merged table.
Here, we used the merge function to perform the merge; we could also have used the merge method for either of the tables:
End of explanation
segments['type'] = 'foo'
pd.merge(vessels, segments, left_index=True, right_on='mmsi').head()
Explanation: Occasionally, there will be fields with the same name in both tables that we do not wish to use to join the tables; they may contain different information, despite having the same name. In this case, Pandas will by default append suffixes _x and _y to the columns to uniquely identify them.
End of explanation
np.concatenate([np.random.random(5), np.random.random(5)])
np.r_[np.random.random(5), np.random.random(5)]
np.c_[np.random.random(5), np.random.random(5)]
Explanation: This behavior can be overridden by specifying a suffixes argument, containing a list of the suffixes to be used for the columns of the left and right columns, respectively.
Concatenation
A common data manipulation is appending rows or columns to a dataset that already conform to the dimensions of the exsiting rows or colums, respectively. In NumPy, this is done either with concatenate or the convenience "functions" c_ and r_:
End of explanation
mb1 = pd.read_excel('Data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('Data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb1.shape, mb2.shape
mb1.head()
Explanation: Notice that c_ and r_ are not really functions at all; they perform an indexing-style operation rather than being called. They are actually class instances, but they behave here mostly like functions. Don't think about this too hard; just know that they are there.
This operation is also called binding or stacking.
With Pandas' indexed data structures, there are additional considerations, as the overlap in index values between two data structures affects how they are concatenated.
Let's import two microbiome datasets, each consisting of counts of microorganisms from a particular patient. We will use the first column of each dataset as the index.
End of explanation
mb1.columns = mb2.columns = ['Count']
mb1.index.name = mb2.index.name = 'Taxon'
mb1.head()
Explanation: Let's give the index and columns meaningful labels:
End of explanation
mb1.index[:3]
mb1.index.is_unique
Explanation: The index of these data is the unique biological classification of each organism, beginning with domain, phylum, class, and for some organisms, going all the way down to the genus level.
End of explanation
pd.concat([mb1, mb2], axis=0).shape
Explanation: If we concatenate along axis=0 (the default), we will obtain another data frame with the rows concatenated:
End of explanation
pd.concat([mb1, mb2], axis=0).index.is_unique
Explanation: However, the index is no longer unique, due to overlap between the two DataFrames.
End of explanation
pd.concat([mb1, mb2], axis=1).shape
pd.concat([mb1, mb2], axis=1).head()
Explanation: Concatenating along axis=1 will concatenate column-wise, but respecting the indices of the two DataFrames.
End of explanation
pd.concat([mb1, mb2], axis=1, join='inner').head()
Explanation: If we are only interested in taxa that are included in both DataFrames, we can specify a join=inner argument.
End of explanation
mb1.combine_first(mb2).head()
Explanation: If we wanted to use the second table to fill values absent from the first table, we could use combine_first.
End of explanation
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).head()
pd.concat([mb1, mb2], keys=['patient1', 'patient2']).index.is_unique
Explanation: We can also create a hierarchical index based on keys identifying the original tables.
End of explanation
pd.concat(dict(patient1=mb1, patient2=mb2), axis=1).head()
Explanation: Alternatively, you can pass keys to the concatenation by supplying the DataFrames (or Series) as a dict, resulting in a "wide" format table.
End of explanation
# Loading all the .xls files one by one
mb1 = pd.read_excel('Data/microbiome/MID1.xls', 'Sheet 1', index_col=0, header=None)
mb2 = pd.read_excel('Data/microbiome/MID2.xls', 'Sheet 1', index_col=0, header=None)
mb3 = pd.read_excel('Data/microbiome/MID3.xls', 'Sheet 1', index_col=0, header=None)
mb4 = pd.read_excel('Data/microbiome/MID4.xls', 'Sheet 1', index_col=0, header=None)
mb5 = pd.read_excel('Data/microbiome/MID5.xls', 'Sheet 1', index_col=0, header=None)
mb6 = pd.read_excel('Data/microbiome/MID6.xls', 'Sheet 1', index_col=0, header=None)
mb7 = pd.read_excel('Data/microbiome/MID7.xls', 'Sheet 1', index_col=0, header=None)
mb8 = pd.read_excel('Data/microbiome/MID8.xls', 'Sheet 1', index_col=0, header=None)
mb9 = pd.read_excel('Data/microbiome/MID9.xls', 'Sheet 1', index_col=0, header=None)
# Each of these files contains two columns: the name of the taxon and a count. So we name the second column 'Count' to keep the same meaning.
mb1.columns = mb2.columns = mb3.columns = mb4.columns = mb5.columns = mb6.columns = mb7.columns = mb8.columns = mb9.columns = ['Count']
# Same here for the first column by adding the name of the taxon.
mb1.index.name = mb2.index.name = mb3.index.name = mb4.index.name = mb5.index.name = mb6.index.name = mb7.index.name = mb8.index.name = mb9.index.name = 'Taxon'
# Now we'll add three columns which are defined in the metadata file : the barcode, the group and the sample type of each excel file.
dataframe = pd.concat([mb1, mb2, mb3, mb4, mb5, mb6, mb7, mb8, mb9], axis=0)
dataframe['Barcode']=['MID1']*len(mb1) + ['MID2']*len(mb2) + ['MID3']*len(mb3) + ['MID4']*len(mb4)+ ['MID5']*len(mb5)+ ['MID6']*len(mb6)+ ['MID7']*len(mb7)+ ['MID8']*len(mb8)+ ['MID9']*len(mb9)
dataframe['Group']=['Extraction Control']*len(mb1) + ['NEC 1']*len(mb2) + ['Control 1']*len(mb3) + ['NEC 2']*len(mb4)+ ['Control 2']*len(mb5)+ ['NEC 1']*len(mb6)+ ['Control 1']*len(mb7)+ ['NEC 2']*len(mb8)+ ['Control 2']*len(mb9)
dataframe['Sample']=['NA']*len(mb1) + ['tissue']*len(mb2) + ['tissue']*len(mb3) + ['tissue']*len(mb4)+ ['tissue']*len(mb5)+ ['stool']*len(mb6)+ ['stool']*len(mb7)+ ['stool']*len(mb8)+ ['stool']*len(mb9)
dataframe.tail()
type(dataframe)
Explanation: If you want concat to work like numpy.concatenate, you may provide the ignore_index=True argument.
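A minimal sketch: this drops the Taxon index and relabels the rows 0 through n-1:
pd.concat([mb1, mb2], ignore_index=True).head()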
Exercise 1
In the data/microbiome subdirectory, there are 9 spreadsheets of microbiome data that was acquired from high-throughput RNA sequencing procedures, along with a 10th file that describes the content of each. Write code that imports each of the data spreadsheets and combines them into a single DataFrame, adding the identifying information from the metadata spreadsheet as columns in the combined DataFrame.
End of explanation
cdystonia = pd.read_csv("Data/cdystonia.csv", index_col=None)
cdystonia.head()
Explanation: Reshaping DataFrame objects
In the context of a single DataFrame, we are often interested in re-arranging the layout of our data.
This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia from nine U.S. sites.
Randomized to placebo (N=36), 5000 units of BotB (N=36), 10,000 units of BotB (N=37)
Response variable: total score on Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment)
TWSTRS measured at baseline (week 0) and weeks 2, 4, 8, 12, 16 after treatment began
End of explanation
stacked = cdystonia.stack()
stacked
Explanation: This dataset includes repeated measurements of the same individuals (longitudinal data). It's possible to present such information in (at least) two ways: showing each repeated measurement in its own row, or in multiple columns representing multiple measurements.
The stack method rotates the data frame so that columns are represented in rows:
End of explanation
stacked.unstack().head()
Explanation: To complement this, unstack pivots from rows back to columns.
End of explanation
cdystonia2 = cdystonia.set_index(['patient','obs'])
cdystonia2.head()
cdystonia2.index.is_unique
Explanation: For this dataset, it makes sense to create a hierarchical index based on the patient and observation:
End of explanation
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]
.drop_duplicates()
.merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
.head())
cdystonia_wide
Explanation: If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
End of explanation
(cdystonia.set_index(['patient','site','id','treat','age','sex','week'])['twstrs']
.unstack('week').head())
Explanation: A slightly cleaner way of doing this is to set the patient-level information as an index before unstacking:
End of explanation
pd.melt(cdystonia_wide, id_vars=['patient','site','id','treat','age','sex'],
var_name='obs', value_name='twstrs').head()
Explanation: To convert our "wide" format back to long, we can use the melt function, appropriately parameterized. This function is useful for DataFrames where one
or more columns are identifier variables (id_vars), with the remaining columns being measured variables (value_vars). The measured variables are "unpivoted" to
the row axis, leaving just two non-identifier columns, a variable and its corresponding value, which can both be renamed using optional arguments.
End of explanation
cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()
Explanation: This illustrates the two formats for longitudinal data: long and wide formats. It's typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected.
The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them.
Pivoting
The pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively.
For example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:
End of explanation
cdystonia.pivot('patient', 'obs')
Explanation: If we omit the values argument, we get a DataFrame with hierarchical columns, just as when we applied unstack to the hierarchically-indexed table:
End of explanation
cdystonia.pivot_table(index=['site', 'treat'], columns='week', values='twstrs',
aggfunc=max).head(20)
Explanation: A related method, pivot_table, creates a spreadsheet-like table with a hierarchical index, and allows the values of the table to be populated using an arbitrary aggregation function.
End of explanation
pd.crosstab(cdystonia.sex, cdystonia.site)
Explanation: For a simple cross-tabulation of group frequencies, the crosstab function (not a method) aggregates counts of data according to factors in rows and columns. The factors may be hierarchical if desired.
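For example (a sketch), the rows can be indexed by both sex and treatment:
pd.crosstab([cdystonia.sex, cdystonia.treat], cdystonia.site)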
End of explanation
vessels.duplicated(subset='names')
vessels.drop_duplicates(['names'])
Explanation: Data transformation
There is a slew of additional operations for DataFrames that we would collectively refer to as "transformations", which include tasks such as removing duplicate values, replacing values, and grouping values.
Dealing with duplicates
We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name:
End of explanation
cdystonia.treat.value_counts()
Explanation: Value replacement
Frequently, we get data columns that are encoded as strings that we wish to represent numerically for the purposes of including it in a quantitative analysis. For example, consider the treatment variable in the cervical dystonia dataset:
End of explanation
treatment_map = {'Placebo': 0, '5000U': 1, '10000U': 2}
cdystonia['treatment'] = cdystonia.treat.map(treatment_map)
cdystonia.treatment
Explanation: A logical way to specify these numerically is to change them to integer values, perhaps using "Placebo" as a baseline value. If we create a dict with the original values as keys and the replacements as values, we can pass it to the map method to implement the changes.
End of explanation
vals = pd.Series([float(i)**10 for i in range(10)])
vals
np.log(vals)
Explanation: Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method.
An example where replacement is useful is dealing with zeros in certain transformations. For example, if we try to take the log of a set of values:
End of explanation
vals = vals.replace(0, 1e-6)
np.log(vals)
Explanation: In such situations, we can replace the zero with a value so small that it makes no difference to the ensuing analysis. We can do this with replace.
End of explanation
cdystonia2.treat.replace({'Placebo': 0, '5000U': 1, '10000U': 2})
Explanation: We can also perform the same replacement that we used map for with replace:
End of explanation
top5 = vessels.type.isin(vessels.type.value_counts().index[:5])
top5.head(10)
vessels5 = vessels[top5]
pd.get_dummies(vessels5.type).head(10)
Explanation: Indicator variables
For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward.
Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, let's restrict the data to the 5 most common types of ships:
End of explanation
cdystonia.treat.head()
Explanation: Categorical Data
Pandas provides a convenient dtype for reprsenting categorical (factor) data, called category.
For example, the treat column in the cervical dystonia dataset represents three treatment levels in a clinical trial, and is imported by default as an object type, since it is a mixture of string characters.
End of explanation
pd.Categorical(cdystonia.treat)
cdystonia['treat'] = cdystonia.treat.astype('category')
cdystonia.treat.describe()
Explanation: We can convert this to a category type either by the Categorical constructor, or casting the column using astype:
End of explanation
cdystonia.treat.cat.categories
Explanation: By default the Categorical type represents an unordered categorical.
End of explanation
cdystonia.treat.cat.categories = ['Placebo', '5000U', '10000U']
cdystonia.treat.cat.as_ordered().head()
Explanation: However, an ordering can be imposed. The order is lexical by default, but will assume the order of the listed categories to be the desired order.
End of explanation
cdystonia.treat.cat.codes
Explanation: The important difference between the category type and the object type is that category is represented by an underlying array of integers, which is then mapped to character labels.
End of explanation
%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()
segments['name'] = segments.name.astype('category')
%time segments.groupby(segments.name).seg_length.sum().sort_values(ascending=False, inplace=False).head()
Explanation: Notice that these are 8-bit integers, which are essentially single bytes of data, making memory usage lower.
There is also a performance benefit. Consider an operation such as calculating the total segment lengths for each ship in the segments table (this is also a preview of pandas' groupby operation!):
End of explanation
cdystonia.age.describe()
Explanation: Hence, we get a considerable speedup simply by using the appropriate dtype for our data.
Discretization
Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly!
Let's say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:
End of explanation
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90])[:30]
Explanation: Let's transform these data into decades, beginnnig with individuals in their 20's and ending with those in their 80's:
End of explanation
pd.cut(cdystonia.age, [20,30,40,50,60,70,80,90], right=False)[:30]
Explanation: The parentheses indicate an open interval, meaning that the interval includes values up to but not including the endpoint, whereas the square bracket is a closed interval, where the endpoint is included in the interval. We can switch the closure to the left side by setting the right flag to False:
End of explanation
pd.cut(cdystonia.age, [20,40,60,80,90], labels=['young','middle-aged','old','really old'])[:30]
Explanation: Since the data are now ordinal, rather than numeric, we can give them labels:
End of explanation
pd.qcut(cdystonia.age, 4)[:30]
Explanation: A related function qcut uses empirical quantiles to divide the data. If, for example, we want the quartiles -- (0-25%], (25-50%], (50-75%], (75-100%] -- we can just specify 4 intervals, which will be equally-spaced by default:
End of explanation
quantiles = pd.qcut(segments.seg_length, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30]
Explanation: Alternatively, one can specify custom quantiles to act as cut points:
End of explanation
pd.get_dummies(quantiles).head(10)
Explanation: Note that you can easily combine discretization with the generation of indicator variables shown above:
End of explanation
new_order = np.random.permutation(len(segments))
new_order[:30]
Explanation: Permutation and sampling
For some data analysis tasks, such as simulation, we need to be able to randomly reorder our data, or draw random values from it. Calling NumPy's permutation function with the length of the sequence you want to permute generates an array with a permuted sequence of integers, which can be used to re-order the sequence.
End of explanation
segments.take(new_order).head()
Explanation: Using this sequence as an argument to the take method results in a reordered DataFrame:
End of explanation
segments.head()
Explanation: Compare this ordering with the original:
End of explanation
vessels.sample(n=10)
vessels.sample(n=10, replace=True)
Explanation: For random sampling, DataFrame and Series objects have a sample method that can be used to draw samples, with or without replacement:
End of explanation
cdystonia_grouped = cdystonia.groupby(cdystonia.patient)
Explanation: Data aggregation and GroupBy operations
One of the most powerful features of Pandas is its GroupBy functionality. On occasion we may want to perform operations on groups of observations within a dataset. For example:
aggregation, such as computing the sum or mean of each group, which involves applying a function to each group and returning the aggregated results
slicing the DataFrame into groups and then doing something with the resulting slices (e.g. plotting; see the sketch after this list)
group-wise transformation, such as standardization/normalization
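As a small illustration of the second point (a sketch, not part of the original notebook), we could plot the mean TWSTRS score for each treatment group:
cdystonia.groupby('treat')['twstrs'].mean().plot(kind='bar')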
End of explanation
cdystonia_grouped
Explanation: This grouped dataset is hard to visualize
End of explanation
for patient, group in cdystonia_grouped:
print('patient', patient)
print('group', group)
Explanation: However, the grouping is only an intermediate step; for example, we may want to iterate over each of the patient groups:
End of explanation
cdystonia_grouped.agg(np.mean).head()
Explanation: A common data analysis procedure is the split-apply-combine operation, which groups subsets of data together, applies a function to each of the groups, then recombines them into a new data table.
For example, we may want to aggregate our data with some function.
<div align="right">*(figure taken from "Python for Data Analysis", p.251)*</div>
We can aggregate in Pandas using the aggregate (or agg, for short) method:
End of explanation
cdystonia_grouped.mean().head()
Explanation: Notice that the treat and sex variables are not included in the aggregation. Since it does not make sense to aggregate non-numeric variables, these columns are simply ignored by the method.
Some aggregation functions are so common that Pandas has a convenience method for them, such as mean:
End of explanation
cdystonia_grouped.mean().add_suffix('_mean').head()
# The median of the `twstrs` variable
cdystonia_grouped['twstrs'].quantile(0.5)
Explanation: The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:
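add_prefix works the same way (a quick sketch):
cdystonia_grouped.mean().add_prefix('mean_').head()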
End of explanation
cdystonia.groupby(['week','site']).mean().head()
Explanation: If we wish, we can easily aggregate according to multiple keys:
End of explanation
normalize = lambda x: (x - x.mean())/x.std()
cdystonia_grouped.transform(normalize).head()
Explanation: Alternately, we can transform the data, using a function of our choice with the transform method:
End of explanation
cdystonia_grouped['twstrs'].mean().head()
# This gives the same result as a DataFrame
cdystonia_grouped[['twstrs']].mean().head()
Explanation: It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns:
End of explanation
chunks = dict(list(cdystonia_grouped))
chunks[4]
Explanation: If you simply want to divide your DataFrame into chunks for later use, it's easy to convert them into a dict so that they can be easily indexed out as needed:
End of explanation
grouped_by_type = cdystonia.groupby(cdystonia.dtypes, axis=1)
{g:grouped_by_type.get_group(g) for g in grouped_by_type.groups}
Explanation: By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way:
End of explanation
cdystonia2.head(10)
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()
Explanation: It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index:
End of explanation
def top(df, column, n=5):
return df.sort_values(by=column, ascending=False)[:n]
Explanation: Apply
We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine the results into a DataFrame.
The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.
End of explanation
top3segments = segments_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments.head(15)
Explanation: To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield segments_merged). Say we wanted to return the 3 longest segments travelled by each ship:
End of explanation
mb1.index[:3]
Explanation: Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument.
Recall the microbiome data sets that we used previously for the concatenation example. Suppose that we wish to aggregate the data at a higher biological classification than genus. For example, we can identify samples down to class, which is the 3rd level of organization in each index.
End of explanation
class_index = mb1.index.map(lambda x: ' '.join(x.split(' ')[:3]))
mb_class = mb1.copy()
mb_class.index = class_index
Explanation: Using the string methods split and join we can create an index that just uses the first three classifications: domain, phylum and class.
End of explanation
mb_class.head()
Explanation: However, since there are multiple taxonomic units with the same class, our index is no longer unique:
End of explanation
mb_class.groupby(level=0).sum().head(10)
Explanation: We can re-establish a unique index by summing all rows with the same class, using groupby:
End of explanation
from IPython.core.display import HTML
HTML(filename='Data/titanic.html')
#import titanic data file
titanic = pd.read_excel("Data/titanic.xls", index_col=None)
titanic.head()
# turn "sex" attribute into numerical attribute
# 0 = male ; 1= female
sex_map = {'male': 0, 'female': 1}
titanic['sex'] = titanic.sex.map(sex_map)
titanic.head()
# clean duplicate values
titanic_2 = titanic.drop_duplicates(['name'])
# inspect attributes as categorical data (note: these pd.Categorical calls only build categorical views;
# they do not modify titanic_2, so the numeric columns stay numeric for the statistics below)
pd.Categorical(titanic_2.pclass)
pd.Categorical(titanic_2.survived)
pd.Categorical(titanic_2.sex)
pd.Categorical(titanic_2.age)
pd.Categorical(titanic_2.sibsp)
pd.Categorical(titanic_2.parch)
pd.Categorical(titanic_2.ticket)
pd.Categorical(titanic_2.fare)
pd.Categorical(titanic_2.cabin)
pd.Categorical(titanic_2.embarked)
pd.Categorical(titanic_2.boat)
pd.Categorical(titanic_2.body)
titanic_2
# describe passenger class
pclasses = titanic_2.pclass.value_counts()
class1 = (pclasses[1]/1307)*100
class2 = (pclasses[2]/1307)*100
class3 = (pclasses[3]/1307)*100
d = {'1st Class' : class1, '2nd Class' : class2, '3rd Class' : class3}
pd.Series(d)
#24% of passengers travelled 1st class, 21% travelled in 2nd class and 54% travelled in 3rd class
# plot classes 1 = 1st 2 = 2nd and 3 = 3rd
pclasses.plot.pie()
# describe passenger survival
survivals = titanic_2.survived.value_counts()
survived = (survivals[1]/1307)*100
survived
# 38.25% of passengers survived
# plot survivals 0 = death & 1 = survival
survivals.plot.pie()
# describe passenger sex
sex = titanic_2.sex.value_counts()
sex
female_ratio = (sex[1]/1307)*100
female_ratio
# results show that about 36% of passengers were female and 64% were male
# plot gender distribution 0 = male & 1 = female
sex.plot.pie()
# calculate proportions of port of embarcation S = Southtampton & C = Cherbourg & Q = Queenstown
port = titanic_2.embarked.value_counts()
S = (port['S']/1307)*100
C = (port['C']/1307)*100
Q = (port['Q']/1307)*100
d = {'S' : S, 'C' : C, 'Q' : Q}
pd.Series(d)
# 20.6% of passengers boarded in C, 9.4% boarded in Q and 69.7% boarded in S.
# plot port of embarkation distribution
port.plot.pie()
# describe passenger age
# assumption - dropping all NaN values and including values of estimated ages
titanic_2age = titanic_2.age.dropna()
titanic_2age.describe()
# results show that mean age was 29.86 y.o.
# min age was 0.16y.o. and max was 80 y.o.
# 25% of passengers under 21, 50% under 28, 75% under 39 y.o.
# show distribution of ages on board
titanic_2age.plot.hist(bins=50)
# describe passenger fare
# assumption - dropping all NaN values
titanic_2fare = titanic_2.fare.dropna()
titanic_2fare.describe()
# results show that mean fare was 33
# min fare was 0 and max was 512
# 25% of passengers paid under 7.9, 50% under 14.5, 75% under 31.27
# show distribution of fares on board
titanic_2fare.plot.hist(bins=50)
# majority of fares under 100 with few outliers
# description of statistics on # of siblings and spouses on board
# assumption - dropping all NaN values and include values which are 0
titanic_2sibsp = titanic_2.sibsp.dropna()
titanic_2sibsp.describe()
# results show that mean # of sibsp was 0.49 siblings or spouses aboard
# min number of siblings or spouses was 0 and max was 8
# 75% of passengers had less than 1 sibling or spouse aboard, indicating outliers above 1
# show distribution of # of siblings and spouses on board
titanic_2sibsp.plot.hist(bins=50)
# description of statistics on # of parents and children on board
# assumption - dropping all NaN values and include values which are 0
titanic_2parch = titanic_2.parch.dropna()
titanic_2parch.describe()
# results show that mean # of parch was 0.38 parents or children aboard
# min number of parents or children was 0 and max was 9
# 75% of passengers had 0 parents or children aboard, indicating many outliers in the data
# show distribution of # of siblings and spouses on board
titanic_2parch.plot.hist(bins=50)
Explanation: Exercise 2
Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.
End of explanation
# Part 2
# Using Groupby to find ratio of survival by sex
sex_survival = titanic.groupby(titanic.survived).sex.value_counts()
sex_survival
# survivers gender profile calculation
surv_tot = sex_survival[1].sum() # calculate total number of survivors
fem_surv = (sex_survival[1,1]/surv_tot)*100 # calculate proportion of survived females
male_surv = (sex_survival[1,0]/surv_tot)*100 # calculate proportion of survived males
out2 = {'Male Survivors' : male_surv , 'Female Survivors' : fem_surv,} # display outputs simultaneously
pd.Series(out2)
# 67.8% of survivors were female and 32.2% were male
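# A more direct alternative (a sketch, not part of the original solution): the
# proportion of passengers of each sex that survived (recall 0 = male, 1 = female)
titanic.groupby('sex').survived.mean()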
# Part 3
# Using Groupby to find ratio of survival by sex and class
# table outputs raw numbers, but not proportions
sex_class = titanic_2.groupby(['survived','sex']).pclass.value_counts()
sex_class
# survivers gender + class profile calculation
data = pd.DataFrame(sex_class) # turn into data set
surv_tot = sex_class[1].sum() # calculate total number of survivors
data['proportion of survived'] = (data/surv_tot)*100 #add column of proportion of survivors
# this column refers to the percentage of people that survived / did not survive that belong to each category (e.g. percentage of non-survivors that were females in second class)
data.loc[1]
# the table below only shows proportions of different categories of people among survivors
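# A more direct alternative for the class/sex breakdown (a sketch, not part of the
# original solution): the proportion of each (class, sex) group that survived
titanic_2.groupby(['pclass', 'sex']).survived.mean()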
# Part 4
# Create Age Categories
# Assumption: Dropped all NaNs
age_group = pd.cut(titanic_2.age, [0,14,20,64,100], labels=['children','adolescents','adult','seniors']) # create age categories
titanic_2['age_group'] = age_group #add column of age group to main dataframe
sex_class_age = titanic_2.groupby(['survived','sex', 'pclass']).age_group.value_counts() #find counts for different combinations of age group, sex and class
sex_class_age
# survivers gender + class + age group profile calculation
data = pd.DataFrame(sex_class_age) # turn into data set
surv_tot = sex_class_age[1].sum() # calculate total number of survivors
data['proportion of survivors'] = (data/surv_tot)*100 #add column of proportion
# this column refers to the percentage of people that survived / did not survive that belong to each category (e.g. percentage of survivors that were old males in first class)
data.loc[1]
# the table below shows proportions of survivals belonging to different categories
Explanation: Women and children first?
Describe each attribute, both with basic statistics and plots. State clearly your assumptions and discuss your findings.
Use the groupby method to calculate the proportion of passengers that survived by sex.
Calculate the same proportion, but by class and sex.
Create age categories: children (under 14 years), adolescents (14-20), adult (21-64), and senior(65+), and calculate survival proportions by age category, class and sex.
End of explanation |
13,909 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook demonstrates how to leverage transfer learning to use your own image dataset to build and train an image classification model using MXNet and Amazon SageMaker.
We use, as an example, the creation of a trash classification model which, given some image, classifies it into one of three classes
Step1: Amazon S3 bucket info
Enter your Amazon S3 Bucket name where your data will be stored, make sure that your SageMaker notebook has access to this S3 Bucket by granting S3FullAccess in the SageMaker role attached to this instance. See here for more info.
DeepLens-compatible buckets must start with deeplens
Step2: We are going to check if we have the right bucket and if we have the right permissions.
Please make sure that the result from this cell is "Bucket access is Ok"
Step3: Prepare data
It is assumed that your custom dataset's images are present in an S3 bucket and that different classes are separated by named folders, as shown in the following directory structure
Step4: Ensure that the newly created directories containing the downloaded data are structured as shown at the beginning of this tutorial.
Step5: Prepare "list" files with train-val split
The image classification algorithm can take two types of input formats. The first is a RecordIO format (content type
Step6: Save lst files to S3
Training models is easy with Amazon SageMaker. When you’re ready to train in SageMaker, simply specify the location of your data in Amazon S3, and indicate the type and quantity of SageMaker ML instances you need. SageMaker sets up a distributed compute cluster, performs the training, outputs the result to Amazon S3, and tears down the cluster when complete.
To use Amazon Sagemaker training we must first transfer our input data to Amazon S3.
Step7: Retrieve dataset size
Let's see the size of train, validation and test datasets
Step8: This marks the end of the data preparation phase.
Train the model
Training a good model from scratch can take a long time. Fortunately, we're able to use transfer learning to fine-tune a model that has been trained on millions of images. Transfer learning allows us to train a model to recognize new classes in minutes instead of hours or days that it would normally take to train the model from scratch. Transfer learning requires a lot less data to train a model than from scratch (hundreds instead of tens of thousands).
Fine-tuning the Image Classification Model
Now that we are done with all the setup that is needed, we are ready to train our trash detector. To begin, let us create a sageMaker.estimator.Estimator object. This estimator will launch the training job.
Training parameters
There are two kinds of parameters that need to be set for training. The first one are the parameters for the training job. These include
Step9: Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are
Step10: Input data specification
Set the data type and channels used for training
Step11: Start the training
Start training by calling the fit method in the estimator
Step12: The output from the above command will have the model accuracy and the time it took to run the training.
You can also view these details by navigating to Training -> Training Jobs -> job_name -> View logs in the Amazon SageMaker console
The model trained above can now be found in the s3
Step13: Deploy to a Sagemaker endpoint
After training your model is complete, you can test your model by asking it to predict the class of a sample trash image that the model has not seen before. This step is called inference.
Amazon SageMaker provides an HTTPS endpoint where your machine learning model is available to provide inferences. For more information see the Amazon SageMaker documentation.
Step14: Test the images against the endpoint
We will use the test images that were kept aside for testing.
Step15: Display confusion matrix showing 'true' and 'predicted' labels
A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known. It's a table with two dimensions ("actual" and "predicted"), and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table). The diagonal values in the table indicate a match between the predicted class and the actual class.
For more details go to Confusion matrix (Wikipedia)
Step16: Approximate costs
As of 03/11/2020 and based on the pricing information displayed on the page
Step17: Rename model to deploy to AWS DeepLens
The MxNet model that is stored in the S3 bucket contains 2 files | Python Code:
import os
import urllib.request
import boto3, botocore
import sagemaker
from sagemaker import get_execution_role
import mxnet as mx
mxnet_path = mx.__file__[ : mx.__file__.rfind('/')]
print(mxnet_path)
role = get_execution_role()
print(role)
sess = sagemaker.Session()
Explanation: This notebook demonstrates how to leverage transfer learning to use your own image dataset to build and train an image classification model using MXNet and Amazon SageMaker.
We use, as an example, the creation of a trash classification model which, given some image, classifies it into one of three classes: compost, landfill, recycle. This is based on the Show Before You Throw project from an AWS DeepLens hackathon and the Smart Recycle Arm project presented at the AWS Public Sector Summit 2019
Prerequisites
Download Data
Fine-tuning the Image Classification Model
Start the Training
Test your Model
Deploy your Model to AWS DeepLens
Prequisites
The Amazon SageMaker notebook should have internet access to download images needed for testing this notebook. This is turned ON by default. To explore the options, review this link : Sagemaker routing options
The IAM role assigned to this notebook should have permissions to create a bucket (if it does not exist)
IAM role for Amazon Sagemaker
S3 create bucket permissions
Permissions and environment variables
Here we set up the linkage and authentication to AWS services. There are 2 parts to this:
The roles used to give learning and hosting access to your data. This will automatically be obtained from the role used to start the notebook
The Amazon SageMaker image classification Docker image, which need not be changed
End of explanation
BUCKET = 'deeplens-<Your-Test-Bucket>'
PREFIX = 'deeplens-trash-test'
from sagemaker.amazon.amazon_estimator import get_image_uri
training_image = get_image_uri(sess.boto_region_name, 'image-classification', repo_version="latest")
print (training_image)
Explanation: Amazon S3 bucket info
Enter your Amazon S3 Bucket name where your data will be stored, make sure that your SageMaker notebook has access to this S3 Bucket by granting S3FullAccess in the SageMaker role attached to this instance. See here for more info.
DeepLens-compatible buckets must start with deeplens
End of explanation
test_data = 'TestData'
s3 = boto3.resource('s3')
object = s3.Object(BUCKET, PREFIX+"/test.txt")
try:
object.put(Body=test_data)
except botocore.exceptions.ClientError as e:
if e.response['Error']['Code'] == "AccessDenied":
#cannot write on the bucket
print("Bucket "+BUCKET+"is not writeable, make sure you have the right permissions")
else:
if e.response['Error']['Code'] == "NoSuchBucket":
#Bucket does not exist
print("Bucket"+BUCKET+" does not exist")
else:
raise
else:
print("Bucket access is Ok")
object.delete()
Explanation: We are going to check if we have the right bucket and if we have the right permissions.
Please make sure that the result from this cell is "Bucket access is Ok"
End of explanation
!wget https://deeplens-public.s3.amazonaws.com/samples/deeplens-trash/trash-images.zip
!rm -rf data/ && mkdir -p data
!mkdir -p data/images
!unzip -qq trash-images.zip -d data/images
!rm trash-images.zip
import matplotlib.pyplot as plt
%matplotlib inline
def show_images(item_name, images_to_show=-1):
_im_list = !ls $IMAGES_DIR/$item_name
NUM_COLS = 3
if images_to_show == -1:
IM_COUNT = len(_im_list)
else:
IM_COUNT = images_to_show
print('Displaying images category ' + item_name + ' count: ' + str(IM_COUNT) + ' images.')
NUM_ROWS = int(IM_COUNT / NUM_COLS)
if ((IM_COUNT % NUM_COLS) > 0):
NUM_ROWS += 1
fig, axarr = plt.subplots(NUM_ROWS, NUM_COLS)
fig.set_size_inches(10.0, 10.0, forward=True)
curr_row = 0
for curr_img in range(IM_COUNT):
# fetch the url as a file type object, then read the image
f = IMAGES_DIR + item_name + '/' + _im_list[curr_img]
a = plt.imread(f)
# find the column by taking the current index modulo 3
col = curr_img % NUM_ROWS
# plot on relevant subplot
if NUM_ROWS == 1:
axarr[curr_row].imshow(a)
else:
axarr[col, curr_row].imshow(a)
if col == (NUM_ROWS - 1):
# we have finished the current row, so increment row counter
curr_row += 1
fig.tight_layout()
plt.show()
# Clean up
plt.clf()
plt.cla()
plt.close()
IMAGES_DIR = 'data/images/'
show_images("Compost", images_to_show=3)
show_images("Landfill", images_to_show=3)
show_images("Recycling", images_to_show=3)
DEST_BUCKET = 's3://'+BUCKET+'/'+PREFIX+'/images/'
!aws s3 cp --recursive data/images $DEST_BUCKET --quiet
Explanation: Prepare data
It is assumed that your custom dataset's images are present in an S3 bucket and that different classes are separated by named folders, as shown in the following directory structure:
```
|-deeplens-bucket
|-deeplens-trash
|-images
|-Compost
|-Landfill
|-Recycle
```
Since we are providing the data for you in this example, first we'll download the example data, unzip it and upload it to your bucket.
End of explanation
!aws s3 ls $DEST_BUCKET
Explanation: Ensure that the newly created directories containing the downloaded data are structured as shown at the beginning of this tutorial.
End of explanation
!python $mxnet_path/tools/im2rec.py --list --recursive --test-ratio=0.02 --train-ratio 0.7 trash data/images
Explanation: Prepare "list" files with train-val split
The image classification algorithm can take two types of input formats. The first is a RecordIO format (content type: application/x-recordio) and the other is an Image list format (.lst file). These file formats allow for efficient loading of images when training the model. In this example we will be using the Image list format (.lst file). A .lst file is a tab-separated file with three columns that contains a list of image files. The first column specifies the image index, the second column specifies the class label index for the image, and the third column specifies the relative path of the image file. The RecordIO file contains the actual pixel data for the images.
To be able to create the .rec files, we first need to split the data (after shuffling) and create list files for each split. Here we split into train, validation and test sets: --train-ratio 0.7 assigns 70% of the images to training, --test-ratio=0.02 keeps 2% aside for testing the model, and the remainder is used for validation.
The image and lst files will be converted to RecordIO files internally by the image classification algorithm, but the same im2rec tool can also do the conversion if you want to create RecordIO files yourself. Note that this is just an option; we are not using RecordIO files for training in this notebook.
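A sketch of that conversion (assuming the trash_*.lst files generated above; the exact flags depend on your im2rec version):
!python $mxnet_path/tools/im2rec.py --resize 224 --quality 90 --num-thread 4 trash data/images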
End of explanation
s3train_lst = 's3://{}/{}/train_lst/'.format(BUCKET, PREFIX)
s3validation_lst = 's3://{}/{}/validation_lst/'.format(BUCKET, PREFIX)
# upload the lst files to train_lst and validation_lst channels
!aws s3 cp trash_train.lst $s3train_lst --quiet
!aws s3 cp trash_val.lst $s3validation_lst --quiet
Explanation: Save lst files to S3
Training models is easy with Amazon SageMaker. When you’re ready to train in SageMaker, simply specify the location of your data in Amazon S3, and indicate the type and quantity of SageMaker ML instances you need. SageMaker sets up a distributed compute cluster, performs the training, outputs the result to Amazon S3, and tears down the cluster when complete.
To use Amazon Sagemaker training we must first transfer our input data to Amazon S3.
End of explanation
f = open('trash_train.lst', 'r')
train_samples = sum(1 for line in f)
f.close()
f = open('trash_val.lst', 'r')
val_samples = sum(1 for line in f)
f.close()
f = open('trash_test.lst', 'r')
test_samples = sum(1 for line in f)
f.close()
print('train_samples:', train_samples)
print('val_samples:', val_samples)
print('test_samples:', test_samples)
Explanation: Retrieve dataset size
Let's see the size of train, validation and test datasets
End of explanation
s3_output_location = 's3://{}/{}/output'.format(BUCKET, PREFIX)
ic = sagemaker.estimator.Estimator(training_image,
role,
train_instance_count=1,
train_instance_type='ml.p2.xlarge',
train_volume_size = 50,
train_max_run = 360000,
input_mode= 'File',
output_path=s3_output_location,
sagemaker_session=sess,
base_job_name='ic-trash')
Explanation: This marks the end of the data preparation phase.
Train the model
Training a good model from scratch can take a long time. Fortunately, we're able to use transfer learning to fine-tune a model that has been trained on millions of images. Transfer learning allows us to train a model to recognize new classes in minutes instead of hours or days that it would normally take to train the model from scratch. Transfer learning requires a lot less data to train a model than from scratch (hundreds instead of tens of thousands).
Fine-tuning the Image Classification Model
Now that we are done with all the setup that is needed, we are ready to train our trash detector. To begin, let us create a sageMaker.estimator.Estimator object. This estimator will launch the training job.
Training parameters
There are two kinds of parameters that need to be set for training. The first one are the parameters for the training job. These include:
Training instance count: This is the number of instances on which to run the training. When the number of instances is greater than one, then the image classification algorithm will run in distributed settings.
Training instance type: This indicates the type of machine on which to run the training. Typically, we use GPU instances for these training jobs.
Output path: This is the S3 folder in which the training output is stored.
End of explanation
ic.set_hyperparameters(num_layers=18,
use_pretrained_model=1,
image_shape = "3,224,224",
num_classes=3,
mini_batch_size=128,
epochs=10,
learning_rate=0.01,
top_k=2,
num_training_samples=train_samples,
resize = 224,
precision_dtype='float32')
Explanation: Apart from the above set of parameters, there are hyperparameters that are specific to the algorithm. These are:
num_layers: The number of layers (depth) for the network. We use 18 in this samples but other values such as 50, 152 can be used.
use_pretrained_model: Set to 1 to use pretrained model for transfer learning.
image_shape: The input image dimensions,'num_channels, height, width', for the network. It should be no larger than the actual image size. The number of channels should be same as the actual image.
num_classes: This is the number of output classes for the new dataset. For us, we have 3 classes: Compost, Landfill, and Recycling.
num_training_samples: This is the total number of training samples. Here we pass the train_samples value computed earlier from trash_train.lst.
mini_batch_size: The number of training samples used for each mini batch. In distributed training, the number of training samples used per batch will be N * mini_batch_size where N is the number of hosts on which training is run.
epochs: Number of training epochs.
learning_rate: Learning rate for training.
top_k: Report the top-k accuracy during training.
resize: Resize the image before using it for training. The images are resized so that the shortest side is of this parameter. If the parameter is not set, then the training data is used as such without resizing.
precision_dtype: Training datatype precision (default: float32). If set to 'float16', the training will be done in mixed_precision mode and will be faster than float32 mode
End of explanation
s3images = 's3://{}/{}/images/'.format(BUCKET, PREFIX)
train_data = sagemaker.session.s3_input(s3images, distribution='FullyReplicated',
content_type='application/x-image', s3_data_type='S3Prefix')
validation_data = sagemaker.session.s3_input(s3images, distribution='FullyReplicated',
content_type='application/x-image', s3_data_type='S3Prefix')
train_data_lst = sagemaker.session.s3_input(s3train_lst, distribution='FullyReplicated',
content_type='application/x-image', s3_data_type='S3Prefix')
validation_data_lst = sagemaker.session.s3_input(s3validation_lst, distribution='FullyReplicated',
content_type='application/x-image', s3_data_type='S3Prefix')
data_channels = {'train': train_data, 'validation': validation_data,
'train_lst': train_data_lst, 'validation_lst': validation_data_lst}
Explanation: Input data specification
Set the data type and channels used for training
End of explanation
ic.fit(inputs=data_channels, logs=True)
Explanation: Start the training
Start training by calling the fit method in the estimator
End of explanation
MODEL_PATH = ic.model_data
print(MODEL_PATH)
Explanation: The output from the above command will have the model accuracy and the time it took to run the training.
You can also view these details by navigating to Training -> Training Jobs -> job_name -> View logs in the Amazon SageMaker console
The model trained above can now be found in the s3://<YOUR_BUCKET>/<PREFIX>/output path.
End of explanation
ic_infer = ic.deploy(initial_instance_count=1, instance_type='local')
Explanation: Deploy to a Sagemaker endpoint
After your model training is complete, you can test the model by asking it to predict the class of a sample trash image that it has not seen before. This step is called inference.
Amazon SageMaker provides an HTTPS endpoint where your machine learning model is available to provide inferences. For more information see the Amazon SageMaker documentation.
End of explanation
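Outside of this notebook (for example from a Lambda function or another application), the same deployed model can be called through the SageMaker runtime API. A minimal sketch, where the endpoint name and image path are placeholders you would replace with your own:
```python
import boto3

runtime = boto3.client('sagemaker-runtime')

with open('sample_trash_image.jpg', 'rb') as f:   # any local test image
    payload = f.read()

response = runtime.invoke_endpoint(
    EndpointName='YOUR-ENDPOINT-NAME',            # placeholder endpoint name
    ContentType='application/x-image',
    Body=payload)

# the body is a JSON list of per-class probabilities
print(response['Body'].read())
```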
object_categories = ['Compost', 'Landfill', 'Recycling']
from IPython.display import Image, display
import json
import numpy as np
def test_model():
preds = []
acts = []
num_errors = 0
with open('trash_test.lst', 'r') as f:
for line in f:
stripped_line = str(line.strip()).split("\t")
file_path = stripped_line[2]
category = int(float(stripped_line[1]))
with open(IMAGES_DIR + stripped_line[2], 'rb') as f:
payload = f.read()
payload = bytearray(payload)
ic_infer.content_type = 'application/x-image'
result = json.loads(ic_infer.predict(payload))
# the result will output the probabilities for all classes
# find the class with maximum probability and print the class index
index = np.argmax(result)
act = object_categories[category]
pred = object_categories[index]
conf = result[index]
print("Result: Predicted: {}, Confidence: {:.2f}, Actual: {} ".format(pred, conf, act))
acts.append(category)
preds.append(index)
if (pred != act):
num_errors += 1
print('ERROR on image -- Predicted: {}, Confidence: {:.2f}, Actual: {}'.format(pred, conf, act))
display(Image(filename=IMAGES_DIR + stripped_line[2], width=100, height=100))
return num_errors, preds, acts
num_errors, preds, acts = test_model()
from sklearn.metrics import confusion_matrix
import numpy as np
import itertools
COLOR = 'green'
plt.rcParams['text.color'] = COLOR
plt.rcParams['axes.labelcolor'] = COLOR
plt.rcParams['xtick.color'] = COLOR
plt.rcParams['ytick.color'] = COLOR
def plot_confusion_matrix(cm, classes,
class_name_list,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.GnBu):
plt.figure(figsize=(7,7))
plt.grid(False)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]),
range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.gca().set_xticklabels(class_name_list)
plt.gca().set_yticklabels(class_name_list)
plt.ylabel('True label')
plt.xlabel('Predicted label')
def create_and_plot_confusion_matrix(actual, predicted):
cnf_matrix = confusion_matrix(actual, np.asarray(predicted),labels=range(len(object_categories)))
plot_confusion_matrix(cnf_matrix, classes=range(len(object_categories)), class_name_list=object_categories)
Explanation: Test the images against the endpoint
We will use the test images that were kept aside for testing.
End of explanation
create_and_plot_confusion_matrix(acts, preds)
Explanation: Display confusion matrix showing 'true' and 'predicted' labels
A confusion matrix is a table that is often used to describe the performance of a classification model (or "classifier") on a set of test data for which the true values are known. It's a table with two dimensions ("actual" and "predicted"), and identical sets of "classes" in both dimensions (each combination of dimension and class is a variable in the contingency table). The diagonal values in the table indicate a match between the predicted class and the actual class.
For more details go to Confusion matrix (Wikipedia)
End of explanation
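Beyond the confusion matrix, per-class precision and recall can be summarized from the same acts and preds lists returned by test_model(); a short sketch using scikit-learn:
```python
from sklearn.metrics import classification_report

# acts/preds hold integer class indices in the same order as object_categories
print(classification_report(acts, preds, target_names=object_categories))
```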
sess.delete_endpoint(ic_infer.endpoint)
print("Completed")
Explanation: Approximate costs
As of 03/11/2020 and based on the pricing information displayed on the page: https://aws.amazon.com/sagemaker/pricing/, here are the costs you can expect in a 24 hour period:
Notebook instance cost \$6: Assuming you choose an ml.t3.xlarge (\$0.233/hour) instance. This can vary based on the size of instance you choose.
Training costs \$1.05: Assuming you will run about 10 training runs in a 24 hour period using the sample dataset provided. The notebook uses a p2.xlarge (\$1.26/hour) instance.
Model hosting \$6.72: Assuming you use the ml.m4.xlarge (\$0.28/hour) instance running for 24 hours.
NOTE: To save on costs, stop your notebook instances and delete the model endpoint when not in use.
(Optional) Clean-up
If you're ready to be done with this notebook, please run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on.
End of explanation
import glob
!rm -rf data/$PREFIX/tmp && mkdir -p data/$PREFIX/tmp
!aws s3 cp $MODEL_PATH data/$PREFIX/tmp
!tar -xzvf data/$PREFIX/tmp/model.tar.gz -C data/$PREFIX/tmp
params_file_name = glob.glob('./data/' + PREFIX + '/tmp/*.params')[0]
!mv $params_file_name data/$PREFIX/tmp/image-classification-0000.params
!tar -cvzf ./model.tar.gz -C data/$PREFIX/tmp ./image-classification-0000.params ./image-classification-symbol.json
!aws s3 cp model.tar.gz $MODEL_PATH
Explanation: Rename model to deploy to AWS DeepLens
The MXNet model that is stored in the S3 bucket contains 2 files: the params file and a symbol.json file. To simplify deployment to AWS DeepLens, we'll rename the params file so that you do not need to specify the number of epochs the model was trained for.
End of explanation |
13,910 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
```
Read data
with open("Atmosfera-Incidents-2017.pickle", 'rb') as f
Step1: W tej wersji eksperymentu, Y zawiera root_service - 44 unikalne kategorie główne.
Zamieńmy je na liczby z przedziału 0-43
Step2: slice in half even/odds to nulify time differencies
X_train=X[0 | Python Code:
# Input data
with open("X-sequences.pickle", 'rb') as f:
X = pickle.load(f)
with open("Y.pickle", 'rb') as f:
Y = pickle.load(f)
# Keep only the categories below; change the rest to -1
lista = [2183,
#325,
37, 859, 2655, 606, 412, 2729, 1683, 1305]
# Y=[y if y in lista else -1 for y in Y]
mask = [y in lista for y in Y]
import itertools
X = np.array(list(itertools.compress(X, mask)))
Y = np.array(list(itertools.compress(Y, mask)))
np.unique(Y)
Explanation: ```
Read data
with open("Atmosfera-Incidents-2017.pickle", 'rb') as f:
incidents = pickle.load(f)
Convert root_service to ints and save
Y=[int(i) for i in incidents[1:,3]]
with open("Y.pickle", 'wb') as f:
pickle.dump(Y, f, pickle.HIGHEST_PROTOCOL)
```
End of explanation
root_services=np.sort(np.unique(Y))
# construct a reverse index of the root categories
services_idx={root_services[i]: i for i in range(len(root_services))}
# Convert to class indices
Y=[services_idx[y] for y in Y]
Y=to_categorical(Y)
Y.shape
top_words = 5000
classes=Y[0,].shape[0]
print(classes)
# max_length (98th percentile is 476), pad the rest
max_length=500
X=sequence.pad_sequences(X, maxlen=max_length)
Explanation: In this version of the experiment, Y contains root_service - 44 unique root categories.
Let's convert them to numbers in the range 0-43
End of explanation
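When inspecting predictions later it is convenient to map a predicted class index back to the original root_service id; a small sketch reusing services_idx from the cell above:
```python
# reverse lookup: class index -> original root_service id
idx_to_service = {i: s for s, i in services_idx.items()}
print(idx_to_service[0])   # the root_service id encoded as class 0
```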
# create the model
embedding_vecor_length = 60
_input = Input(shape=(max_length,), name='input')
embedding=Embedding(top_words, embedding_vecor_length, input_length=max_length)(_input)
conv1 = Conv1D(filters=128, kernel_size=1, padding='same', activation='relu')
conv2 = Conv1D(filters=128, kernel_size=2, padding='same', activation='relu')
conv3 = Conv1D(filters=128, kernel_size=3, padding='same', activation='relu')
conv4 = Conv1D(filters=128, kernel_size=4, padding='same', activation='relu')
conv5 = Conv1D(filters=32, kernel_size=5, padding='same', activation='relu')
conv6 = Conv1D(filters=32, kernel_size=6, padding='same', activation='relu')
conv1 = conv1(embedding)
glob1 = GlobalAveragePooling1D()(conv1)
conv2 = conv2(embedding)
glob2 = GlobalAveragePooling1D()(conv2)
conv3 = conv3(embedding)
glob3 = GlobalAveragePooling1D()(conv3)
conv4 = conv4(embedding)
glob4 = GlobalAveragePooling1D()(conv4)
conv5 = conv5(embedding)
glob5 = GlobalAveragePooling1D()(conv5)
conv6 = conv6(embedding)
glob6 = GlobalAveragePooling1D()(conv6)
merge = concatenate([glob1, glob2, glob3, glob4, glob5, glob6])
x = Dropout(0.2)(merge)
x = BatchNormalization()(x)
x = Dense(300, activation='relu')(x)
x = Dropout(0.2)(x)
x = BatchNormalization()(x)
pred = Dense(classes, activation='softmax')(x)
model = Model(inputs=[_input], outputs=pred)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])#, decay=0.0000001)
print(model.summary())
# Callbacks
early_stop_cb = EarlyStopping(monitor='val_loss', patience=20, verbose=1)
checkpoit_cb = ModelCheckpoint(NAME+".h5", save_best_only=True)
# Print the batch number at the beginning of every batch.
batch_print_cb = LambdaCallback(on_batch_begin=lambda batch, logs: print (".",end=''),
on_epoch_end=lambda batch, logs: print (batch))
# Plot the loss after every epoch.
plot_loss_cb = LambdaCallback(on_epoch_end=lambda epoch, logs:
print (epoch, logs))
#plt.plot(np.arange(epoch), logs['loss']))
print("done")
history = model.fit(
X,#_train,
Y,#_train,
# initial_epoch=1200,
epochs=1500,
batch_size=2048,
#validation_data=(X_valid,Y_valid),
validation_split=0.25,
callbacks=[early_stop_cb, checkpoit_cb, batch_print_cb, plot_loss_cb],
verbose=0
)
#history=model.fit(X_train, Y_train, validation_data=(X_test, Y_test), nb_epoch=3, batch_size=512)
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper right')
# plt.title('model loss (log scale)')
# plt.yscale('log')
plt.show()
history2 = model.fit(
X,#_train,
Y,#_train,
initial_epoch=10000,
epochs=10010,
batch_size=1024,
#validation_data=(X_valid,Y_valid),
validation_split=0.1,
callbacks=[early_stop_cb, checkpoit_cb, batch_print_cb, plot_loss_cb],
verbose=0
)
score=model.evaluate(X_test,Y_test, verbose=0)
print("OOS %s: %.2f%%" % (model.metrics_names[1], score[1]*100))
print("OOS %s: %.2f" % (model.metrics_names[0], score[0]))
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history2.history['acc'])
plt.plot(history2.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history2.history['loss'])
plt.plot(history2.history['val_loss'])
plt.title('model loss (log scale)')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper right')
plt.yscale('log')
plt.show()
history3 = model.fit(
X,#_train,
Y,#_train,
initial_epoch=60,
epochs=90,
batch_size=1024,
#validation_data=(X_valid,Y_valid),
validation_split=0.3,
callbacks=[early_stop_cb, checkpoit_cb, batch_print_cb, plot_loss_cb],
verbose=0
)
import matplotlib.pyplot as plt
# summarize history for accuracy
plt.plot(history3.history['acc'])
plt.plot(history3.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history3.history['loss'])
plt.plot(history3.history['val_loss'])
plt.title('model loss (log scale)')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper right')
plt.yscale('log')
plt.show()
Explanation: slice in half even/odds to nullify time differences
X_train=X[0:][::2] # even
X_test=X[1:][::2] # odds
Y_train=np.array(Y[0:][::2]) # even
Y_test=np.array(Y[1:][::2]) # odds
if split_valid_test:
# Split "test" in half for validation and final testing
X_valid=X_test[:len(X_test)//2]
Y_valid=Y_test[:len(Y_test)//2]
X_test=X_test[len(X_test)//2:]
Y_test=Y_test[len(Y_test)//2:]
else:
X_valid=X_test
Y_valid=Y_test
End of explanation |
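As an alternative to the even/odd slicing sketched above, a random split would also work; a minimal sketch using scikit-learn (not used elsewhere in this notebook):
```python
from sklearn.model_selection import train_test_split

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3, random_state=42)
```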
13,911 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fitting Models Exercise 2
Imports
Step1: Fitting a decaying oscillation
For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays
Step2: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Fitting Models Exercise 2
Imports
End of explanation
# YOUR CODE HERE
data = np.load("decay_osc.npz")
t = data["tdata"]
y = data["ydata"]
dy = data["dy"]
plt.errorbar(t, y, dy, fmt=".b")
assert True # leave this to grade the data import and raw data plot
Explanation: Fitting a decaying oscillation
For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays:
tdata: an array of time values
ydata: an array of y values
dy: the absolute uncertainties (standard deviations) in y
Your job is to fit the following model to this data:
$$ y(t) = A e^{-\lambda t} \cos{\omega t + \delta} $$
First, import the data using NumPy and make an appropriately styled error bar plot of the raw data.
End of explanation
# YOUR CODE HERE
def model(t, A, lambd, omega, sigma):
return A*np.exp(-lambd * t) * np.cos(omega*t) + sigma
theta_best, theta_cov = opt.curve_fit(model, t, y, sigma=dy, absolute_sigma=True)
# report 1-sigma uncertainties as the square roots of the covariance diagonal
print("A = ", theta_best[0], " +- ", np.sqrt(theta_cov[0,0]))
print("lambda = ", theta_best[1], " +- ", np.sqrt(theta_cov[1,1]))
print("omega = ", theta_best[2], " +- ", np.sqrt(theta_cov[2,2]))
print("sigma = ", theta_best[3], " +- ", np.sqrt(theta_cov[3,3]))
fitline = model(t, theta_best[0], theta_best[1], theta_best[2], theta_best[3])
plt.errorbar(t, y, dy, fmt=".b")
plt.plot(t, fitline, color="r")
plt.xlabel("t")
plt.ylabel("y")
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
Explanation: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters:
Print the parameter estimates and uncertainties.
Plot the raw and best fit model.
You will likely have to pass an initial guess to curve_fit to get a good fit.
Treat the uncertainties in $y$ as absolute errors by passing absolute_sigma=True.
End of explanation |
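If the optimizer struggles to converge, an explicit starting point can be supplied through p0; a sketch with purely illustrative guesses for the parameters:
```python
p0 = [5.0, 0.1, 10.0, 0.0]   # rough guesses for A, lambda, omega, and the offset
theta_best, theta_cov = opt.curve_fit(model, t, y, sigma=dy,
                                      absolute_sigma=True, p0=p0)
```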
13,912 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CBOE VXXLE Index
In this notebook, we'll take a look at the CBOE VXXLE Index dataset, available on the Quantopian Store. This dataset spans 16 Mar 2011 through the current day. This data has a daily frequency. VXXLE is the CBOE Energy Sector ETF Volatility Index, reflecting the implied volatility of the XLE ETF
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>
Step1: Let's go over the columns
Step2: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows
Step3: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
Step4: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread
Step5: Here, you'll notice that each security is mapped to the corresponding value, so you could grab any security to get what you need.
Taking what we've seen from above, let's see how we'd move that into the backtester. | Python Code:
# For use in Quantopian Research, exploring interactively
from quantopian.interactive.data.quandl import cboe_vxxle as dataset
# import data operations
from odo import odo
# import other libraries we will use
import pandas as pd
# Let's use blaze to understand the data a bit using Blaze dshape()
dataset.dshape
# And how many rows are there?
# N.B. we're using a Blaze function to do this, not len()
dataset.count()
# Let's see what the data looks like. We'll grab the first three rows.
dataset[:3]
Explanation: CBOE VXXLE Index
In this notebook, we'll take a look at the CBOE VXXLE Index dataset, available on the Quantopian Store. This dataset spans 16 Mar 2011 through the current day. This data has a daily frequency. VXXLE is the CBOE Energy Sector ETF Volatility Index, reflecting the implied volatility of the XLE ETF
Notebook Contents
There are two ways to access the data and you'll find both of them listed below. Just click on the section you'd like to read through.
<a href='#interactive'><strong>Interactive overview</strong></a>: This is only available on Research and uses blaze to give you access to large amounts of data. Recommended for exploration and plotting.
<a href='#pipeline'><strong>Pipeline overview</strong></a>: Data is made available through pipeline which is available on both the Research & Backtesting environment. Recommended for custom factor development and moving back & forth between research/backtesting.
Limits
One key caveat: we limit the number of results returned from any given expression to 10,000 to protect against runaway memory usage. To be clear, you have access to all the data server side. We are limiting the size of the responses back from Blaze.
With preamble in place, let's get started:
<a id='interactive'></a>
Interactive Overview
Accessing the data with Blaze and Interactive on Research
Partner datasets are available on Quantopian Research through an API service known as Blaze. Blaze provides the Quantopian user with a convenient interface to access very large datasets, in an interactive, generic manner.
Blaze provides an important function for accessing these datasets. Some of these sets are many millions of records. Bringing that data directly into Quantopian Research directly just is not viable. So Blaze allows us to provide a simple querying interface and shift the burden over to the server side.
It is common to use Blaze to reduce your dataset in size, convert it over to Pandas and then to use Pandas for further computation, manipulation and visualization.
Helpful links:
* Query building for Blaze
* Pandas-to-Blaze dictionary
* SQL-to-Blaze dictionary.
Once you've limited the size of your Blaze object, you can convert it to a Pandas DataFrame using:
from odo import odo
odo(expr, pandas.DataFrame)
To see how this data can be used in your algorithm, search for the Pipeline Overview section of this notebook or head straight to <a href='#pipeline'>Pipeline Overview</a>
End of explanation
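For example, one way to keep the response small is to trim the Blaze expression before converting it; a sketch that keeps only two columns and the first 1000 rows:
```python
small = dataset[['asof_date', 'close']].head(1000)
small_df = odo(small, pd.DataFrame)
small_df.tail()
```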
# Plotting this DataFrame
df = odo(dataset, pd.DataFrame)
df.head(5)
# So we can plot it, we'll set the index as the `asof_date`
df['asof_date'] = pd.to_datetime(df['asof_date'])
df = df.set_index(['asof_date'])
df.head(5)
import matplotlib.pyplot as plt
df['open_'].plot(label=str(dataset))
plt.ylabel(str(dataset))
plt.legend()
plt.title("Graphing %s since %s" % (str(dataset), min(df.index)))
Explanation: Let's go over the columns:
- open: open price for vxxle
- high: daily high for vxxle
- low: daily low for vxxle
- close: close price for vxxle
- asof_date: the timeframe to which this data applies
- timestamp: this is our timestamp on when we registered the data.
We've done much of the data processing for you. Fields like timestamp are standardized across all our Store Datasets, so the datasets are easy to combine.
We can select columns and rows with ease. Below, we'll do a simple plot.
End of explanation
# Import necessary Pipeline modules
from quantopian.pipeline import Pipeline
from quantopian.research import run_pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# Import the datasets available
from quantopian.pipeline.data.quandl import cboe_vxxle
Explanation: <a id='pipeline'></a>
Pipeline Overview
Accessing the data in your algorithms & research
The only method for accessing partner data within algorithms running on Quantopian is via the pipeline API. Different data sets work differently but in the case of this data, you can add this data to your pipeline as follows:
Import the data set here
from quantopian.pipeline.data.quandl import cboe_vxxle
Then in initialize() you could do something simple like adding the raw value of one of the fields to your pipeline:
pipe.add(cboe_vxxle.open_.latest, 'open')
Pipeline usage is very similar between the backtester and Research so let's go over how to import this data through pipeline and view its outputs.
End of explanation
print "Here are the list of available fields per dataset:"
print "---------------------------------------------------\n"
def _print_fields(dataset):
print "Dataset: %s\n" % dataset.__name__
print "Fields:"
for field in list(dataset.columns):
print "%s - %s" % (field.name, field.dtype)
print "\n"
_print_fields(cboe_vxxle)
print "---------------------------------------------------\n"
Explanation: Now that we've imported the data, let's take a look at which fields are available for each dataset.
You'll find the dataset, the available fields, and the datatypes for each of those fields.
End of explanation
pipe = Pipeline()
pipe.add(cboe_vxxle.open_.latest, 'open_vxxle')
# Setting some basic liquidity strings (just for good habit)
dollar_volume = AverageDollarVolume(window_length=20)
top_1000_most_liquid = dollar_volume.rank(ascending=False) < 1000
pipe.set_screen(top_1000_most_liquid & cboe_vxxle.open_.latest.notnan())
# The show_graph() method of pipeline objects produces a graph to show how it is being calculated.
pipe.show_graph(format='png')
# run_pipeline will show the output of your pipeline
pipe_output = run_pipeline(pipe, start_date='2013-11-01', end_date='2013-11-25')
pipe_output
Explanation: Now that we know what fields we have access to, let's see what this data looks like when we run it through Pipeline.
This is constructed the same way as you would in the backtester. For more information on using Pipeline in Research view this thread:
https://www.quantopian.com/posts/pipeline-in-research-build-test-and-visualize-your-factors-and-filters
End of explanation
# This section is only importable in the backtester
from quantopian.algorithm import attach_pipeline, pipeline_output
# General pipeline imports
from quantopian.pipeline import Pipeline
from quantopian.pipeline.factors import AverageDollarVolume
# For use in your algorithms via the pipeline API
from quantopian.pipeline.data.quandl import cboe_vxxle
def make_pipeline():
# Create our pipeline
pipe = Pipeline()
# Screen out penny stocks and low liquidity securities.
dollar_volume = AverageDollarVolume(window_length=20)
is_liquid = dollar_volume.rank(ascending=False) < 1000
# Create the mask that we will use for our percentile methods.
base_universe = (is_liquid)
# Add the datasets available
pipe.add(cboe_vxxle.open_.latest, 'vxxle_open')
# Set our pipeline screens
pipe.set_screen(is_liquid)
return pipe
def initialize(context):
attach_pipeline(make_pipeline(), "pipeline")
def before_trading_start(context, data):
results = pipeline_output('pipeline')
Explanation: Here, you'll notice that each security is mapped to the corresponding value, so you could grab any security to get what you need.
Taking what we've seen from above, let's see how we'd move that into the backtester.
End of explanation |
13,913 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Mathematical functions
Step1: Trigonometric functions
Q1. Calculate sine, cosine, and tangent of x, element-wise.
Step2: Q2. Calculate inverse sine, inverse cosine, and inverse tangent of x, element-wise.
Step3: Q3. Convert angles from radians to degrees.
Step4: Q4. Convert angles from degrees to radians.
Step5: Hyperbolic functions
Q5. Calculate hyperbolic sine, hyperbolic cosine, and hyperbolic tangent of x, element-wise.
Step6: Rounding
Q6. Predict the results of these, paying attention to the difference among the family functions.
Step7: Q7. Implement out5 in the above question using numpy.
Step8: Sums, products, differences
Q8. Predict the results of these.
Step9: Q9. Calculate the difference between neighboring elements, element-wise.
Step10: Q10. Calculate the difference between neighboring elements, element-wise, and
prepend [0, 0] and append[100] to it.
Step11: Q11. Return the cross product of x and y.
Step12: Exponents and logarithms
Q12. Compute $e^x$, element-wise.
Step13: Q13. Calculate exp(x) - 1 for all elements in x.
Step14: Q14. Calculate $2^p$ for all p in x.
Step15: Q15. Compute natural, base 10, and base 2 logarithms of x element-wise.
Step16: Q16. Compute the natural logarithm of one plus each element in x in floating-point accuracy.
Step17: Floating point routines
Q17. Return element-wise True where signbit is set.
Step18: Q18. Change the sign of x to that of y, element-wise.
Step19: Arithmetic operations
Q19. Add x and y element-wise.
Step20: Q20. Subtract y from x element-wise.
Step21: Q21. Multiply x by y element-wise.
Step22: Q22. Divide x by y element-wise in two different ways.
Step23: Q23. Compute numerical negative value of x, element-wise.
Step24: Q24. Compute the reciprocal of x, element-wise.
Step25: Q25. Compute $x^y$, element-wise.
Step26: Q26. Compute the remainder of x / y element-wise in two different ways.
Step27: Miscellaneous
Q27. If an element of x is smaller than 3, replace it with 3.
And if an element of x is bigger than 7, replace it with 7.
Step28: Q28. Compute the square of x, element-wise.
Step29: Q29. Compute square root of x element-wise.
Step30: Q30. Compute the absolute value of x.
Step31: Q31. Compute an element-wise indication of the sign of x, element-wise. | Python Code:
import numpy as np
np.__version__
__author__ = "kyubyong. [email protected]. https://github.com/kyubyong"
Explanation: Mathematical functions
End of explanation
x = np.array([0., 1., 30, 90])
print "sine:", np.sin(x)
print "cosine:", np.cos(x)
print "tangent:", np.tan(x)
Explanation: Trigonometric functions
Q1. Calculate sine, cosine, and tangent of x, element-wise.
End of explanation
x = np.array([-1., 0, 1.])
print "inverse sine:", np.arcsin(x)
print "inverse cosine:", np.arccos(x)
print "inverse tangent:", np.arctan(x)
Explanation: Q2. Calculate inverse sine, inverse cosine, and inverse tangent of x, element-wise.
End of explanation
x = np.array([-np.pi, -np.pi/2, np.pi/2, np.pi])
out1 = np.degrees(x)
out2 = np.rad2deg(x)
assert np.array_equiv(out1, out2)
print out1
Explanation: Q3. Convert angles from radians to degrees.
End of explanation
x = np.array([-180., -90., 90., 180.])
out1 = np.radians(x)
out2 = np.deg2rad(x)
assert np.array_equiv(out1, out2)
print out1
Explanation: Q4. Convert angles from degrees to radians.
End of explanation
x = np.array([-1., 0, 1.])
print np.sinh(x)
print np.cosh(x)
print np.tanh(x)
Explanation: Hyperbolic functions
Q5. Calculate hyperbolic sine, hyperbolic cosine, and hyperbolic tangent of x, element-wise.
End of explanation
x = np.array([2.1, 1.5, 2.5, 2.9, -2.1, -2.5, -2.9])
out1 = np.around(x)
out2 = np.floor(x)
out3 = np.ceil(x)
out4 = np.trunc(x)
out5 = [round(elem) for elem in x]
print out1
print out2
print out3
print out4
print out5
Explanation: Rounding
Q6. Predict the results of these, paying attention to the difference among the family functions.
End of explanation
print np.floor(np.abs(x) + 0.5) * np.sign(x)
# Read http://numpy-discussion.10968.n7.nabble.com/why-numpy-round-get-a-different-result-from-python-round-function-td19098.html
Explanation: Q7. Implement out5 in the above question using numpy.
End of explanation
x = np.array(
[[1, 2, 3, 4],
[5, 6, 7, 8]])
outs = [np.sum(x),
np.sum(x, axis=0),
np.sum(x, axis=1, keepdims=True),
"",
np.prod(x),
np.prod(x, axis=0),
np.prod(x, axis=1, keepdims=True),
"",
np.cumsum(x),
np.cumsum(x, axis=0),
np.cumsum(x, axis=1),
"",
np.cumprod(x),
np.cumprod(x, axis=0),
np.cumprod(x, axis=1),
"",
np.min(x),
np.min(x, axis=0),
np.min(x, axis=1, keepdims=True),
"",
np.max(x),
np.max(x, axis=0),
np.max(x, axis=1, keepdims=True),
"",
np.mean(x),
np.mean(x, axis=0),
np.mean(x, axis=1, keepdims=True)]
for out in outs:
if out == "":
print
else:
print("->", out)
Explanation: Sums, products, differences
Q8. Predict the results of these.
End of explanation
x = np.array([1, 2, 4, 7, 0])
print np.diff(x)
Explanation: Q9. Calculate the difference between neighboring elements, element-wise.
End of explanation
x = np.array([1, 2, 4, 7, 0])
out1 = np.ediff1d(x, to_begin=[0, 0], to_end=[100])
out2 = np.insert(np.append(np.diff(x), 100), 0, [0, 0])
assert np.array_equiv(out1, out2)
print out2
Explanation: Q10. Calculate the difference between neighboring elements, element-wise, and
prepend [0, 0] and append[100] to it.
End of explanation
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
print np.cross(x, y)
Explanation: Q11. Return the cross product of x and y.
End of explanation
x = np.array([1., 2., 3.], np.float32)
out = np.exp(x)
print out
Explanation: Exponents and logarithms
Q12. Compute $e^x$, element-wise.
End of explanation
x = np.array([1., 2., 3.], np.float32)
out1 = np.expm1(x)
out2 = np.exp(x) - 1.
assert np.allclose(out1, out2)
print out1
Explanation: Q13. Calculate exp(x) - 1 for all elements in x.
End of explanation
x = np.array([1., 2., 3.], np.float32)
out1 = np.exp2(x)
out2 = 2 ** x
assert np.allclose(out1, out2)
print out1
Explanation: Q14. Calculate $2^p$ for all p in x.
End of explanation
x = np.array([1, np.e, np.e**2])
print "natural log =", np.log(x)
print "common log =", np.log10(x)
print "base 2 log =", np.log2(x)
Explanation: Q15. Compute natural, base 10, and base 2 logarithms of x element-wise.
End of explanation
x = np.array([1e-99, 1e-100])
print np.log1p(x)
# Compare it with np.log(1 +x)
Explanation: Q16. Compute the natural logarithm of one plus each element in x in floating-point accuracy.
End of explanation
x = np.array([-3, -2, -1, 0, 1, 2, 3])
out1 = np.signbit(x)
out2 = x < 0
assert np.array_equiv(out1, out2)
print out1
Explanation: Floating point routines
Q17. Return element-wise True where signbit is set.
End of explanation
x = np.array([-1, 0, 1])
y = -1.1
print np.copysign(x, y)
Explanation: Q18. Change the sign of x to that of y, element-wise.
End of explanation
x = np.array([1, 2, 3])
y = np.array([-1, -2, -3])
out1 = np.add(x, y)
out2 = x + y
assert np.array_equal(out1, out2)
print out1
Explanation: Arithmetic operations
Q19. Add x and y element-wise.
End of explanation
x = np.array([3, 4, 5])
y = np.array(3)
out1 = np.subtract(x, y)
out2 = x - y
assert np.array_equal(out1, out2)
print out1
Explanation: Q20. Subtract y from x element-wise.
End of explanation
x = np.array([3, 4, 5])
y = np.array([1, 0, -1])
out1 = np.multiply(x, y)
out2 = x * y
assert np.array_equal(out1, out2)
print out1
Explanation: Q21. Multiply x by y element-wise.
End of explanation
x = np.array([3., 4., 5.])
y = np.array([1., 2., 3.])
out1 = np.true_divide(x, y)
out2 = x / y
assert np.array_equal(out1, out2)
print out1
out3 = np.floor_divide(x, y)
out4 = x // y
assert np.array_equal(out3, out4)
print out3
# Note that in Python 2 and 3, the handling of `divide` differs.
# See https://docs.scipy.org/doc/numpy/reference/generated/numpy.divide.html#numpy.divide
Explanation: Q22. Divide x by y element-wise in two different ways.
End of explanation
x = np.array([1, -1])
out1 = np.negative(x)
out2 = -x
assert np.array_equal(out1, out2)
print out1
Explanation: Q23. Compute numerical negative value of x, element-wise.
End of explanation
x = np.array([1., 2., .2])
out1 = np.reciprocal(x)
out2 = 1/x
assert np.array_equal(out1, out2)
print out1
Explanation: Q24. Compute the reciprocal of x, element-wise.
End of explanation
x = np.array([[1, 2], [3, 4]])
y = np.array([[1, 2], [1, 2]])
out = np.power(x, y)
print out
Explanation: Q25. Compute $x^y$, element-wise.
End of explanation
x = np.array([-3, -2, -1, 1, 2, 3])
y = 2
out1 = np.mod(x, y)
out2 = x % y
assert np.array_equal(out1, out2)
print out1
out3 = np.fmod(x, y)
print out3
Explanation: Q26. Compute the remainder of x / y element-wise in two different ways.
End of explanation
x = np.arange(10)
out1 = np.clip(x, 3, 7)
out2 = np.copy(x)
out2[out2 < 3] = 3
out2[out2 > 7] = 7
assert np.array_equiv(out1, out2)
print out1
Explanation: Miscellaneous
Q27. If an element of x is smaller than 3, replace it with 3.
And if an element of x is bigger than 7, replace it with 7.
End of explanation
x = np.array([1, 2, -1])
out1 = np.square(x)
out2 = x * x
assert np.array_equal(out1, out2)
print out1
Explanation: Q28. Compute the square of x, element-wise.
End of explanation
x = np.array([1., 4., 9.])
out = np.sqrt(x)
print out
Explanation: Q29. Compute square root of x element-wise.
End of explanation
x = np.array([[1, -1], [3, -3]])
out = np.abs(x)
print out
Explanation: Q30. Compute the absolute value of x.
End of explanation
x = np.array([1, 3, 0, -1, -3])
out1 = np.sign(x)
out2 = np.copy(x)
out2[out2 > 0] = 1
out2[out2 < 0] = -1
assert np.array_equal(out1, out2)
print out1
Explanation: Q31. Compute an element-wise indication of the sign of x, element-wise.
End of explanation |
13,914 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q2
More on writing functions!
A
Write a function, flexible_mean, which computes the average of any number of numbers.
takes a variable number of floating-point arguments
returns 1 number
Step1: B
Write a function, make_dict, which creates a dictionary from a variable number of key / value arguments.
takes a variable number of key-value arguments
returns 1 dictionary of all the key-values given to the function
For example, make_dict(one = "two", three = "four") should return {"one"
Step2: C
Write a function find_all which locates all the indices of a particular element to search.
takes 2 arguments
Step3: D
Using your answer from part C, write a function called element_counts which provides counts of very specific elements in a list.
takes 2 arguments | Python Code:
import numpy as np
np.testing.assert_allclose(1.5, flexible_mean(1.0, 2.0))
np.testing.assert_allclose(0.0, flexible_mean(-100, 100))
np.testing.assert_allclose(1303.359375, flexible_mean(1, 5452, 43, 34, 40.23, 605.2, 4239.2, 12.245))
Explanation: Q2
More on writing functions!
A
Write a function, flexible_mean, which computes the average of any number of numbers.
takes a variable number of floating-point arguments
returns 1 number: the average of all the arguments
For example, flexible_mean(1.0, 2.0) should return 1.5.
You cannot use any built-in functions.
End of explanation
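One possible sketch of flexible_mean (many valid answers exist; the running total and count are tracked by hand rather than with sum() or len()):
```python
def flexible_mean(*args):
    total = 0.0
    count = 0
    for value in args:
        total += value
        count += 1
    return total / count
```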
assert make_dict(one = "two", three = "four") == {"one": "two", "three": "four"}
assert make_dict() == {}
Explanation: B
Write a function, make_dict, which creates a dictionary from a variable number of key / value arguments.
takes a variable number of key-value arguments
returns 1 dictionary of all the key-values given to the function
For example, make_dict(one = "two", three = "four") should return {"one": "two", "three": "four"}.
You cannot use any built-in functions.
End of explanation
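One possible sketch of make_dict, collecting the keyword arguments with **kwargs and rebuilding them in a dict comprehension:
```python
def make_dict(**kwargs):
    return {key: value for key, value in kwargs.items()}
```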
l1 = [1, 2, 3, 4, 5, 2]
s1 = 2
a1 = [1, 5]
assert set(a1) == set(find_all(l1, s1))
l2 = ["a", "random", "set", "of", "strings", "for", "an", "interesting", "strings", "problem"]
s2 = "strings"
a2 = [4, 8]
assert set(a2) == set(find_all(l2, s2))
l3 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
s3 = 11
a3 = []
assert set(a3) == set(find_all(l3, s3))
Explanation: C
Write a function find_all which locates all the indices of a particular element to search.
takes 2 arguments: a list of items, and a single element to search for in the list
returns 1 list: a list of indices into the input list that correspond to elements in the input list that match what we were looking for
For example, find_all([1, 2, 3, 4, 5, 2], 2) would return [1, 5].
You cannot use any built-in functions.
End of explanation
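One possible sketch of find_all, keeping a manual index counter instead of using enumerate:
```python
def find_all(items, target):
    indices = []
    index = 0
    for item in items:
        if item == target:
            indices.append(index)
        index += 1
    return indices
```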
l1 = [1, 2, 3, 4, 5, 2]
s1 = [2, 5]
a1 = {2: 2, 5: 1}
assert a1 == element_counts(l1, s1)
l2 = ["a", "random", "set", "of", "strings", "for", "an", "interesting", "strings", "problem"]
s2 = ["strings", "of", "notinthelist"]
a2 = {"strings": 2, "of": 1, "notinthelist": 0}
assert a2 == element_counts(l2, s2)
Explanation: D
Using your answer from part C, write a function called element_counts which provides counts of very specific elements in a list.
takes 2 arguments: a list of your data, and a list of elements you want counted in your data
returns a dictionary: keys are the elements you wanted counted, and values are their counts in the data
For example, element_counts([1, 2, 3, 4, 5, 2], [2, 5]) would return {2: 2, 5: 1}, as there were two 2s in the data list, and one 5.
You cannot use any built-in functions.
End of explanation |
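One possible sketch of element_counts, built on find_all from part C and counting matches with a manual loop:
```python
def element_counts(data, elements):
    counts = {}
    for element in elements:
        n = 0
        for _ in find_all(data, element):
            n += 1
        counts[element] = n
    return counts
```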
13,915 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Periodic Signals & The Lomb-Scargle Periodogram
Version 0.1
By AA Miller (CIERA/Northwestern & Adler)
This notebook discusses the detection of periodic signals in noisy, irregular data (the standard for ground-based astronomical surveys). The discussion below is strongly influenced by Understanding the Lomb-Scarge Periodogram, by Jake VanderPlas, a former DSFP lecturer (VanderPlas 2017). Beyond that, the original papers by Lomb 1976 and Scargle 1982 are also worth a read.
There are many, many papers on the use of the Lomb-Scargle periodogram (and other period detection methods). Former DSFP lecturer, Matthew Graham and colleagues conducted a systematic analysis of many of the most popular tools used to search for periodic signals on actual astronomical data (Graham et al. 2013). [Somewhat to my (our?) dismay, they found that none of the solutions work really well across all use cases.]
Problem 1) Helper Functions
Throughout this lecture, there are a number of operations that we will be performing again and again. Thus, we'll create a few helper functions to minimize repetitive commands (e.g., plotting a phase folded light curve).
Problem 1a
Create a function, gen_periodic_data, that creates simulated data (including noise) over a grid of user supplied positions
Step1: Problem 1b
Generate a noise-free signal with $A = 2$ and $p = \pi$ over a regular grid between 0 and 10. Plot the results (and make sure gen_periodic_data behaves as you would expect).
Step2: Problem 1c
Create a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).
Include an optional argument, y_unc, to include uncertainties on the y values, when available.
Step3: Problem 1d
Plot the phase folded data generated in 1b.
Does you plot match your expectations?
Step4: An important, but necessary, aside ––
the slightly unusual scheduling for this week means that the lectures will not unfold in the most natural order. An in depth examination of Fourier methods will occur on Wednesday and Thursday, and ideally this would happen before any discussion of the Lomb-Scargle periodogram.
However, Cohort 2 has already seen Lomb-Scargle and Gaussian Processes, so we will start with that and then cover Fourier methods. I'll proceed with a brief review of Fourier analysis, if you haven't seen this before it may be worth revisiting this notebook later in the week.
(Either way - the main takeaways from Lomb-Scargle will be clear by the end of this notebook)
Problem 2) A Brief Review of Fourier Analysis
In astronomical time series, we crave the detection of periodic signals because they can often provide fundamental insight into the sources we are studying (e.g., masses in a binary, classifications for pulsating stars, etc).
The standard$^\dagger$ choice for most astronomers to identify such signals is the Lomb-Scargle (LS) periodogram (Lomb 1976; Scargle 1982).
$^\dagger$Standard does not mean best, fastest, or even correct depending on your specific application.
At the heart of understanding any periodic signals is Fourier analysis. Thus, to understand how to interpret the LS periodogram, we first need to consider Fourier transforms.
Note - here we aim to jog your memory regarding Fourier analysis. Detailed mathematical calculations can be found elsewhere.
Given a continuous signal, $g(t)$ the Fourier transform of that signal is defined as
Step5: The common Fourier pairs are especially useful in light of the convolution theorem. Fourier transforms convert convolutions into point-wise products. We define a convolution as
Step6: Sampling a signal directly at the Nyquist frequency results in a lack of any variability. But does this just mean that $f_\mathrm{Ny}$ is special? What happens at $f > f_\mathrm{Ny}$?
Problem 2b
As above, generate and plot a periodic signal with $f = 0.7$ on an even grid from 0 to 10. Overplot the underlying signal in addition to the observations.
Step7: From the plot the signal is clearly variable (unlike when $f = f_\mathrm{Ny}$). However, there are fewer than 2 observations per cycle.
Problem 2c
Overplot a source with $f = 2.7$ on the same data shown in 2b.
Step8: The observations are identical! Here is what you need to remember about the Nyquist frequency
Step9: Problem 3b
Write a function to minimize the $\chi^2$ given everything but $A_f$ and $\phi_f$.
Hint - you may find minimize within the scipy package useful.
Step10: Problem 3c
Write a function, ls_periodogram, to calculate the LS periodogram for observations $y$, $\sigma_y$, $t$ over a frequency grid f_grid.
Step11: Problem 3d
Generate a periodic signal with 100 observations taken over a time period of 10 days. Use an input period of 5.25, amplitude of 7.4, and variance of the noise = 0.8. Then compute and plot the periodogram for the simulated data. Do you recover the simulated period?
Hint - set the minimum frequency in the grid to $1/T$ where $T$ is the duration of the observations. Set the maximum frequnecy to 10, and use an equally spaced grid with 50 points.
Step12: Problem 3e
For the same data, include 1000 points in f_grid and calculate and plot the periodogram.
Now do you recover the correct period?
Step13: Problem 3f
Plot the phase-folded data at the newly found "best" fit period.
Step14: Congratulations
You did it! You just developed the software necessary to find periodic signals in sparsely sampled, noisy data.
You are ready to conquer LSST.
But wait!
There should be a few things that are bothering you.
First and foremost, why did we use a grid with 50 points and then increase that to 1000 points for the previous simulation?
There are many important ramifications following the choice of an evaluation grid for the LS periodogram. When selecting the grid upon which to evaluate $f$ one must determine both the limits for the grid and the spacing within the grid.
The minimum frequency is straightforward
Step15: Problem 4) Other Considerations and Faster Implementations
While ls_periodogram functions well, it would take a long time to evaluate $\sim4\times 10^5$ frequencies for $\sim2\times 10^7$ variable LSST sources. Fortunately, there are significantly faster implementations, including (as you may have guessed) one in astropy.
Problem 4a
LombScargle in astropy.stats is fast. Run it below to compare to ls_periodogram.
Step16: Unlike ls_periodogram, LombScargle effectively takes no time to run on the simulated data.
Problem 4b
Plot the periodogram for the simulated data.
Step17: There are many choices regarding the calculation of the periodogram, so read the docs.
Floating Mean Periodogram
A basic assumption that we preivously made is that the data are "centered" - in other words, our model explicitly assumes that the signal oscillates about a mean of 0.
For astronomical applications, this assumption can be harmful. Instead, it is useful to fit for the mean of the signal in addition to the periodic component (as is the default in LombScargle)
Step18: We can see that the best fit model doesn't match the signal in the case where we do not allow a floating mean.
Step19: Window Functions
Recall that the convolution theorem tells us that
Step20: Problem 4e
Calculate and plot the periodogram for the window function (i.e., set y = 1 in LombScargle) of the observations. Do you notice any significant power?
Hint - you may need to zoom in on the plot to see all the relevant features.
Step21: Interestingly, there are very strong peaks in the data at $P \approx 3\,\mathrm{d} \;\&\; 365\,\mathrm{d}$.
What is this telling us? Essentially that observations are likely to be repeated at intervals of 3 or 365 days (shorter period spikes are aliases of the 3 d peak).
This is important to understand, however, because this same power will be present in the periodogram where we search for the periodic signal.
Problem 4f
Calculate the periodogram for the data and compare it to the periodogram for the window function.
Step22: Uncertainty on the best-fit period
How do we report uncertainties on the best-fit period from LS? For example, for the previously simulated LSST light curve we would want to report something like $P = 102 \pm 4\,\mathrm{d}$. However, the uncertainty from LS periodograms cannot be determined in this way.
Naively, one could report the width of the peak in the periodogram as the uncertainty in the fit. However, we previously saw that the peak width $\propto 1/T$ (the peak width does not decrease as the number of observations or their S/N increases; see Vander Plas 2017). Reporting such an uncertainty is particularly ridiculous for long duration surveys, whereby the peaks become very very narrow.
An alternative approach is to report the False Alarm Probability (FAP), which estimates the probability that a dataset with no periodic signal could produce a peak of similar magnitude, due to random gaussian fluctuations, as the data.
There are a few different methods to calculate the FAP. Perhaps the most useful, however, is the bootstrap method. To obtain a bootstrap estimate of the LS FAP one leaves the observation times fixed, and then draws new observation values with replacement from the actual set of observations. This procedure is then repeated many times to determine the FAP.
One nice advantage of this procedure is that any effects due to the window function will be imprinted in each iteration of the bootstrap resampling.
The major disadvantage is that many many periodograms must be calculated. The rule of thumb is that to acieve a FAP $= p_\mathrm{false}$, one must run $n_\mathrm{boot} \approx 10/p_\mathrm{false}$ bootstrap periodogram calculations. Thus, an FAP $\approx 0.1\%$ requires an increase of 1000 in computational time.
LombScargle provides the false_alarm_probability method, including a bootstrap option. We skip that for now in the interest of time.
As a final note of caution - be weary of over-interpreting the FAP. The specific question answered by the FAP is, what is the probability that gaussian fluctations could produce a signal of equivalent magnitude? Whereas, the question we generally want to answer is
Step23: Problem 5b
Use LombScargle to measure the periodogram. Then plot the periodogram and the phase folded light curve at the best-fit period.
Hint - search periods longer than 2 hr.
Step24: Problem 5c
Now plot the light curve folded at twice the best LS period.
Which of these 2 is better?
Step25: Herein lies a fundamental issue regarding the LS periodogram
Step26: One way to combat this very specific issue is to include more Fourier terms at the harmonic of the best fit period. This is easy to implement in LombScargle with the nterms keyword. [Though always be weary of adding degrees of freedom to a model, especially at the large pipeline level of analysis.]
Problem 5d
Calculate the LS periodogram for the eclipsing binary, with nterms = 1, 2, 3, 4, 5. Report the best-fit period for each of these models.
Hint - we have good reason to believe that the best fit frequency is < 3 in this case, so set maximum_frequency = 3.
Step27: Interestingly, for the $n=2, 3, 4$ harmonics, it appears as though we get the period that we have visually confirmed. However, by $n=5$ harmonics we no longer get a reasonable answer. Again - be very careful about adding harmonics, especially in large analysis pipelines.
What does the $n = 4$ model look like?
Step28: This example also shows why it is somewhat strange to provide an uncertainty with a LS best-fit period. Errors tend to be catastrophic, and not some small fractional percentage, with the LS periodogram.
In the case of the above EB, the "best-fit" period was off by a factor 2. This is not isolated to EBs, however, LS periodograms frequently identify a correct harmonic of the true period, but not the actual period of variability.
Conclusions
The Lomb-Scargle periodogram is a useful tool to search for sinusoidal signals in noisy, irregular data.
However, as highlighted throughout, there are many ways in which the methodology can run awry.
In closing, I will summarize some practical considerations from VanderPlas (2017) | Python Code:
def gen_periodic_data(x, period=1, amplitude=1, phase=0, noise=0):
'''Generate periodic data given the function inputs
    y = A*sin(2*pi*x/p - phase) + noise
Parameters
----------
x : array-like
input values to evaluate the array
period : float (default=1)
period of the periodic signal
amplitude : float (default=1)
amplitude of the periodic signal
phase : float (default=0)
phase offset of the periodic signal
noise : float (default=0)
variance of the noise term added to the periodic signal
Returns
-------
y : array-like
Periodic signal evaluated at all points x
'''
y = amplitude*np.sin(2*np.pi*x/(period) - phase) + np.random.normal(0, np.sqrt(noise), size=len(x))
return y
Explanation: Periodic Signals & The Lomb-Scargle Periodogram
Version 0.1
By AA Miller (CIERA/Northwestern & Adler)
This notebook discusses the detection of periodic signals in noisy, irregular data (the standard for ground-based astronomical surveys). The discussion below is strongly influenced by Understanding the Lomb-Scargle Periodogram, by Jake VanderPlas, a former DSFP lecturer (VanderPlas 2017). Beyond that, the original papers by Lomb 1976 and Scargle 1982 are also worth a read.
There are many, many papers on the use of the Lomb-Scargle periodogram (and other period detection methods). Former DSFP lecturer, Matthew Graham and colleagues conducted a systematic analysis of many of the most popular tools used to search for periodic signals on actual astronomical data (Graham et al. 2013). [Somewhat to my (our?) dismay, they found that none of the solutions work really well across all use cases.]
Problem 1) Helper Functions
Throughout this lecture, there are a number of operations that we will be performing again and again. Thus, we'll create a few helper functions to minimize repetitive commands (e.g., plotting a phase folded light curve).
Problem 1a
Create a function, gen_periodic_data, that creates simulated data (including noise) over a grid of user supplied positions:
$$ y = A\,cos\left(\frac{2{\pi}x}{P} - \phi\right) + \sigma_y$$
where $A, P, \phi$ are inputs to the function. gen_periodic_data should include Gaussian noise, $\sigma_y$, for each output $y_i$.
End of explanation
x = np.linspace(0, 10, 500)
y = gen_periodic_data(x, period=np.pi, amplitude=2)
fig, ax = plt.subplots()
ax.scatter(x, y, edgecolors='0.2')
ax.set_xlabel('x')
ax.set_ylabel('y')
fig.tight_layout()
Explanation: Problem 1b
Generate a noise-free signal with $A = 2$ and $p = \pi$ over a regular grid between 0 and 10. Plot the results (and make sure gen_periodic_data behaves as you would expect).
End of explanation
def phase_plot(x, y, period, y_unc = 0.0):
'''Create phase-folded plot of input data x, y
Parameters
----------
x : array-like
data values along abscissa
y : array-like
data values along ordinate
period : float
period to fold the data
y_unc : array-like
        uncertainty of the y values
'''
phases = (x/period) % 1
if type(y_unc) == float:
y_unc = np.zeros_like(x)
plot_order = np.argsort(phases)
fig, ax = plt.subplots()
ax.errorbar(phases[plot_order], y[plot_order], y_unc[plot_order],
fmt='o', mec="0.2", mew=0.1)
ax.set_xlabel("phase")
ax.set_ylabel("signal")
fig.tight_layout()
Explanation: Problem 1c
Create a function, phase_plot, that takes x, y, and $P$ as inputs to create a phase-folded light curve (i.e., plot the data at their respective phase values given the period $P$).
Include an optional argument, y_unc, to include uncertainties on the y values, when available.
End of explanation
phase_plot( # complete
Explanation: Problem 1d
Plot the phase folded data generated in 1b.
Does your plot match your expectations?
End of explanation
fourier_pairs_plot()
Explanation: An important, but necessary, aside ––
the slightly unusual scheduling for this week means that the lectures will not unfold in the most natural order. An in depth examination of Fourier methods will occur on Wednesday and Thursday, and ideally this would happen before any discussion of the Lomb-Scargle periodogram.
However, Cohort 2 has already seen Lomb-Scargle and Gaussian Processes, so we will start with that and then cover Fourier methods. I'll proceed with a brief review of Fourier analysis, if you haven't seen this before it may be worth revisiting this notebook later in the week.
(Either way - the main takeaways from Lomb-Scargle will be clear by the end of this notebook)
Problem 2) A Brief Review of Fourier Analysis
In astronomical time series, we crave the detection of periodic signals because they can often provide fundamental insight into the sources we are studying (e.g., masses in a binary, classifications for pulsating stars, etc).
The standard$^\dagger$ choice for most astronomers to identify such signals is the Lomb-Scargle (LS) periodogram (Lomb 1976; Scargle 1982).
$^\dagger$Standard does not mean best, fastest, or even correct depending on your specific application.
At the heart of understanding any periodic signals is Fourier analysis. Thus, to understand how to interpret the LS periodogram, we first need to consider Fourier transforms.
Note - here we aim to jog your memory regarding Fourier analysis. Detailed mathematical calculations can be found elsewhere.
Given a continuous signal, $g(t)$ the Fourier transform of that signal is defined as:
$$\hat{\mathrm{g}}(f) = \int_{-\infty}^{\infty} g(t) \,e^{-2\pi i f t} \,dt,$$
where $i$ is an imaginary number. The inverse of this equation is defined as:
$$ g(t) = \int_{-\infty}^{\infty} \hat{\mathrm{g}}(f) \,e^{2\pi i f t} \,df.$$
For convenience, we will use the Fourier transform operator $\mathcal{F}$, from which the above equations reduce to:
$$\mathcal{F}(g) = \hat g$$
$$\mathcal{F}^{-1}(\hat{g}) = g$$
There are many useful properties of the Fourier transform including that the Fourier transform is a linear operator. Additionally, a time shift imparts a phase shift. Perhaps most importantly for our present purposes, however, is that the squared amplitude of the resulting transform allows us to get rid of the imaginary component and measure the power spectral density or power spectrum:
$$ \mathcal{P}_g = \left|\mathcal{F}(g)\right|^2.$$
The power spectrum is a real-valued function that quantifies the contribution of each frequency $f$ to the total signal in $g$. The power spectrum thus provides a way to identify the dominant frequency in any given signal.
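As a minimal illustration (a sketch assuming a pure sinusoid sampled on a uniform grid; the numbers are arbitrary), the power spectrum can be estimated with numpy's FFT and used to pick out the dominant frequency:
import numpy as np
t = np.arange(0, 10, 0.01)                 # uniform sampling
g = np.sin(2*np.pi*1.5*t)                  # pure sinusoid at f = 1.5
power = np.abs(np.fft.rfft(g))**2          # power spectrum |F(g)|^2
freqs = np.fft.rfftfreq(len(t), d=0.01)
print(freqs[np.argmax(power)])             # ~1.5, the dominant frequency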
Next we consider some common Fourier pairs, that will prove helpful in our interpretation of the LS periodogram.
End of explanation
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
fig, ax = plt.subplots()
ax.scatter( # complete
ax.plot( # complete
fig.tight_layout()
Explanation: The common Fourier pairs are especially useful in light of the convolution theorem. Fourier transforms convert convolutions into point-wise products. We define a convolution as:
$$ [f \ast g] (t) = \int_{-\infty}^{\infty} f(\tau) \,g(t - \tau) \,d\tau,$$
where $\ast$ is the convolution symbol. From the convolution theorem:
$$ \mathcal{F} {f \ast g} = \mathcal{F}(f) \mathcal{F}(g) $$
Furthermore, the Fourier transform of a product is equal to the convolution of the Fourier transforms:
$$ \mathcal{F}{f \cdot g} = \mathcal{F}(f) \ast \mathcal{F}(g) $$
This property will be very important for understanding the Lomb-Scargle periodogram.
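A quick numerical sanity check of the convolution theorem (a hedged sketch: the signals are just random arrays, and both sides are zero-padded to a common length before transforming):
import numpy as np
rng = np.random.RandomState(42)
f_sig, g_sig = rng.randn(128), rng.randn(128)
lhs = np.fft.fft(np.convolve(f_sig, g_sig, mode='full'), n=256)   # F{f * g}
rhs = np.fft.fft(f_sig, n=256) * np.fft.fft(g_sig, n=256)         # F(f) F(g)
print(np.allclose(lhs, rhs))                                      # True, up to floating point error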
Fourier transforms are all well and good, but ultimately we desire a measure of periodicity in actual observations of astrophysical sources, which cannot be (a) continuous, or (b) infinite.
The first thing to understand with real-world observations is the Nyquist frequency limit. If observations are obtained in a uniformly spaced manner at a rate of $f_0 = 1/T$, one can only recover the frequency information if the signal is band-limited between frequencies $\pm f_0/2$. Put another way, the highest frequency that can be detected in such data is $f_0/2$.
This result can be (somewhat) intuited by looking at simulated data.
Problem 2a
Generate and plot a periodic signal with $f = f_\mathrm{Ny} = 1/2$ on a grid from 0 to 10, comprising of 10 even samples (i.e., 0, 1, 2, 3, ..., 10). Overplot the underlying signal in addition to the observations.
End of explanation
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
fig, ax = plt.subplots()
ax.scatter( # complete
ax.plot( # complete
fig.tight_layout()
Explanation: Sampling a signal directly at the Nyquist frequency results in a lack of any variability. But does this just mean that $f_\mathrm{Ny}$ is special? What happens at $f > f_\mathrm{Ny}$?
Problem 2b
As above, generate and plot a periodic signal with $f = 0.7$ on an even grid from 0 to 10. Overplot the underlying signal in addition to the observations.
End of explanation
x = # complete
y = # complete
x_signal = # complete
y_signal = # complete
# complete
# complete
# complete
# complete
Explanation: From the plot the signal is clearly variable (unlike when $f = f_\mathrm{Ny}$). However, there are fewer than 2 observations per cycle.
Problem 2c
Overplot a source with $f = 2.7$ on the same data shown in 2b.
End of explanation
def chi2( # complete
# complete
# complete
# complete
Explanation: The observations are identical! Here is what you need to remember about the Nyquist frequency:
If you are going to obtain observations at regular intervals, and there is a specific signal you wish to detect, then be sure to sample the data such that $f_\mathrm{Ny} > f_\mathrm{signal}$.
For all $f > f_\mathrm{Ny}$, $f$ will be aliased with $f \pm 2n f_\mathrm{Ny}$ signals, where $n$ is an integer. Practically speaking, this means it does not make sense to search for signals with $f > f_\mathrm{Ny}$.
Finally, (and this is something that is often wrong in the literature) there is no Nyquist limit for unevenly sampled data (see VanderPlas 2017 for further details). Thus, for (virtually all) ground-based observing one need not worry about the Nyquist limit.
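A quick numerical check of the aliasing relation above (a sketch reusing the numbers from Problems 2b and 2c: one observation per unit time gives $f_\mathrm{Ny} = 0.5$, so $f = 2.7 = 0.7 + 2nf_\mathrm{Ny}$ with $n = 2$ is an alias of $f = 0.7$):
import numpy as np
t = np.arange(0, 10, 1.0)                  # evenly spaced samples, f_Ny = 0.5
y1 = np.sin(2*np.pi*0.7*t)
y2 = np.sin(2*np.pi*2.7*t)                 # 2.7 = 0.7 + 2*n*f_Ny with n = 2
print(np.allclose(y1, y2))                 # True: the sampled values are identical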
Staying on the topic of non-continuous observations, I present without derivation the discrete Fourier transform:
$$ \hat g_\mathrm{obs}(f) = \sum_{n = 0}^N g_n\,e^{-2\pi i f n\Delta t}$$
where $g_n = g(n\Delta t)$, and $\Delta t$ is the sampling interval. Our discussion of the Nyquist frequency tells us that we cannot detect frequencies $f > 1/(2\Delta t)$. Thus, the relevant frequencies to search for power given $\Delta t$ are between 0 and $f_\mathrm{Ny}$, which we can sample on a grid $\Delta f = 1/(N \Delta t)$. From there:
$$\hat g_k = \sum_{n = 0}^N g_n\,e^{-2\pi i kn/N}$$
where $\hat g_k = \hat g_\mathrm{obs} (k\Delta f)$. This is the discrete Fourier transform.
I said a full derivation will not be provided, and that is true. To understand how we went from a continuous integral to the summation above, recall that regular observations of a continuous signal over a finite interval are equivalent to multiplying the continuous signal by a Dirac comb function and a window function. The delta functions from the Dirac comb function collapse the integral to a sum, while the window function limits that sum from $0$ to $N$.
From the discrete Fourier transform we can then calculate the periodogram (an estimator of the power spectrum):
$$\mathcal{P}(f) = \frac{1}{N}\left|\sum_{n=1}^{N} g_n\,e^{-2\pi i f n\Delta t}\right|^2$$
which is also called the classical periodogram or the Schuster periodogram (Schuster 1898).
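For concreteness, the classical periodogram can be evaluated directly from this sum (a deliberately slow, illustrative sketch -- the function name and any frequency grid you pass it are my own choices, not part of the exercises below):
import numpy as np
def classical_periodogram(t, y, freqs):
    '''Directly evaluate P(f) = (1/N) |sum_n y_n exp(-2 pi i f t_n)|**2 on a grid of frequencies.'''
    return np.array([np.abs(np.sum(y * np.exp(-2j*np.pi*f*t)))**2 for f in freqs]) / len(t)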
Problem 3) The LS Periodogram
Of course, we ultimately care about applications where the data are not perfectly uniformly sampled (even Kepler data is not uniformly sampled). We can re-write the classical periodogram in a more general way:
$$\mathcal{P}(f) = \frac{1}{N}\left|\sum_{n=1}^{N} g_n\,e^{-2\pi i f t_n}\right|^2$$
where $t_n$ corresponds to the observation times. Irregular sampling removes a lot of the nice statistical properties of the discrete Fourier transform. Scargle (1982) was able to address these issues via a generalized form of the periodogram.
[For completeness, there are some long equations that I should include here, but I won't...]
Instead, I will simplify things slightly by using the fact that Scargle's modified periodogram is identical to the result one obtains by fitting a sinusoid model to the data at each frequency $f$ and constructing a "periodogram" from the corresponding $\chi^2$ values at each frequency $f$ [this was considered in great detail by Lomb (1976)].
Note - to this day I find this particular identity remarkable.
Thus, using the model:
$$y(t;f) = A_f \sin(2\pi f(t - \phi_f))$$
we can calculate the $\chi^2$ for every frequency $f$:
$$\chi^2 = \sum_n (y_n - y(t_n; f))^2$$
The "best" model for a given frequency requires the selection of $A_f$ and $\phi_f$ that minimizes $\chi^2$, which we will call $\hat \chi^2$. Scargle (1982) then showed that the Lomb-Scargle periodogram can be written
$$\mathcal{P}_\mathrm{LS}(f) = \frac{1}{2}\left[ \hat \chi^2_0 - \hat \chi^2(f) \right]$$
where $\hat \chi^2_0$ is the value for a non-varying reference model.
This realization further enables the inclusion of observational uncertainty in the periodogram, via a familiar adjustment to the $\chi^2$ value:
$$\chi^2 = \sum_n \left(\frac{y_n - y(t_n; f)}{\sigma_n}\right)^2$$
where $\sigma_n$ is the uncertainty on the individual measurements, $y_n$.
From here it follows that we can construct a LS periodogram.
Problem 3a
Write a function, chi2, to calculate the $\chi^2$ given $f$, $A_f$, and $\phi$, for observations $y_n$ with uncertainties $\sigma_{y,n}$ taken at times $t_n$.
Hint - it may be helpful to access $A_f$, and $\phi$ from a single variable theta, where a = theta[0] and phi = theta[1]
End of explanation
from scipy.optimize import minimize
def min_chi2( # complete
# complete
# complete
Explanation: Problem 3b
Write a function to minimize the $\chi^2$ given everything but $A_f$ and $\phi_f$.
Hint - you may find minimize within the scipy package useful.
End of explanation
def ls_periodogram( # complete
psd = np.empty_like(f_grid)
chi2_0 = # complete
for f_num, f in enumerate(f_grid):
psd[f_num] = # complete
return psd
Explanation: Problem 3c
Write a function, ls_periodogram, to calculate the LS periodogram for observations $y$, $\sigma_y$, $t$ over a frequency grid f_grid.
End of explanation
np.random.seed(23)
# calculate the periodogram
x = # complete
y = # complete
y_unc = # complete
f_grid = # complete
psd_ls = # complete
# plot the periodogram
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
fig.tight_layout()
Explanation: Problem 3d
Generate a periodic signal with 100 observations taken over a time period of 10 days. Use an input period of 5.25, amplitude of 7.4, and variance of the noise = 0.8. Then compute and plot the periodogram for the simulated data. Do you recover the simulated period?
Hint - set the minimum frequency in the grid to $1/T$ where $T$ is the duration of the observations. Set the maximum frequency to 10, and use an equally spaced grid with 50 points.
End of explanation
# calculate the periodogram
f_grid = # complete
psd_ls = # complete
# plot the periodogram
fig,ax = plt.subplots()
ax.plot(# complete
# complete
# complete
fig.tight_layout()
print("The best fit period is: {:.4f}".format( # complete
Explanation: Problem 3e
For the same data, include 1000 points in f_grid and calculate and plot the periodogram.
Now do you recover the correct period?
End of explanation
phase_plot( # complete
Explanation: Problem 3f
Plot the phase-folded data at the newly found "best" fit period.
End of explanation
f_min = # complete
f_max = # complete
delta_f = # complete
f_grid = np.arange( # complete
print("{:d} grid points are needed to sample the periodogram".format( # complete
Explanation: Congratulations
You did it! You just developed the software necessary to find periodic signals in sparsely sampled, noisy data.
You are ready to conquer LSST.
But wait!
There should be a few things that are bothering you.
First and foremost, why did we use a grid with 50 points and then increase that to 1000 points for the previous simulation?
There are many important ramifications following the choice of an evaluation grid for the LS periodogram. When selecting the grid upon which to evaluate $f$ one must determine both the limits for the grid and the spacing within the grid.
The minimum frequency is straightforward: $f_\mathrm{min} = 1/T$ corresponds to a signal that experiences 1 cycle in the span of the data. Computationally, $f_\mathrm{min} = 0$ does not add much time.
The maximum frequency is straightforward: $f_\mathrm{Ny}$ (if you have evenly spaced data). But what if the data are not evenly spaced (a situation for which, as we said, $f_\mathrm{Ny}$ does not exist)?
There are many ad hoc methods described in the literature, such as $f_\mathrm{max} = 1/\langle\Delta T\rangle$, where $\langle\Delta T\rangle$ is the mean separation of consecutive observations. Again - this is not correct.
VanderPlas (2017) discusses maximum frequencies for non-uniform data (see that paper for more details). My practical advice is to set $f_\mathrm{max}$ to the maximum frequency that you might expect to see in the data (for example, with the exception of a few extreme white dwarf systems, essentially no stars show periodicity at $< 1\,\mathrm{hr}$).
Of course, we still haven't decided what grid to adopt. As we saw above - if we use too few points, we will not resolve the peak in the periodogram. Alternatively, if we include too many points in the grid we will waste a lot of computation.
Fortunately, figuring out $\Delta f$ is relatively straightforward: from above we saw that the Fourier transform of a window function of length $T$ produces a sinc signal with width $\sim 1/T$. Thus, we need $\Delta f$ to sample $\sim 1/T$, which means $\Delta f = 1/(n_0 T)$, where $n_0$ is a constant, and 5 is a good choice for $n_0$.
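Putting these choices together, a generic grid helper might look like the sketch below (the function name and defaults are my own; the next problem asks you to plug in the LSST numbers yourself):
def freq_grid(T, f_max, n0=5, f_min=None):
    '''Frequency grid with f_min = 1/T by default and spacing delta_f = 1/(n0*T).'''
    if f_min is None:
        f_min = 1./T
    return np.arange(f_min, f_max, 1./(n0*T))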
Problem 3g
Calculate the optimal grid for sampling LSST data. Assume that the time measurements are carried out in units of days, the survey lasts for 10 years and the shortest period expected in the data is 1 hr.
How many evaluations are needed to calculate the periodogram?
End of explanation
from astropy.stats import LombScargle
frequency, power = LombScargle(x, y, y_unc).autopower()
Explanation: Problem 4) Other Considerations and Faster Implementations
While ls_periodogram functions well, it would take a long time to evaluate $\sim4\times 10^5$ frequencies for $\sim2\times 10^7$ variable LSST sources. Fortunately, there are significantly faster implementations, including (as you may have guessed) one in astropy.
Problem 4a
LombScargle in astropy.stats is fast. Run it below to compare to ls_periodogram.
End of explanation
fig, ax = plt.subplots()
ax.plot( # complete
# complete
# complete
# complete
fig.tight_layout()
Explanation: Unlike ls_periodogram, LombScargle effectively takes no time to run on the simulated data.
Problem 4b
Plot the periodogram for the simulated data.
End of explanation
# complete
freq_no_mean, power_no_mean = LombScargle( # complete
freq_fit_mean, power_fit_mean = LombScargle( # complete
fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot( # complete
ax2.plot( # complete
ax1.set_xlim(0,15)
fig.tight_layout()
Explanation: There are many choices regarding the calculation of the periodogram, so read the docs.
Floating Mean Periodogram
A basic assumption that we previously made is that the data are "centered" - in other words, our model explicitly assumes that the signal oscillates about a mean of 0.
For astronomical applications, this assumption can be harmful. Instead, it is useful to fit for the mean of the signal in addition to the periodic component (as is the default in LombScargle):
$$y(t;f) = y_0(f) + A_f \sin(2\pi f(t - \phi_f)).$$
To illustrate why this is important for astronomy, assume that any signal fainter than $-2$ in our simulated data cannot be detected.
Problem 4c
Remove the observations from x and y where $y \le -2$ and calculate the periodogram both with and without fitting the mean (fit_mean = False in the call to LombScargle). Plot the periodograms. Do both methods recover the correct period?
End of explanation
fit_mean_model = LombScargle(x[bright], y[bright], y_unc[bright],
fit_mean=True).model(np.linspace(0,10,1000),
freq_fit_mean[np.argmax(power_fit_mean)])
no_mean_model = LombScargle(x[bright], y[bright], y_unc[bright],
fit_mean=False).model(np.linspace(0,10,1000),
freq_no_mean[np.argmax(power_no_mean)])
fig, ax = plt.subplots()
ax.errorbar(x[bright], y[bright], y_unc[bright], fmt='o', label='data')
ax.plot(np.linspace(0,10,1000), fit_mean_model, label='fit mean')
ax.plot(np.linspace(0,10,1000), no_mean_model, label='no mean')
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.legend()
fig.tight_layout()
Explanation: We can see that the best fit model doesn't match the signal in the case where we do not allow a floating mean.
End of explanation
# set up simulated observations
t_obs = np.arange(0, 10*365, 3) # 3d cadence
# complete
# complete
# complete
y = gen_periodic_data( # complete
# complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel("Time (d)")
ax.set_ylabel("Flux (arbitrary units)")
Explanation: Window Functions
Recall that the convolution theorem tells us that:
$$\mathcal{F}[f\cdot g] = \mathcal{F}(f) \ast \mathcal{F}(g)$$
Telescope observations are effectively the product of a continuous signal with several delta functions (corresponding to the times of observations). As a result, the convolution that produces the periodogram will retain signal from both the source and the observational cadence.
To illustrate this effect, let us simulate "realistic" observations for a 10 year telescope survey. We do this by assuming that a source is observed every 3 nights (the LSST cadence) within $\pm 4\,\mathrm{hr}$ of the same time, and that $\sim 30\%$ of the observations did not occur due to bad weather. We further assume that the source cannot be observed for 40% of the year because it is behind the sun.
Simulate a periodic signal with this cadence, a period = 220 days (typical for Miras), amplitude = 12.4, and noise = 1. Plot the simulated light curve.
Problem 4d
Simulate a periodic signal with 3 day cadence (and the observing conditions described above), a period = 220 days (typical for Miras), amplitude = 12.4, and variance of the noise = 1. Plot the simulated light curve.
End of explanation
ls = LombScargle( # complete
freq_window, power_window = # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_ylabel("Power")
ax.set_xlabel("Period (d)")
ax.set_xlim(0,500)
axins = plt.axes([.2, .65, .5, .2])
axins.plot( # complete
axins.set_xlim(0,5)
Explanation: Problem 4e
Calculate and plot the periodogram for the window function (i.e., set y = 1 in LombScargle) of the observations. Do you notice any significant power?
Hint - you may need to zoom in on the plot to see all the relevant features.
End of explanation
ls = LombScargle( # complete
frequency, power = # complete
fig, (ax,ax2) = plt.subplots(2,1, sharex=True)
ax.plot( # complete
ax.set_ylabel("Power")
ax.set_ylim(0,1)
ax2.plot( # complete
ax2.set_ylabel("Power")
ax2.set_xlabel("Period (d)")
ax2.set_xlim(0,10)
fig.tight_layout()
Explanation: Interestingly, there are very strong peaks in the data at $P \approx 3\,\mathrm{d} \;\&\; 365\,\mathrm{d}$.
What is this telling us? Essentially that observations are likely to be repeated at intervals of 3 or 365 days (shorter period spikes are aliases of the 3 d peak).
This is important to understand, however, because this same power will be present in the periodogram where we search for the periodic signal.
Problem 4f
Calculate the periodogram for the data and compare it to the periodogram for the window function.
End of explanation
data = # complete
fig, ax = plt.subplots()
ax.errorbar( # complete
ax.set_xlabel('HJD (d)')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
Explanation: Uncertainty on the best-fit period
How do we report uncertainties on the best-fit period from LS? For example, for the previously simulated LSST light curve we would want to report something like $P = 102 \pm 4\,\mathrm{d}$. However, the uncertainty from LS periodograms cannot be determined in this way.
Naively, one could report the width of the peak in the periodogram as the uncertainty in the fit. However, we previously saw that the peak width $\propto 1/T$ (the peak width does not decrease as the number of observations or their S/N increases; see VanderPlas 2017). Reporting such an uncertainty is particularly ridiculous for long duration surveys, where the peaks become extremely narrow.
An alternative approach is to report the False Alarm Probability (FAP), which estimates the probability that a dataset with no periodic signal could produce a peak of similar magnitude, due to random gaussian fluctuations, as the data.
There are a few different methods to calculate the FAP. Perhaps the most useful, however, is the bootstrap method. To obtain a bootstrap estimate of the LS FAP one leaves the observation times fixed, and then draws new observation values with replacement from the actual set of observations. This procedure is then repeated many times to determine the FAP.
One nice advantage of this procedure is that any effects due to the window function will be imprinted in each iteration of the bootstrap resampling.
The major disadvantage is that many, many periodograms must be calculated. The rule of thumb is that to achieve a FAP $= p_\mathrm{false}$, one must run $n_\mathrm{boot} \approx 10/p_\mathrm{false}$ bootstrap periodogram calculations. Thus, a FAP $\approx 0.1\%$ requires an increase of 1000 in computational time.
LombScargle provides the false_alarm_probability method, including a bootstrap option. We skip that for now in the interest of time.
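For reference, a bare-bones bootstrap FAP might look like the sketch below (my own helper, not part of astropy; it assumes the LombScargle import used earlier, and with n_boot = 100 it can only probe FAPs down to roughly 0.1, per the rule of thumb above):
def bootstrap_fap(t, y, dy, peak_power, n_boot=100, seed=23):
    '''Fraction of resampled periodograms whose highest peak exceeds the observed peak power.'''
    rng = np.random.RandomState(seed)
    max_power = np.empty(n_boot)
    for i in range(n_boot):
        resample = rng.randint(0, len(y), len(y))    # draw (y, dy) pairs with replacement, times fixed
        _, power = LombScargle(t, y[resample], dy[resample]).autopower()
        max_power[i] = power.max()
    return np.mean(max_power >= peak_power)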
As a final note of caution - be wary of over-interpreting the FAP. The specific question answered by the FAP is: what is the probability that Gaussian fluctuations could produce a signal of equivalent magnitude? Whereas the question we generally want to answer is: did a periodic signal produce these data?
These questions are very different, and thus, the FAP cannot be used to prove that a source is periodic.
Problem 5) Real-world considerations
We have covered many, though not all, considerations that are necessary when employing a Lomb-Scargle periodogram. We have not yet, however, encountered real-world data. Here we highlight some of the issues associated with astronomical light curves.
We will now use LS to analyze actual data from the All Sky Automated Survey (ASAS). Download the example light curve.
Problem 5a
Read in the light curve from example_asas_lc.dat. Plot the light curve.
Hint - I recommend using astropy Tables or pandas dataframe.
End of explanation
frequency, power = # complete
# complete
fig,ax = plt.subplots()
ax.plot(# complete
ax.set_ylabel("Power")
ax.set_xlabel("Period (d)")
ax.set_xlim(0, 800)
axins = plt.axes([.25, .55, .6, .3])
axins.plot( # complete
axins.set_xlim(0,5)
fig.tight_layout()
# plot the phase folded light curve
phase_plot( # complete
Explanation: Problem 5b
Use LombScargle to measure the periodogram. Then plot the periodogram and the phase folded light curve at the best-fit period.
Hint - search periods longer than 2 hr.
End of explanation
phase_plot( # complete
Explanation: Problem 5c
Now plot the light curve folded at twice the best LS period.
Which of these 2 is better?
End of explanation
fig, ax = plt.subplots()
ax.errorbar(data['hjd']/ls_period % 1, data['mag'], data['mag_unc'], fmt='o', zorder=-1)
ax.plot(np.linspace(0,1,1000),
LombScargle(data['hjd'],data['mag'], data['mag_unc']).model(np.linspace(0,1,1000)*ls_period, 1/ls_period) )
ax.set_xlabel('Phase')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
Explanation: Herein lies a fundamental issue regarding the LS periodogram: the model does not search for "periodicity." The LS model asks if the data support a sinusoidal signal. As astronomers we typically assume this question is good enough, but as we can see in the example of this eclipsing binary that is not the case [and this is not limited to eclipsing binaries].
We can see why LS is not sufficient for an EB by comparing the model to the phase-folded light curve:
End of explanation
for i in np.arange(1,6):
frequency, power = # complete
# complete
print('For {:d} harmonics, P_LS = {:.8f}'.format( # complete
Explanation: One way to combat this very specific issue is to include more Fourier terms at the harmonic of the best fit period. This is easy to implement in LombScargle with the nterms keyword. [Though always be weary of adding degrees of freedom to a model, especially at the large pipeline level of analysis.]
Problem 5d
Calculate the LS periodogram for the eclipsing binary, with nterms = 1, 2, 3, 4, 5. Report the best-fit period for each of these models.
Hint - we have good reason to believe that the best fit frequency is < 3 in this case, so set maximum_frequency = 3.
End of explanation
best_period = 0.73508568
fig, ax = plt.subplots()
ax.errorbar((data['hjd'])/best_period % 1, data['mag'], data['mag_unc'], fmt='o',zorder=-1)
ax.plot(np.linspace(0,1,1000),
LombScargle(data['hjd'],data['mag'], data['mag_unc'],
nterms=4).model(np.linspace(0,1,1000)*best_period, 1/best_period) )
ax.set_xlabel('Phase')
ax.set_ylabel('V (mag)')
ax.set_ylim(ax.get_ylim()[::-1])
fig.tight_layout()
Explanation: Interestingly, for the $n=2, 3, 4$ harmonics, it appears as though we get the period that we have visually confirmed. However, by $n=5$ harmonics we no longer get a reasonable answer. Again - be very careful about adding harmonics, especially in large analysis pipelines.
What does the $n = 4$ model look like?
End of explanation
def fourier_pairs_plot():
fig, ax = plt.subplots(4, 2, figsize=(10, 6))
fig.subplots_adjust(left=0.04, right=0.98, bottom=0.02, top=0.95,
hspace=0.3, wspace=0.2)
x = np.linspace(-5, 5, 1000)
for axi in ax.flat:
axi.xaxis.set_major_formatter(plt.NullFormatter())
axi.yaxis.set_major_formatter(plt.NullFormatter())
# draw center line
axi.axvline(0, linestyle='dotted', color='gray')
axi.axhline(0, linestyle='dotted', color='gray')
style_re = dict(linestyle='solid', color='k', linewidth=2)
style_im = dict(linestyle='solid', color='gray', linewidth=2)
text_style = dict(size=14, color='gray')
# sine -> delta
ax[0, 0].plot(x, np.cos(x),**style_re)
ax[0, 0].set(xlim=(-5, 5), ylim=(-1.2, 1.2))
ax[0, 0].annotate('', (-np.pi, 0), (np.pi, 0),
arrowprops=dict(arrowstyle='|-|', color='gray'))
ax[0, 0].text(0, 0, '$1/f_0$', ha='center', va='bottom', **text_style)
ax[0, 0].set_title('Sinusoid')
ax[0, 1].plot([-5, 2, 2, 2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].plot([-5, -2, -2, -2, 5], [0, 0, 1, 0, 0], **style_re)
ax[0, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[0, 1].annotate('', (0, 0.4), (2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].annotate('', (0, 0.4), (-2, 0.4), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[0, 1].text(1, 0.45, '$+f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].text(-1, 0.45, '$-f_0$', ha='center', va='bottom', **text_style)
ax[0, 1].set_title('Delta Functions')
# gaussian -> gaussian
ax[1, 0].plot(x, np.exp(-(2 * x) ** 2), **style_re)
ax[1, 0].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 0].annotate('', (0, 0.35), (0.6, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 0].text(0, 0.4, '$\sigma$', ha='center', va='bottom', **text_style)
ax[1, 0].set_title('Gaussian')
ax[1, 1].plot(x, np.exp(-(x / 2) ** 2), **style_re)
ax[1, 1].set(xlim=(-5, 5), ylim=(-0.2, 1.2))
ax[1, 1].annotate('', (0, 0.35), (2, 0.35), arrowprops=dict(arrowstyle='<-', color='gray'))
ax[1, 1].text(0, 0.4, '$(2\pi\sigma)^{-1}$', ha='center', va='bottom', **text_style)
ax[1, 1].set_title('Gaussian')
# top hat -> sinc
ax[2, 0].plot([-2, -1, -1, 1, 1, 2], [0, 0, 1, 1, 0, 0], **style_re)
ax[2, 0].set(xlim=(-2, 2), ylim=(-0.3, 1.2))
ax[2, 0].annotate('', (-1, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 0].text(0.0, 0.5, '$T$', ha='center', va='bottom', **text_style)
ax[2, 0].set_title('Top Hat')
ax[2, 1].plot(x, np.sinc(x), **style_re)
ax[2, 1].set(xlim=(-5, 5), ylim=(-0.3, 1.2))
ax[2, 1].annotate('', (-1, 0), (1, 0), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[2, 1].text(0.0, 0.0, '$2/T$', ha='center', va='bottom', **text_style)
ax[2, 1].set_title('Sinc')
# comb -> comb
ax[3, 0].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 0].set(xlim=(-5.5, 5.5), ylim=(-0.2, 1.2))
ax[3, 0].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 0].text(0.5, 0.6, '$T$', ha='center', va='bottom', **text_style)
ax[3, 0].set_title('Dirac Comb')
ax[3, 1].plot([-5.5] + sum((3 * [i] for i in range(-5, 6)), []) + [5.5],
[0] + 11 * [0, 1, 0] + [0], **style_re)
ax[3, 1].set(xlim=(-2.5, 2.5), ylim=(-0.2, 1.2));
ax[3, 1].annotate('', (0, 0.5), (1, 0.5), arrowprops=dict(arrowstyle='<->', color='gray'))
ax[3, 1].text(0.5, 0.6, '$1/T$', ha='center', va='bottom', **text_style)
ax[3, 1].set_title('Dirac Comb')
for i, letter in enumerate('abcd'):
ax[i, 0].set_ylabel('({0})'.format(letter), rotation=0)
# Draw arrows between pairs of axes
for i in range(4):
        left = ax[i, 0].bbox.transformed(fig.transFigure.inverted()).bounds   # inverse_transformed was removed in newer matplotlib
        right = ax[i, 1].bbox.transformed(fig.transFigure.inverted()).bounds
x = 0.5 * (left[0] + left[2] + right[0])
y = left[1] + 0.5 * left[3]
fig.text(x, y, r'$\Longleftrightarrow$',
ha='center', va='center', size=30)
Explanation: This example also shows why it is somewhat strange to provide an uncertainty with a LS best-fit period. Errors tend to be catastrophic, and not some small fractional percentage, with the LS periodogram.
In the case of the above EB, the "best-fit" period was off by a factor of 2. This is not isolated to EBs, however: LS periodograms frequently identify a harmonic of the true period rather than the actual period of variability.
Conclusions
The Lomb-Scargle periodogram is a useful tool to search for sinusoidal signals in noisy, irregular data.
However, as highlighted throughout, there are many ways in which the methodology can run awry.
In closing, I will summarize some practical considerations from VanderPlas (2017):
Choose an appropriate frequency grid (defaults in LombScargle are not sufficient)
Calculate the LS periodogram for the observation times to search for dominant signals (e.g., 1 day in astro)
Compute LS periodogram for data (avoid multi-Fourier models if signal unknown)
Plot periodogram and various FAP levels (do not over-interpret FAP)
If window function shows strong aliasing, plot phased light curve at each peak (now add more Fourier terms if necessary)
If looking for a particular signal (e.g., detached EBs), consider different methods that better match the expected signal
Inject fake signals into data to understand systematics if using LS in a survey pipeline
Finally, Finally
As a very last note: know that there are many different ways to search for periodicity in astronomical data. Depending on your application (and computational resources), LS may be a poor choice (even though it is often the default choice for astronomers!). Graham et al. (2013) provides a summary of several methods using actual astronomical data. The results of that study show that no single method is best. However, they also show that no single method performs particularly well: the detection efficiencies in Graham et al. (2013) are disappointing given the importance of periodicity in astronomical signals.
Period detection is a fundamental problem for astronomical time-series, but it is especially difficult in "production" mode. Be careful when setting up pipelines to analyze large datasets.
Challenge Problem
Alter gen_periodic_data to create signals with 4 harmonics.
Using our prescription for a simulated "realistic" astronomical cadence (see above), simulate 2 years worth of survey data for 1000 simulated stars with periods drawn randomly from [0.2, 10], amplitude=1, and variance of the noise=0.01.
Compare the best-fit LS period to the simulated period. Do you notice any particular trends?
Code from PracticalLombScargle by Jake Van der Plas
The functions below implement plotting routines developed by Jake to illustrate some properties of Fourier transforms.
This code is distributed under a BSD-3 licence, which is repeated below:
Copyright (c) 2015, Jake Vanderplas
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
Neither the name of PracticalLombScargle nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
End of explanation |
13,916 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I am trying to retrieve percentiles from an array with NoData values. In my case the NoData values are represented by -3.40282347e+38. I thought a masked array would exclude these values (and any others lower than 0) from further calculations. I successfully created the masked array, but the mask has no effect on the np.percentile() function.
import numpy as np
DataArray = np.arange(-5.5, 10.5)
percentile = 50
mdata = np.ma.masked_where(DataArray < 0, DataArray)
mdata = np.ma.filled(mdata, np.nan)
prob = np.nanpercentile(mdata, percentile) |
13,917 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Social Network Analysis
Written by Jin Cheong & Luke Chang
Step1: Primer to Network Analysis
A network is made up of two main components
Step2: Now we can add a node using the .add_node('nodename') method
Step3: Notice there are no connections between the two nodes because we haven't added any edges yet.
To add edges between nodes, for example node1 and node 2, we can use the .add_edge('node1','node2') method on our graph.
Step4: Now you can see that we have an edge connecting Jin and Luke.
There are several different types of graphs.
weighted vs unweighted graphs
a graph is said to be unweighted if every connection is either 1 or 0
it is weighted if the edges take on other values indicating the strength of the relationship.
directed vs undirected graphs
a graphs is undirected if the edges are symmetrical
it is directed if the edges are asymmetrical, meaning that each edge can indicate the direction of the relationship
For simplicity, today we will only cover unweighted and undirected graphs.
There are multiple ways to indicate nodes and edges in a graph.
As illustrated above, we can manually add each node and edge. This approach is fine for small graphs, but won't scale well. It is also possible to add nodes is to use a python dictionary.
The key is the node and the values indicate what that key(node) connects to.
Step5: What should we look for in a network ?
Micromeasures
Centrality measures track the importance of a node derived from its position in the network.
There are four main groups of centrality depending on the type of statistics used
Step6: Luke has the highest number of connections and therefore has the highest degree centrality. Jin is next, and then Eshin.
Betweenness Centrality
Think of Betweenness Centrality as a bottleneck, or as a broker between two otherwise separate parts of a network.
The measure is the fraction of the shortest paths between pairs of other nodes that pass through a given node.
Here Eshin has the highest betweenness centrality because he connects the most different people.
Step7: Closeness Centrality
The closeness centrality measures how close a given node is to any other node.
This is measured by the average length of the shortest path between a node and all other nodes.
Thus you have higher closeness centrality if you can get to everyone faster than others.
Step8: Eigenvector Centrality
The underlying assumption of the Eigenvector centrality is that a node's importance is determined by how important its neighbors are.
For instance, having more influential friends (betweenness centrality) can be more important than simply having a larger number of friends (degree centrality).
Now we can observe that Luke is back to being more important than Eshin (who had higher betweenness centrality), because Luke is friends with Eshin, who also has high centrality.
Step9: Macromeasures
Macromeasures are useful to understand the network as a whole or to compare networks.
We will cover three fundamental measures of the network structure.
1. Degree distribution
2. Average shortest path
3. Clustering Coefficients
Degree Distribution
One fundamental characteristic is the distribution of degrees, i.e., the number of edges for each node, which we can easily plot.
From this distribution we can discern attributes of the network, such as whether everyone is connected to everybody, whether there are many isolated nodes, and so on.
Step10: Average Shortest Path
The average shortest path length is the average, over every pair of nodes in the network, of the smallest number of edges needed to connect them.
This can be a measure of how efficiently/quickly the network distributes information, rumors, disease, etc.
Step11: Clustering coefficient.
Another way to gauge how tight-knit a network is involves looking at how clustered it is.
To estimate clustering, we start from cliques, which can be thought of as subnetworks of three nodes forming a triangle.
We can first calculate, for each node, how many cliques (triangles) it is a member of.
Then we calculate transitivity: the fraction of connected triples of nodes (potential triangles) that actually close into triangles.
Step12: A similar approach is to calculate clustering coefficient for each node and for the graph.
Clustering measures how likely it is, if A has edges to B and C, that B and C are also connected.
For instance, Charlie forms a triangle with Luke and Jin, but Sam and Heidi don't form a single triangle.
We can calculate this for each node and then get an average value overall to get the average clustering coefficient. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
try:
import networkx as nx
except:
# Install NetworkX
!pip install networkx
Explanation: Social Network Analysis
Written by Jin Cheong & Luke Chang
End of explanation
# Initialize Graph object
G = nx.Graph()
Explanation: Primer to Network Analysis
A network is made up of two main components: nodes and edges.
The nodes are the individuals, words, or entities that compose a network, and the edges are the links that connect one node to another.
Here is an example of a node and an edge.
<img src="Figures/nodeedge.png">
Now we can try drawing our own network using the NetworkX package.
Let's start off by initializing a Network, and call it G.
End of explanation
G = nx.Graph()
G.add_node('Jin')
G.add_node('Luke')
G.add_node('Eshin')
plt.figure(figsize=(5,3))
nx.draw(G,with_labels=True,node_size=5000,font_size=20,alpha=.5)
Explanation: Now we can add a node using the .add_node('nodename') method
End of explanation
G = nx.Graph()
G.add_edge('Jin','Luke',weight=1)
G.add_edge('Jin','Eshin',weight=1)
G.add_edge('Luke','Eshin',weight=1)
plt.figure(figsize=(5,3))
pos = nx.spring_layout(G) # One way of specifying a layout for nodes.
nx.draw(G,pos,with_labels=True,node_size=5000,font_size=20,alpha=.5)
Explanation: Notice there are no connections between the two nodes because we haven't added any edges yet.
To add edges between nodes, for example node1 and node 2, we can use the .add_edge('node1','node2') method on our graph.
End of explanation
d = {'Jin':['Luke','Eshin','Antonia','Andy','Sophie','Rob','Charlie','Vanessa'],
'Luke':['Eshin','Antonia','Seth','Andy','Sophie','Rob','Charlie','Vanessa'],
'Antonia':['Luke','Jin','Eshin','Seth','Andy'],
'Eshin':['Heidi','Antonia','Jin','Sam','Andy'],
'Sophie':['Rob','Vanessa'],
'Rob':['Sophie','Vanessa']}
G = nx.Graph(d)
plt.figure(figsize=(15,8))
np.random.seed(2) # Just to keep things same
pos = nx.fruchterman_reingold_layout(G) # Another way of specifying a layout for nodes.
nx.draw(G,pos,with_labels=True,node_size=1500,font_size=20,alpha=.3,width=2)
Explanation: Now you can see that we have an edge connecting Jin and Luke.
There are several different types of graphs.
weighted vs unweighted graphs
a graph is said to be unweighted if every connection is either 1 or 0
it is weighted if the edges take on other values indicating the strength of the relationship.
directed vs undirected graphs
a graphs is undirected if the edges are symmetrical
it is directed if the edges are asymmetrical, meaning that each edge can indicate the direction of the relationship
For simplicity, today we will only cover unweighted and undirected graphs.
There are multiple ways to indicate nodes and edges in a graph.
As illustrated above, we can manually add each node and edge. This approach is fine for small graphs, but won't scale well. Another option is to add nodes using a Python dictionary.
The key is the node and the values indicate what that key (node) connects to.
End of explanation
def print_network(val_map,title=None):
print('\x1b[1;31m'+title+'\x1b[0m')
for k, v in val_map.iteritems():
print k.ljust(15), str(round(v,3)).ljust(30)
def plot_network(val_map, title=None):
values = [val_map.get(node, 0.25) for node in G.nodes()]
plt.figure(figsize=(12,5))
np.random.seed(2)
# pos = nx.spring_layout(G)
pos = nx.fruchterman_reingold_layout(G,dim=2)
ec = nx.draw_networkx_edges(G,pos,alpha=.2)
nc = nx.draw_networkx_nodes(G,pos,node_size=1000,with_labels=True,alpha=.5,cmap=plt.get_cmap('jet'),node_color=values)
nx.draw_networkx_labels(G,pos,font_size=18)
# nx.draw(G,pos,node_size=400,with_labels=True,alpha=.5,cmap=plt.get_cmap('jet'),node_color=values)
plt.colorbar(nc)
plt.axis('off')
plt.suptitle(title,fontsize=18)
plt.show()
# Get degree centrality values
d = nx.degree_centrality(G)
title = 'Degree Centrality Map'
# print centrality values
print_network(d,title)
# plot graph with values
plot_network(d,title)
Explanation: What should we look for in a network ?
Micromeasures
Centrality measures track the importance of a node derived from its position in the network.
There are four main groups of centrality depending on the type of statistics used:
1. degree - how connected a node is
2. betweenness - how important a node is in connecting other nodes
3. closeness - how easily a node can reach other nodes
4. neighbors' characteristics - how important, central, or influential a node's neighbors are.
<img src="Figures/centrality.png",width=300>
Reference:
Social and Economic Networks by Matthew O. Jackson. Princeton University Press.
Picture: Wikipedia article
Other examples: http://www.orgnet.com/sna.html
Degree Centrality
The degree centrality measures the number of edges of a given node.
In other words, it measures how connected a node is.
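As a quick cross-check of the definition (a sketch assuming the graph G built above): degree centrality is simply each node's degree divided by the n - 1 other nodes it could connect to.
n = G.number_of_nodes()
manual_degree = {node: G.degree(node) / float(n - 1) for node in G.nodes()}
print(manual_degree)              # should match nx.degree_centrality(G)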
End of explanation
d = nx.betweenness_centrality(G)
title = "Betweenness Centrality"
print_network(d,title)
plot_network(d,title)
Explanation: Luke has the highest number of connections and therefore has the highest degree centrality. Jin is next, and then Eshin.
Betweenness Centrality
Think of Betweenness Centrality as a bottleneck, or as a broker between two otherwise separate parts of a network.
The measure is the fraction of the shortest paths between pairs of other nodes that pass through a given node.
Here Eshin has the highest betweenness centrality because he connects the most different people.
End of explanation
d = nx.closeness_centrality(G)
title = "Closeness Centrality"
print_network(d,title)
plot_network(d,title)
Explanation: Closeness Centrality
The closeness centrality measures how close a given node is to any other node.
This is measured by the average length of the shortest path between a node and all other nodes.
Thus you have higher closeness centrality if you can get to everyone faster than others.
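The definition can again be checked by hand (a sketch; for a connected graph like this one it matches nx.closeness_centrality(G)):
n = G.number_of_nodes()
manual_closeness = {}
for node in G.nodes():
    lengths = nx.shortest_path_length(G, source=node)    # dict: target -> shortest distance
    manual_closeness[node] = (n - 1) / float(sum(lengths.values()))
print(manual_closeness)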
End of explanation
d = nx.eigenvector_centrality(G)
title = "Eigenvector Centrality"
print_network(d,title)
plot_network(d,title)
Explanation: Eigenvector Centrality
The underlying assumption of the Eigenvector centrality is that a node's importance is determined by how important its neighbors are.
For instance, having more influential friends (betweenness centrality) can be more important than simply having a larger number of friends (degree centrality).
Now we can observe that Luke is back to being more important than Eshin (who had higher betweenness centrality), because Luke is friends with Eshin, who also has high centrality.
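Under the hood this is just the leading eigenvector of the adjacency matrix, which a few rounds of power iteration recover (a rough sketch; the values should closely track nx.eigenvector_centrality(G)):
import numpy as np
nodes = list(G.nodes())
A = np.array([[1.0 if G.has_edge(u, v) else 0.0 for v in nodes] for u in nodes])   # adjacency matrix
x = np.ones(len(nodes))
for _ in range(100):              # power iteration: multiply by A, then renormalize
    x = A.dot(x)
    x = x / np.linalg.norm(x)
print(dict(zip(nodes, np.round(x, 3))))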
End of explanation
degree_sequence=sorted([d for n,d in G.degree().iteritems()], reverse=True) # degree sequence
G2=nx.complete_graph(5)
degree_sequence2=sorted([d for n,d in G2.degree().iteritems()], reverse=True) # degree sequence
fig, [(ax1,ax2),(ax3,ax4)] = plt.subplots(2,2,figsize=(15,10))
ax1.hist(degree_sequence,bins=range(0,7),rwidth=.8)
ax1.set_title("Degree Histogram for Graph 1",fontsize=16)
ax1.set_ylabel("Count",fontsize=14)
ax1.set_xlabel("Degree",fontsize=14)
ax1.set_xticks([d+0.4 for d in degree_sequence])
ax1.set_xticklabels([d for d in degree_sequence])
ax1.set_ylim((0,6))
nx.draw(G,pos=nx.circular_layout(G),ax=ax2)
ax2.set_title('Network 1: Graph of our class',fontsize=16)
ax3.hist(degree_sequence2,bins=range(0,7),rwidth=.8)
ax3.set_title("Degree Histogram for Complete Graph",fontsize=16)
ax3.set_ylabel("Count",fontsize=14)
ax3.set_xlabel("Degree",fontsize=14)
ax3.set_xticks([d+0.4 for d in degree_sequence2])
ax3.set_xticklabels([d for d in degree_sequence2])
ax3.set_ylim((0,6))
nx.draw(G2,pos=nx.circular_layout(G2),ax=ax4)
ax4.set_title('Network 2: Graph of a Complete Graph',fontsize=16)
plt.tight_layout(pad=0.4, w_pad=0.5, h_pad=1.0)
plt.show()
Explanation: Macromeasures
Macromeasures are useful to understand the network as a whole or to compare networks.
We will cover three fundamental measures of the network structure.
1. Degree distribution
2. Average shortest path
3. Clustering Coefficients
Degree Distribution
One fundamental characteristic is the distribution of degrees, i.e., the number of edges for each node, which we can easily plot.
From this distribution we can discern attributes of the network, such as whether everyone is connected to everybody, whether there are many isolated nodes, and so on.
End of explanation
print 'Network 1 average shortest path: ' + str(round(nx.average_shortest_path_length(G),2))
print 'Network 2 average shortest path: ' + str(nx.average_shortest_path_length(G2))
Explanation: Average Shortest Path
The average shortest path length is the average, over every pair of nodes in the network, of the smallest number of edges needed to connect them.
This can be a measure of how efficiently/quickly the network distributes information, rumors, disease, etc.
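Spelled out by hand (a sketch; for this connected graph it should reproduce nx.average_shortest_path_length(G)):
lengths = []
for node in G.nodes():
    spl = nx.shortest_path_length(G, source=node)    # distances from this node to every other node
    lengths += [l for target, l in spl.items() if target != node]
print(sum(lengths) / float(len(lengths)))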
End of explanation
d = nx.triangles(G)
print_network(d,'Number of cliques(triangles)')
print
print 'Transitivity : ' + str(nx.transitivity(G))
plot_network(d,'Transitivity')
Explanation: Clustering coefficient.
Another way to gauge how tight-knit a network is involves looking at how clustered it is.
To estimate clustering, we start from cliques, which can be thought of as subnetworks of three nodes forming a triangle.
We can first calculate, for each node, how many cliques (triangles) it is a member of.
Then we calculate transitivity: the fraction of connected triples of nodes (potential triangles) that actually close into triangles.
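A small numerical cross-check of that definition (a sketch; the result should agree with nx.transitivity(G)):
tri = nx.triangles(G)                                                     # per-node triangle membership counts
n_triples = sum(d * (d - 1) / 2.0 for d in dict(G.degree()).values())     # connected triples (potential triangles)
print(sum(tri.values()) / n_triples)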
End of explanation
d = nx.clustering(G)
print_network(d,'Clustering Coefficients per node')
print
print 'Average clustering coefficient : ' + str(nx.average_clustering(G))
Explanation: A similar approach is to calculate clustering coefficient for each node and for the graph.
Clustering measures how likely it is, if A has edges to B and C, that B and C are also connected.
For instance, Charlie forms a triangle with Luke and Jin, but Sam and Heidi don't form a single triangle.
We can calculate this for each node and then get an average value overall to get the average clustering coefficient.
End of explanation |
13,918 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Annotating continuous data
This tutorial describes adding annotations to a ~mne.io.Raw object,
and how annotations are used in later stages of data processing.
Step1: ~mne.Annotations in MNE-Python are a way of storing short strings of
information about temporal spans of a ~mne.io.Raw object. Below the
surface, ~mne.Annotations are list-like <list> objects,
where each element comprises three pieces of information
Step2: Notice that orig_time is None, because we haven't specified it. In
those cases, when you add the annotations to a ~mne.io.Raw object,
it is assumed that the orig_time matches the time of the first sample of
the recording, so orig_time will be set to match the recording
measurement date (raw.info['meas_date']).
Step3: Since the example data comes from a Neuromag system that starts counting
sample numbers before the recording begins, adding my_annot to the
~mne.io.Raw object also involved another automatic change
Step4: If you know that your annotation onsets are relative to some other time, you
can set orig_time before you call
Step5: <div class="alert alert-info"><h4>Note</h4><p>If your annotations fall outside the range of data times in the
`~mne.io.Raw` object, the annotations outside the data range will
not be added to ``raw.annotations``, and a warning will be issued.</p></div>
Now that your annotations have been added to a ~mne.io.Raw object,
you can see them when you visualize the ~mne.io.Raw object
Step6: The three annotations appear as differently colored rectangles because they
have different description values (which are printed along the top
edge of the plot area). Notice also that colored spans appear in the small
scroll bar at the bottom of the plot window, making it easy to quickly view
where in a ~mne.io.Raw object the annotations are so you can easily
browse through the data to find and examine them.
Annotating Raw objects interactively
Annotations can also be added to a ~mne.io.Raw object interactively
by clicking-and-dragging the mouse in the plot window. To do this, you must
first enter "annotation mode" by pressing
Step7: The colored rings are clickable, and determine which existing label will be
created by the next click-and-drag operation in the main plot window. New
annotation descriptions can be added by typing the new description,
clicking the
Step8: Notice that it is possible to create overlapping annotations, even when they
share the same description. This is not possible when annotating
interactively; click-and-dragging to create a new annotation that overlaps
with an existing annotation with the same description will cause the old and
new annotations to be merged.
Individual annotations can be accessed by indexing an
~mne.Annotations object, and subsets of the annotations can be
achieved by either slicing or indexing with a list, tuple, or array of
indices
Step9: You can also iterate over the annotations within an ~mne.Annotations
object
Step10: Note that iterating, indexing and slicing ~mne.Annotations all
return a copy, so changes to an indexed, sliced, or iterated element will not
modify the original ~mne.Annotations object.
Step11: Reading and writing Annotations to/from a file
~mne.Annotations objects have a | Python Code:
import os
from datetime import timedelta
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
raw.crop(tmax=60).load_data()
Explanation: Annotating continuous data
This tutorial describes adding annotations to a ~mne.io.Raw object,
and how annotations are used in later stages of data processing.
:depth: 1
As usual we'll start by importing the modules we need, loading some
example data <sample-dataset>, and (since we won't actually analyze the
raw data in this tutorial) cropping the ~mne.io.Raw object to just 60
seconds before loading it into RAM to save memory:
End of explanation
my_annot = mne.Annotations(onset=[3, 5, 7],
duration=[1, 0.5, 0.25],
description=['AAA', 'BBB', 'CCC'])
print(my_annot)
Explanation: ~mne.Annotations in MNE-Python are a way of storing short strings of
information about temporal spans of a ~mne.io.Raw object. Below the
surface, ~mne.Annotations are list-like <list> objects,
where each element comprises three pieces of information: an onset time
(in seconds), a duration (also in seconds), and a description (a text
string). Additionally, the ~mne.Annotations object itself also keeps
track of orig_time, which is a POSIX timestamp_ denoting a real-world
time relative to which the annotation onsets should be interpreted.
Creating annotations programmatically
If you know in advance what spans of the ~mne.io.Raw object you want
to annotate, ~mne.Annotations can be created programmatically, and
you can even pass lists or arrays to the ~mne.Annotations
constructor to annotate multiple spans at once:
End of explanation
raw.set_annotations(my_annot)
print(raw.annotations)
# the annotations' orig_time should now match the recording's measurement date:
meas_date = raw.info['meas_date']
orig_time = raw.annotations.orig_time
print(meas_date == orig_time)
Explanation: Notice that orig_time is None, because we haven't specified it. In
those cases, when you add the annotations to a ~mne.io.Raw object,
it is assumed that the orig_time matches the time of the first sample of
the recording, so orig_time will be set to match the recording
measurement date (raw.info['meas_date']).
End of explanation
time_of_first_sample = raw.first_samp / raw.info['sfreq']
print(my_annot.onset + time_of_first_sample)
print(raw.annotations.onset)
Explanation: Since the example data comes from a Neuromag system that starts counting
sample numbers before the recording begins, adding my_annot to the
~mne.io.Raw object also involved another automatic change: an offset
equalling the time of the first recorded sample (raw.first_samp /
raw.info['sfreq']) was added to the onset values of each annotation
(see time-as-index for more info on raw.first_samp):
End of explanation
time_format = '%Y-%m-%d %H:%M:%S.%f'
new_orig_time = (meas_date + timedelta(seconds=50)).strftime(time_format)
print(new_orig_time)
later_annot = mne.Annotations(onset=[3, 5, 7],
duration=[1, 0.5, 0.25],
description=['DDD', 'EEE', 'FFF'],
orig_time=new_orig_time)
raw2 = raw.copy().set_annotations(later_annot)
print(later_annot.onset)
print(raw2.annotations.onset)
Explanation: If you know that your annotation onsets are relative to some other time, you
can set orig_time before you call :meth:~mne.io.Raw.set_annotations,
and the onset times will get adjusted based on the time difference between
your specified orig_time and raw.info['meas_date'], but without the
additional adjustment for raw.first_samp. orig_time can be specified
in various ways (see the documentation of ~mne.Annotations for the
options); here we'll use an ISO 8601_ formatted string, and set it to be 50
seconds later than raw.info['meas_date'].
End of explanation
fig = raw.plot(start=2, duration=6)
Explanation: <div class="alert alert-info"><h4>Note</h4><p>If your annotations fall outside the range of data times in the
`~mne.io.Raw` object, the annotations outside the data range will
not be added to ``raw.annotations``, and a warning will be issued.</p></div>
Now that your annotations have been added to a ~mne.io.Raw object,
you can see them when you visualize the ~mne.io.Raw object:
End of explanation
fig.canvas.key_press_event('a')
Explanation: The three annotations appear as differently colored rectangles because they
have different description values (which are printed along the top
edge of the plot area). Notice also that colored spans appear in the small
scroll bar at the bottom of the plot window, making it easy to quickly view
where in a ~mne.io.Raw object the annotations are so you can easily
browse through the data to find and examine them.
Annotating Raw objects interactively
Annotations can also be added to a ~mne.io.Raw object interactively
by clicking-and-dragging the mouse in the plot window. To do this, you must
first enter "annotation mode" by pressing :kbd:a while the plot window is
focused; this will bring up the annotation controls window:
End of explanation
new_annot = mne.Annotations(onset=3.75, duration=0.75, description='AAA')
raw.set_annotations(my_annot + new_annot)
raw.plot(start=2, duration=6)
Explanation: The colored rings are clickable, and determine which existing label will be
created by the next click-and-drag operation in the main plot window. New
annotation descriptions can be added by typing the new description,
then clicking the :guilabel:Add label button; the new description will be added
to the list of descriptions and automatically selected.
During interactive annotation it is also possible to adjust the start and end
times of existing annotations, by clicking-and-dragging on the left or right
edges of the highlighting rectangle corresponding to that annotation.
<div class="alert alert-danger"><h4>Warning</h4><p>Calling :meth:`~mne.io.Raw.set_annotations` **replaces** any annotations
currently stored in the `~mne.io.Raw` object, so be careful when
working with annotations that were created interactively (you could lose
a lot of work if you accidentally overwrite your interactive
annotations). A good safeguard is to run
``interactive_annot = raw.annotations`` after you finish an interactive
annotation session, so that the annotations are stored in a separate
variable outside the `~mne.io.Raw` object.</p></div>
How annotations affect preprocessing and analysis
You may have noticed that the description for new labels in the annotation
controls window defaults to BAD_. The reason for this is that annotation
is often used to mark bad temporal spans of data (such as movement artifacts
or environmental interference that cannot be removed in other ways such as
projection <tut-projectors-background> or filtering). Several
MNE-Python operations
are "annotation aware" and will avoid using data that is annotated with a
description that begins with "bad" or "BAD"; such operations typically have a
boolean reject_by_annotation parameter. Examples of such operations are
independent components analysis (mne.preprocessing.ICA), functions
for finding heartbeat and blink artifacts
(:func:~mne.preprocessing.find_ecg_events,
:func:~mne.preprocessing.find_eog_events), and creation of epoched data
from continuous data (mne.Epochs). See tut-reject-data-spans
for details.
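For example (a minimal sketch; the stim channel, tmin and tmax here are illustrative assumptions, not part of this tutorial's pipeline):
events = mne.find_events(raw, stim_channel='STI 014')
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, reject_by_annotation=True)  # drops epochs overlapping 'bad'/'BAD' spans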
Operations on Annotations objects
~mne.Annotations objects can be combined by simply adding them with
the + operator, as long as they share the same orig_time:
End of explanation
print(raw.annotations[0]) # just the first annotation
print(raw.annotations[:2]) # the first two annotations
print(raw.annotations[(3, 2)]) # the fourth and third annotations
Explanation: Notice that it is possible to create overlapping annotations, even when they
share the same description. This is not possible when annotating
interactively; click-and-dragging to create a new annotation that overlaps
with an existing annotation with the same description will cause the old and
new annotations to be merged.
Individual annotations can be accessed by indexing an
~mne.Annotations object, and subsets of the annotations can be
achieved by either slicing or indexing with a list, tuple, or array of
indices:
End of explanation
for ann in raw.annotations:
descr = ann['description']
start = ann['onset']
end = ann['onset'] + ann['duration']
print("'{}' goes from {} to {}".format(descr, start, end))
Explanation: You can also iterate over the annotations within an ~mne.Annotations
object:
End of explanation
# later_annot WILL be changed, because we're modifying the first element of
# later_annot.onset directly:
later_annot.onset[0] = 99
# later_annot WILL NOT be changed, because later_annot[0] returns a copy
# before the 'onset' field is changed:
later_annot[0]['onset'] = 77
print(later_annot[0]['onset'])
Explanation: Note that iterating, indexing and slicing ~mne.Annotations all
return a copy, so changes to an indexed, sliced, or iterated element will not
modify the original ~mne.Annotations object.
End of explanation
raw.annotations.save('saved-annotations.csv')
annot_from_file = mne.read_annotations('saved-annotations.csv')
print(annot_from_file)
Explanation: Reading and writing Annotations to/from a file
~mne.Annotations objects have a :meth:~mne.Annotations.save method
which can write :file:.fif, :file:.csv, and :file:.txt formats (the
format to write is inferred from the file extension in the filename you
provide). There is a corresponding :func:~mne.read_annotations function to
load them from disk:
End of explanation |
13,919 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Image Classification With Keras Convolutional Neural Network
Step2: Configuration and Hyperparameters
First let's go ahead and define our custom early stopping class, which will be used in the hyperparameters
Step3: Then we'll set all the relevant paths and configurations
Step4: Helper Functions For Loading Data
Step5: Helper Function For Plotting Images
Function used to plot 9 images in a 3x3 grid (or fewer, depending on how many images are passed), and writing the true and predicted classes below each image.
Step6: Build Model
We use the VGG16 model and pretrained weights for its simplicity and consistent performance
Step7: Before we start training, we use the bottleneck method to extract features from the images in our dataset. We save them as .npy files.
Step8: Then we train a base model on these features.
Step9: Main Training Function
Step10: Helper Functions For Making Predictions
Step11: Run the training and prediction code
Step12: Performance Metrics
Step13: Model Summary & Feature Visualization
Step14: Save Model
Step15: MISC
Script for adding augmented images to dataset using keras ImageDataGenerator | Python Code:
from __future__ import print_function, division
import numpy as np
import random
import os
import glob
import cv2
import datetime
import pandas as pd
import time
import h5py
import csv
import warnings
from scipy.misc import imresize, imsave
from sklearn.cross_validation import KFold, train_test_split
from sklearn.metrics import log_loss, confusion_matrix
from sklearn.utils import shuffle
from PIL import Image, ImageChops, ImageOps
import matplotlib.pyplot as plt
from keras import backend as K
from keras.callbacks import EarlyStopping, Callback
from keras.utils import np_utils
from keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
from keras import optimizers
from keras.models import Sequential, model_from_json
from keras.layers import Convolution2D, MaxPooling2D, ZeroPadding2D, Activation, Dropout, Flatten, Dense
%matplotlib inline
Explanation: Image Classification With Keras Convolutional Neural Network
End of explanation
class EarlyStoppingByLossVal(Callback):
"""Custom class to set a val loss target for early stopping."""
def __init__(self, monitor='val_loss', value=0.45, verbose=0):
super(Callback, self).__init__()
self.monitor = monitor
self.value = value
self.verbose = verbose
def on_epoch_end(self, epoch, logs={}):
current = logs.get(self.monitor)
if current is None:
warnings.warn("Early stopping requires %s available!" % self.monitor, RuntimeWarning)
if current < self.value:
if self.verbose > 0:
print("Epoch %05d: early stopping THR" % epoch)
self.model.stop_training = True
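# example usage (illustrative): pass callbacks=[EarlyStoppingByLossVal(value=0.3)] to model.fit()
# to halt training as soon as val_loss drops below 0.3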
Explanation: Configuration and Hyperparameters
First let's go ahead and define our custom early stopping class, which will be used in the hyperparameters
End of explanation
### paths to training and testing data
train_path = 'C:/Projects/playground/kaggle/dogs_vs_cats/data_no_split/train'
test_path = 'C:/Projects/playground/kaggle/dogs_vs_cats/data_no_split/test'
### path for preloaded vgg16 weights
weights_path = 'C:/Projects/playground/kaggle/dogs_vs_cats/vgg16_weights.h5'
bottleneck_model_weights_path = 'C:/Projects/playground/kaggle/dogs_vs_cats/bottleneck_weights.h5'
### settings for keras early stopping callback
early_stopping = EarlyStopping(monitor='val_loss', patience=1, mode='auto')
# early_stopping = EarlyStoppingByLossVal(verbose=2, value=0.3)
### other hyperparameters
n_folds = 2
batch_size = 16
nb_epoch = 50
bottleneck_epoch = 3 # used when training bottleneck model
val_split = .15 # if not using kfold cv
classes = ["dog", "cat"]
num_classes = len(classes)
### image dimensions
img_width, img_height = 250, 250
num_channels = 3
Explanation: Then we'll set all the relevant paths and configurations
End of explanation
def load_images(path):
img = cv2.imread(path)
resized = cv2.resize(img, (img_width, img_height), interpolation=cv2.INTER_LINEAR)
return resized
def load_train():
X_train = []
X_train_id = []
y_train = []
start_time = time.time()
print('Loading training images...')
folders = ["dogs", "cats"]
for fld in folders:
index = folders.index(fld)
print('Loading {} files (Index: {})'.format(fld, index))
path = os.path.join(train_path, fld, '*g')
files = glob.glob(path)
for fl in files:
flbase = os.path.basename(fl)
img = load_images(fl)
X_train.append(img)
X_train_id.append(flbase)
y_train.append(index)
print('Training data load time: {} seconds'.format(round(time.time() - start_time, 2)))
return X_train, y_train, X_train_id
def load_test():
path = os.path.join(test_path, 'test', '*.jpg')
files = sorted(glob.glob(path))
X_test = []
X_test_id = []
for fl in files:
flbase = os.path.basename(fl)
img = load_images(fl)
X_test.append(img)
X_test_id.append(flbase)
return X_test, X_test_id
def normalize_train_data():
train_data, train_target, train_id = load_train()
train_data = np.array(train_data, dtype=np.uint8)
train_target = np.array(train_target, dtype=np.uint8)
train_data = train_data.transpose((0, 3, 1, 2))
train_data = train_data.astype('float32')
train_data = train_data / 255
train_target = np_utils.to_categorical(train_target, num_classes)
print('Shape of training data:', train_data.shape)
return train_data, train_target, train_id
def normalize_test_data():
start_time = time.time()
test_data, test_id = load_test()
test_data = np.array(test_data, dtype=np.uint8)
test_data = test_data.transpose((0, 3, 1, 2))
test_data = test_data.astype('float32')
test_data = test_data / 255
print('Shape of testing data:', test_data.shape)
return test_data, test_id
train_data, train_target, train_id = normalize_train_data()
Explanation: Helper Functions For Loading Data
End of explanation
def plot_images(images, cls_true, cls_pred=None):
if len(images) == 0:
print("no images to show")
return
else:
random_indices = random.sample(range(len(images)), min(len(images), 9))
images, cls_true = zip(*[(images[i], cls_true[i]) for i in random_indices])
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
image = images[i].transpose((1, 2, 0))
ax.imshow(image)
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
Explanation: Helper Function For Plotting Images
Function used to plot 9 images in a 3x3 grid (or fewer, depending on how many images are passed), and writing the true and predicted classes below each image.
End of explanation
def build_model():
model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=(3, img_width, img_height)))
model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
# load the weights of the VGG16 networks
f = h5py.File(weights_path)
for k in range(f.attrs['nb_layers']):
if k >= len(model.layers):
# we don't look at the last (fully-connected) layers in the savefile
break
g = f['layer_{}'.format(k)]
weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
model.layers[k].set_weights(weights)
f.close()
# build a classifier model to put on top of the convolutional model
bottleneck_model = Sequential()
bottleneck_model.add(Flatten(input_shape=model.output_shape[1:]))
bottleneck_model.add(Dense(256, activation='relu'))
bottleneck_model.add(Dropout(0.5))
bottleneck_model.add(Dense(num_classes, activation='softmax'))
# load weights from bottleneck model
bottleneck_model.load_weights(bottleneck_model_weights_path)
# add the model on top of the convolutional base
model.add(bottleneck_model)
# set the first 25 layers (up to the last conv block)
# to non-trainable (weights will not be updated)
for layer in model.layers[:25]:
layer.trainable = False
# compile the model with a SGD/momentum optimizer
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.SGD(lr=1e-4, momentum=0.9))
return model
Explanation: Build Model
We use the VGG16 model and pretrained weights for its simplicity and consistent performance
End of explanation
def save_bottleneck_features():
datagen = ImageDataGenerator(rescale=1./255)
# build the VGG16 network
model = Sequential()
model.add(ZeroPadding2D((1, 1), input_shape=(3, img_width, img_height)))
model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(64, 3, 3, activation='relu', name='conv1_2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(128, 3, 3, activation='relu', name='conv2_2'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(256, 3, 3, activation='relu', name='conv3_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv4_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_1'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_2'))
model.add(ZeroPadding2D((1, 1)))
model.add(Convolution2D(512, 3, 3, activation='relu', name='conv5_3'))
model.add(MaxPooling2D((2, 2), strides=(2, 2)))
# load the weights of the VGG16 networks
f = h5py.File(weights_path)
for k in range(f.attrs['nb_layers']):
if k >= len(model.layers):
# we don't look at the last (fully-connected) layers in the savefile
break
g = f['layer_{}'.format(k)]
weights = [g['param_{}'.format(p)] for p in range(g.attrs['nb_params'])]
model.layers[k].set_weights(weights)
f.close()
print('Model loaded.')
# create validation split
X_train, X_valid, Y_train, Y_valid = train_test_split(train_data, train_target, test_size=val_split)
# create generator for train data
generator = datagen.flow(
X_train,
Y_train,
batch_size=batch_size,
shuffle=False)
# save train features to .npy file
bottleneck_features_train = model.predict_generator(generator, X_train.shape[0])
np.save(open('bottleneck_features_train.npy', 'wb'), bottleneck_features_train)
# create generator for validation data
generator = datagen.flow(
X_valid,
Y_valid,
batch_size=batch_size,
shuffle=False)
# save validation features to .npy file
bottleneck_features_validation = model.predict_generator(generator, X_valid.shape[0])
np.save(open('bottleneck_features_validation.npy', 'wb'), bottleneck_features_validation)
return Y_train, Y_valid
Explanation: Before we start training, we use the bottleneck method to extract features from the images in our dataset. We save them as .npy files.
End of explanation
def train_bottleneck_model():
train_labels, validation_labels = save_bottleneck_features()
train_data = np.load(open('bottleneck_features_train.npy', 'rb'))
validation_data = np.load(open('bottleneck_features_validation.npy', 'rb'))
model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
model.fit(train_data,
train_labels,
nb_epoch=bottleneck_epoch,
batch_size=batch_size,
validation_data=(validation_data, validation_labels),
callbacks=[early_stopping],
verbose=2)
model.save_weights(bottleneck_model_weights_path)
return model
# train_bottleneck_model() # leave this commented out once it's been done once -- takes a while to run
Explanation: Then we train a base model on these features.
End of explanation
def run_train(n_folds=n_folds):
num_fold = 0
# sum_score = 0
models = []
callbacks = [
early_stopping
]
### if we just want to train a single model without cross-validation, set n_folds to 0 or None
if not n_folds:
model = build_model()
X_train, X_valid, Y_train, Y_valid = train_test_split(train_data, train_target, test_size=val_split)
print('Training...')
print('Size of train split: ', len(X_train), len(Y_train))
print('Size of validation split: ', len(X_valid), len(Y_valid))
model.fit(X_train,
Y_train,
batch_size=batch_size,
nb_epoch=nb_epoch,
shuffle=True,
verbose=1,
validation_data=(X_valid, Y_valid),
callbacks=callbacks)
predictions_valid = model.predict(X_valid.astype('float32'), batch_size=batch_size, verbose=2)
# score = log_loss(Y_valid, predictions_valid)
# print('Loss: ', score)
# sum_score += score
models.append(model)
else:
kf = KFold(len(train_id), n_folds=n_folds, shuffle=True, random_state=7)
for train_index, test_index in kf:
model = build_model()
X_train = train_data[train_index]
Y_train = train_target[train_index]
X_valid = train_data[test_index]
Y_valid = train_target[test_index]
num_fold += 1
print('Training on fold {} of {}...'.format(num_fold, n_folds))
print('Size of train split: ', len(X_train), len(Y_train))
print('Size of validation split: ', len(X_valid), len(Y_valid))
model.fit(X_train,
Y_train,
batch_size=batch_size,
nb_epoch=nb_epoch,
shuffle=True,
verbose=1,
validation_data=(X_valid, Y_valid),
callbacks=callbacks)
# predictions_valid = model.predict(X_valid.astype('float32'), batch_size=batch_size, verbose=2)
# score = log_loss(Y_valid, predictions_valid)
# print('Loss for fold {0}: '.format(num_fold), score)
# sum_score += score*len(test_index)
models.append(model)
# score = sum_score/len(train_data)
# print("Average loss across folds: ", score)
info_string = "{0}fold_{1}x{2}_{3}epoch_patience_vgg16".format(n_folds, img_width, img_height, nb_epoch)
return info_string, models
Explanation: Main Training Function
End of explanation
def create_submission(predictions, test_id, info):
result = pd.DataFrame(predictions, columns=classes)
result.loc[:, 'id'] = pd.Series(test_id, index=result.index)
result = result[["id", "dog"]].rename(columns={"dog": "label"})
now = datetime.datetime.now()
sub_file = info + '.csv'
result.to_csv(sub_file, index=False)
def merge_several_folds_mean(data, n_folds):
a = np.array(data[0])
for i in range(1, n_folds):
a += np.array(data[i])
a /= n_folds
return a.tolist()
def ensemble_predict(info_string, models):
num_fold = 0
yfull_test = []
test_id = []
n_folds = len(models)
for i in range(n_folds):
model = models[i]
num_fold += 1
print('Predicting on fold {} of {}'.format(num_fold, n_folds))
test_data, test_id = normalize_test_data()
test_prediction = model.predict(test_data, batch_size=batch_size, verbose=2)
yfull_test.append(test_prediction)
preds = merge_several_folds_mean(yfull_test, n_folds)
create_submission(preds, test_id, info_string)
Explanation: Helper Functions For Making Predictions
End of explanation
info_string, models = run_train()
ensemble_predict(info_string, models)
Explanation: Run the training and prediction code
End of explanation
model = random.choice(models)
### or choose one manually...
# model = models[1]
# perm = np.arange(int(val_split*len(train_target)))
# np.random.shuffle(perm)
# sample_valid = train_data[perm]
# labels_valid = train_target[perm]
ixs = [random.randint(0, len(train_target) - 1) for i in range(1000)]  # randint is inclusive at both ends
sample_valid = np.array([train_data[ix] for ix in ixs])
labels_valid = np.array([train_target[ix] for ix in ixs])
def plot_example_errors(cls_pred, correct):
# This function is called from print_validation_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the validation set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the validation set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the validation set that have been
# incorrectly classified.
images = sample_valid[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
labels = np.array([classes[np.argmax(x)] for x in labels_valid])
cls_true = labels[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
def plot_confusion_matrix(cls_pred):
# This is called from print_validation_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the validation set.
# Get the true classifications for the test-set.
cls_true = [classes[np.argmax(x)] for x in labels_valid]
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred,
labels=classes)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, classes)
plt.yticks(tick_marks, classes)
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
def print_validation_accuracy(show_example_errors=False,
show_confusion_matrix=False):
test_batch_size = 4
# Number of images in the validation set.
num_test = len(labels_valid)
cls_pred = np.zeros(shape=num_test, dtype=np.int)
i = 0
# iterate through batches and create list of predictions
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = sample_valid[i:j, :]
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = [np.argmax(x) for x in model.predict(images)]
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the validation set.
cls_pred = np.array([classes[x] for x in cls_pred])
cls_true = np.array([classes[np.argmax(x)] for x in labels_valid])
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on validation set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
print_validation_accuracy(show_example_errors=True, show_confusion_matrix=True)
Explanation: Performance Metrics
End of explanation
model.summary()
layer_name = 'conv5_3'
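# feature visualization: for each filter in this layer we run gradient ascent on a random
# input image to maximize the filter's mean activation, then keep and stitch the best ones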
# util function to convert a tensor into a valid image
def deprocess_image(x):
# normalize tensor: center on 0., ensure std is 0.1
x -= x.mean()
x /= (x.std() + 1e-5)
x *= 0.1
# clip to [0, 1]
x += 0.5
x = np.clip(x, 0, 1)
# convert to RGB array
x *= 255
if K.image_dim_ordering() == 'th':
x = x.transpose((1, 2, 0))
x = np.clip(x, 0, 255).astype('uint8')
return x
# this is the placeholder for the input images
input_img = model.input
# get the symbolic outputs of each "key" layer (we gave them unique names).
layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
def normalize(x):
# utility function to normalize a tensor by its L2 norm
return x / (K.sqrt(K.mean(K.square(x))) + 1e-5)
kept_filters = []
for filter_index in range(0, 512):
print('Processing filter %d' % filter_index)
start_time = time.time()
# we build a loss function that maximizes the activation
# of the nth filter of the layer considered
layer_output = layer_dict[layer_name].output
if K.image_dim_ordering() == 'th':
loss = K.mean(layer_output[:, filter_index, :, :])
else:
loss = K.mean(layer_output[:, :, :, filter_index])
# we compute the gradient of the input picture wrt this loss
grads = K.gradients(loss, input_img)[0]
# normalization trick: we normalize the gradient
grads = normalize(grads)
# this function returns the loss and grads given the input picture
iterate = K.function([input_img], [loss, grads])
# step size for gradient ascent
step = 1.
# we start from a gray image with some random noise
if K.image_dim_ordering() == 'th':
input_img_data = np.random.random((1, 3, img_width, img_height))
else:
input_img_data = np.random.random((1, img_width, img_height, 3))
input_img_data = (input_img_data - 0.5) * 20 + 128
# we run gradient ascent for 20 steps
for i in range(20):
loss_value, grads_value = iterate([input_img_data])
input_img_data += grads_value * step
if loss_value <= 0.:
# some filters get stuck to 0, we can skip them
break
# decode the resulting input image
if loss_value > 0:
img = deprocess_image(input_img_data[0])
kept_filters.append((img, loss_value))
end_time = time.time()
print('Filter %d processed in %ds' % (filter_index, end_time - start_time))
# we will stich the best n**2 filters on a n x n grid.
n = 5
# the filters that have the highest loss are assumed to be better-looking.
# we will only keep the top n**2 filters.
kept_filters.sort(key=lambda x: x[1], reverse=True)
kept_filters = kept_filters[:n * n]
# build a black picture with enough space for
# our n x n filters of size with a 5px margin in between
margin = 5
width = n * img_width + (n - 1) * margin
height = n * img_height + (n - 1) * margin
stitched_filters = np.zeros((width, height, 3))
# fill the picture with our saved filters
for i in range(n):
for j in range(n):
img, loss = kept_filters[i * n + j]
stitched_filters[(img_width + margin) * i: (img_width + margin) * i + img_width,
(img_height + margin) * j: (img_height + margin) * j + img_height, :] = img
# save image and display
imsave('feats.jpg', stitched_filters)
plt.imshow(stitched_filters)
Explanation: Model Summary & Feature Visualization
End of explanation
### if we like this model, save the weights
model.save_weights("favorite_model.h5")
Explanation: Save Model
End of explanation
### augmentation script
# train_path = 'C:/Projects/playground/kaggle/fish/data_aug/train/YFT/'
# ## define data preparation
# datagen = ImageDataGenerator(
# width_shift_range=.1,
# )
# ## fit parameters from data
# generator = datagen.flow_from_directory(
# train_path,
# target_size=(512, 512),
# class_mode=None,
# batch_size=335,
# shuffle=True,
# save_to_dir=train_path,
# save_prefix="aug_"
# )
# for X_batch, y_batch in generator:
# break
### Test on single image
path_to_lucy = "C:/Projects/playground/neural_style_transfer/images/inputs/content/loo_grass.jpg"
img = load_img(path_to_lucy)
plt.imshow(img)
img = imresize(img, (img_width, img_height))
img = img_to_array(img)
img.shape
img = img.reshape(1, 3, 250, 250)
print("This is a {0}.".format(classes[model.predict_classes(img)[0]]))
Explanation: MISC
Script for adding augmented images to dataset using keras ImageDataGenerator
End of explanation |
13,920 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PanSTARRS - WISE cross-match
Step1: Load the data
Load the catalogues
Step2: Restrict the study to the well sampled area
Step3: Coordinates
As we will use the coordinates to make a cross-match, we need to load them
Step5: Cross-match and random cross-match
We create an instance of Q_0 using the two catalogues and the area as input. It will compute the $Q_0$ for different radii.
The following function is not used, but it shows the internal code used to compute the $Q_0$.
Step6: $Q_0$ dependence on the radius
We will iterate 10 times for each radius. However, the error is so small that a direct computation can be accurate to 4 significant figures.
Step7: The radii tested range from 1 to 25 | Python Code:
import numpy as np
from astropy.table import Table
from astropy import units as u
from astropy.coordinates import SkyCoord, search_around_sky
from mltier1 import generate_random_catalogue, Field, Q_0
%load_ext autoreload
%pylab inline
field = Field(170.0, 190.0, 45.5, 56.5)
Explanation: PanSTARRS - WISE cross-match: Compute the $Q_0$
End of explanation
panstarrs_full = Table.read("panstarrs_u2.fits")
wise_full = Table.read("wise_u2.fits")
Explanation: Load the data
Load the catalogues
End of explanation
panstarrs = field.filter_catalogue(
panstarrs_full,
colnames=("raMean", "decMean"))
# Free memory
del panstarrs_full
wise = field.filter_catalogue(
wise_full,
colnames=("raWise", "decWise"))
# Free memory
del wise_full
Explanation: Restrict the study to the well sampled area
End of explanation
coords_panstarrs = SkyCoord(panstarrs['raMean'], panstarrs['decMean'], unit=(u.deg, u.deg), frame='icrs')
coords_wise = SkyCoord(wise['raWise'], wise['decWise'], unit=(u.deg, u.deg), frame='icrs')
Explanation: Coordinates
As we will use the coordinates to make a cross-match, we need to load them
End of explanation
# Example function (not used, we use a class that contains this code)
def q_0_r(coords_wise, coords_panstarrs, field, radius=5):
"""Compute the Q_0 for a given radius"""
random_wise = field.random_catalogue(len(coords_wise))
idx_random_wise, idx_panstarrs, d2d, d3d = search_around_sky(
random_wise, coords_panstarrs, radius*u.arcsec)
nomatch_random = len(coords_wise) - len(np.unique(idx_random_wise))
idx_wise, idx_panstarrs, d2d, d3d = search_around_sky(
coords_wise, coords_panstarrs, radius*u.arcsec)
nomatch_wise = len(coords_wise) - len(np.unique(idx_wise))
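# Q_0 estimate: compare the unmatched (blank) fraction of the real catalogue with the
# blank fraction obtained for a random catalogue covering the same area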
return (1. - float(nomatch_wise)/float(nomatch_random))
q_0_comp = Q_0(coords_wise, coords_panstarrs, field)
q_0_comp(radius=5)
Explanation: Cross-match and random cross-match
We create an instance of Q_0 using the two catalogues and the area as input. It will compute the $Q_0$ for different radii.
The following function is not used, but it shows the internal code used to compute the $Q_0$.
End of explanation
n_iter = 10
Explanation: $Q_0$ dependence on the radius
We will iterate 10 times for each radius. However, the error is so small that a direct computation can be accurate to 4 significant figures.
End of explanation
rads = list(range(1,26))
q_0_rad = []
for radius in rads:
q_0_rad_aux = []
for i in range(n_iter):
out = q_0_comp(radius=radius)
q_0_rad_aux.append(out)
q_0_rad.append(np.mean(q_0_rad_aux))
print("{:2d} {:7.5f} +/- {:7.5f} [{:7.5f} {:7.5f}]".format(radius,
np.mean(q_0_rad_aux), np.std(q_0_rad_aux),
np.min(q_0_rad_aux), np.max(q_0_rad_aux)))
plt.rcParams["figure.figsize"] = (5,5)
plot(rads, q_0_rad)
xlabel("Radius (arcsecs)")
ylabel("$Q_0$")
ylim([0, 1]);
Explanation: The radii tested range from 1 to 25
End of explanation |
13,921 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-1
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adaptive grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
13,922 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Work on getting $E$ vs $\theta$ plot parameters w/ 3 extrema
Step1: Generalized Landau Model of Ferroelectric Liquid Crystals
Step2: $f(c,p) = \dfrac{1}{2}r_{c}c^{2}+\dfrac{1}{4}u_{c}c^{4}+\dfrac{1}{6}v_{c}c^{6}+\dfrac{1}{2}r_{p}p^{2}+\dfrac{1}{4}u_{p}p^{4}+\dfrac{1}{6}v_{p}p^{6}-\gamma cp-\dfrac{1}{2}ec^{2}p^{2}-Ep$
'Our' version of the equation
$P_x \neq 0 = p$
$P_y = 0$
$\xi_1 = 0$
$\xi_2 \neq 0 = c$
$r_p = \dfrac{1}{\epsilon}$
$u_p = \eta$
$\gamma = -C$
$e = \dfrac{1}{2}\Omega$
Missing
Step3: Electrically Induced Tilt in Achiral Bent-Core Liquid Crystals
Step4: Mapping
Step5: $\dfrac{\partial f}{\partial c} = 0$
Step6: Solve for $p$
Step7: Solve $\dfrac{\partial f}{\partial p} = 0$ for $E$
Step8: Sub $p(c)$ into $E(c,p)$
Step9: Define above equation for plotting
Step10: $\dfrac{\partial f}{\partial p} = 0$
Step11: Solve for $c$
Step12: Sub $c(p)$ into $p(c)$
Step13: Solve for $E(p)$
Step14: Define above equation for plotting | Python Code:
%matplotlib inline
from sympy import *
import matplotlib.pyplot as plt
import numpy as np
init_printing(use_unicode=True)
r, u, v, c, r_c, u_c, v_c, E, p, r_p, u_p, v_p, e, a, b, q, b_0, b_1, b_2, b_3, q_0, q_1, q_2, q_3, q_4, q_5, beta, rho, epsilon, delta, d, K_3, Omega, Lambda, lamda, C, mu, Gamma, tau, nu, xi, P_x, eta, varep, gamma, P_0, theta_0, z, a_0, alpha_0, alpha, T_p, T_c, T = symbols('r u v c r_c u_c v_c E p r_p u_p v_p e a b q b_0 b_1 b_2 b_3 q_0 q_1 q_2 q_3 q_4 q_5 beta rho epsilon delta d K_3 Omega Lambda lamda C mu Gamma tau nu xi_2 P_x eta varepsilon gamma P_0 theta_0 z a_0 alpha_0 alpha T_p T_c T')
eptil, atil, btil, ctil, Ctil, gtil, thetatil, Ptil, gprm, thetaprm, Pprm, Tprm, chiprm = symbols('epsilontilde atilde btilde ctilde Ctilde gtilde_{0} thetatilde_{0} Ptilde_{0} gprm thetaprm Pprm Tprm chiprm')
Explanation: Work on getting $E$ vs $\theta$ plot parameters w/ 3 extrema
End of explanation
def g1(a,b,q,xi,P_x,varep,eta,C,Omega):
return (a*xi**2)/2+(b*xi**4)/4+(q*xi**6)/6+P_x**2/(2*varep)+(eta*P_x**4)/4+C*P_x*xi-(Omega*(P_x*xi)**2)/2
g1(a,b,q,xi,P_x,varep,eta,C,Omega)
g = (a*xi**2)/2+(b*xi**4)/4+(q*xi**6)/6+P_x**2/(2*epsilon)+(eta*P_x**4)/4+C*P_x*xi-(Omega*(P_x*xi)**2)/2
g
Explanation: Generalized Landau Model of Ferroelectric Liquid Crystals
End of explanation
g.subs([(epsilon,1/r_p),(eta,u_p),(C,-gamma),(Omega,2*e),(xi,c),(P_x,p)])
Explanation: $f(c,p) = \dfrac{1}{2}r_{c}c^{2}+\dfrac{1}{4}u_{c}c^{4}+\dfrac{1}{6}v_{c}c^{6}+\dfrac{1}{2}r_{p}p^{2}+\dfrac{1}{4}u_{p}p^{4}+\dfrac{1}{6}v_{p}p^{6}-\gamma cp-\dfrac{1}{2}ec^{2}p^{2}-Ep$
'Our' version of the equation
$P_x \neq 0 = p$
$P_y = 0$
$\xi_1 = 0$
$\xi_2 \neq 0 = c$
$r_p = \dfrac{1}{\epsilon}$
$u_p = \eta$
$\gamma = -C$
$e = \dfrac{1}{2}\Omega$
Missing: $p^6$ and $E$ terms
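A minimal sketch of appending the missing terms with the symbols already defined above (illustrative only):
g_full = g + v_p*P_x**6/6 - E*P_x
g_full.subs([(epsilon,1/r_p),(eta,u_p),(C,-gamma),(Omega,2*e),(xi,c),(P_x,p)])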
End of explanation
fc = a*c**2+b*c**4+q*c**6
fp = alpha*p**2+beta*p**4+gamma*p**6-E*p
fcp = -Omega*(p*c)**2
fc
fp
fcp
Explanation: Electrically Induced Tilt in Achiral Bent-Core Liquid Crystals
End of explanation
# fc = fc.subs([(a_0*(T-T_c),a)])#,6e4),(b,10**4),(q,9e7)])
# fc
fp = fp.subs([(gamma,0)])#,(alpha_0*(T-T_p),alpha),(beta,10**4)])
fp
Explanation: Mapping:
$\dfrac{1}{2}r_c = a = a_{0}(T-T_c),\ a_{0} = 6\times10^4$ $\text{ N/K m}^2$
$\dfrac{1}{4}u_c = b = 10^4 \text{ N/m}^2$
$\dfrac{1}{6}v_c = q = 9\times10^7 \text{ N/m}^2$
$\dfrac{1}{2}r_p = \alpha = \alpha_{0}(T-T_p) = 0.6\times10^9 \text{ J m/C}^2$
$\dfrac{1}{4}u_p = \beta \approx 10^4 \dfrac{ J m^5}{C^4}$
$\dfrac{1}{6}v_p = \gamma = 0$
$e = \Omega \approx 10^{10}-10^{11} \text{ N/C}$
End of explanation
collect((fc+fp+fcp).diff(c),c)
Explanation: $\dfrac{\partial f}{\partial c} = 0$
End of explanation
pc = solve((fc+fp+fcp).diff(c),p)[1]
pc
Explanation: Solve for $p$
End of explanation
solve((fc+fp+fcp).diff(p),E)[0]
Explanation: Solve $\dfrac{\partial f}{\partial p} = 0$ for $E$
End of explanation
(solve((fc+fp+fcp).diff(p),E)[0]).subs(p,pc)
simplify(series((solve((fc+fp+fcp).diff(p),E)[0]).subs(p,pc),c,n=7))
Explanation: Sub $p(c)$ into $E(c,p)$
End of explanation
def Ecc(a,b,q,c,Omega,alpha,beta):
return [2*np.sqrt((a+2*b*c**2+3*q*c**4)/Omega)*(alpha-Omega*c**2+(2*beta/Omega)*(a+2*b*c**2+3*q*c**4)),
(Omega*a**3*(4*beta*(a/Omega)**(3/2)+2*alpha*np.sqrt(a/Omega))
+2*(a*c)**2*np.sqrt(a/Omega)*(-Omega**2*a+Omega*alpha*b+6*a*b*beta)
+a*c**4*np.sqrt(a/Omega)*(Omega*a*(-2*Omega*b+3*alpha*q)-Omega*alpha*b**2+18*a**2*beta*q+6*a*b**2*beta)
+c**6*np.sqrt(a/Omega)*(-3*q*(Omega*a)**2+Omega*a*b*(Omega*b-3*alpha*q)+Omega*alpha*b**3+18*a**2*b*beta*q-2*a*b**3*beta))/(Omega*a**3)]
def coeff0(a,alpha,beta,Omega):
return 4*beta*(a/Omega)**(3/2)+2*alpha*np.sqrt(a/Omega)
def coeff2(a,Omega,alpha,b,beta):
return 2*a**2*np.sqrt(a/Omega)*(-Omega**2*a+Omega*alpha*b+6*a*b*beta)/(Omega*a**3)
def coeff4(a, Omega, b, alpha, q, beta):
return a*np.sqrt(a/Omega)*(Omega*a*(-2*Omega*b+3*alpha*q)-Omega*alpha*b**2+18*a**2*beta*q+6*a*b**2*beta)/(Omega*a**3)
def coeff6(a, Omega, b, alpha, q, beta):
return np.sqrt(a/Omega)*(-3*q*(Omega*a)**2+Omega*a*b*(Omega*b-3*alpha*q)+Omega*alpha*b**3+18*a**2*b*beta*q-2*a*b**3*beta)/(Omega*a**3)
a = 6e5*(116-114.1)
Omega = 8e4
q = 1
b = 1.01e3
alpha = 0.5e3
beta = 1.05e6
coeff0(a,alpha,beta,Omega)
coeff2(a,Omega,alpha,b,beta)
coeff4(a, Omega, b, alpha, q, beta)
coeff6(a, Omega, b, alpha, q, beta)
plt.figure(figsize=(11,8))
plt.plot(np.linspace(-17.5,17.5,201),Ecc(a,b,q,np.linspace(-17.5,17.5,201),Omega,alpha,beta)[0],label='$\mathregular{E}$')
# plt.plot(np.linspace(-17.5,17.5,201),Ecc(a,b,q,np.linspace(-17.5,17.5,201),Omega,alpha,beta)[1])
plt.scatter(0,Ecc(a,b,q,0,Omega,alpha,beta)[0],label='$\mathregular{E_{th}}$')
plt.ylim(2.259e8,2.2595e8),plt.xlim(-4,4)
plt.xlabel(c,fontsize=18)
plt.ylabel(E,fontsize=18,rotation='horizontal',labelpad=25)
plt.legend(loc='lower right',fontsize=18);
150000+2.258e8
plt.figure(figsize=(11,8))
plt.scatter(np.linspace(-17.5,17.5,201),Ecc(6e4*(116-114.1),10**4,9e7,np.linspace(-17.5,17.5,201),10**10,0.6e9,10**4)[1],color='r',label='expand')
plt.ylim(-0.5e14,0.25e14),plt.xlim(-5,5)
plt.xlabel('c',fontsize=18)
plt.ylabel('E',rotation='horizontal',labelpad=25,fontsize=18);
def quadterm(alpha,Omega,Eth,E,a):
return [np.sqrt((Eth-E)/np.sqrt(4*Omega*a)),-np.sqrt((Eth-E)/np.sqrt(4*Omega*a))]
# np.linspace(0,60e6,1000)
plt.plot(np.linspace(0,60e6,1000),quadterm(0.6e9,10**10,23663663.66366366,np.linspace(0,60e6,1000),6e4*(116-114.1))[0],'b')
plt.plot(np.linspace(0,60e6,1000),quadterm(0.6e9,10**10,23663663.66366366,np.linspace(0,60e6,1000),6e4*(116-114.1))[1],'r');
plt.figure(figsize=(11,8))
plt.scatter(Ecc(6e4*(116-114.1),10**4,9e7,np.linspace(7.5,17.5),10**10,0.6e9,10**4)[0],np.linspace(7.5,17.5),color='r',label='no expand')
plt.ylim(0,35)
plt.xlabel('E',fontsize=18)
plt.ylabel('c',rotation='horizontal',labelpad=25,fontsize=18);
Explanation: Define above equation for plotting
End of explanation
(fc+fp+fcp).diff(p)
Explanation: $\dfrac{\partial f}{\partial p} = 0$
End of explanation
solve((fc+fp+fcp).diff(p),c)[1]
Explanation: Solve for $c$
End of explanation
expand(simplify(pc.subs(c,solve((fc+fp+fcp).diff(p),c)[1])),p)
Explanation: Sub $c(p)$ into $p(c)$
End of explanation
simplify(solve(expand(simplify(pc.subs(c,solve((fc+fp+fcp).diff(p),c)[1])),p)-p,E)[1])
Explanation: Solve for $E(p)$
End of explanation
def Epp(a,b,q,alpha,beta,Omega,p):
return (2*p*(Omega*b+3*alpha*q+6*beta*q*p**2)+2*Omega*p*np.sqrt(3*Omega*q*p**2-3*a*q+b**2))/(3*q)
plt.figure(figsize=(11,8))
plt.scatter(Epp(6e4*(116-114.1),10**4,9e7,0.6e9,10**4,10**11,np.linspace(0,0.005))/10**7,np.linspace(0,0.005),color='r',label='T = 116')
plt.scatter(Epp(6e4*(110-114.1),10**4,9e7,0.6e9,10**4,10**11,np.linspace(0,0.005))/10**7,np.linspace(0,0.005),color='g',label='T = 110')
plt.scatter(Epp(6e4*(105-114.1),10**4,9e7,0.6e9,10**4,10**11,np.linspace(0,0.005))/10**7,np.linspace(0,0.005),color='b',label='T = 105')
plt.ylim(0,0.006),plt.xlim(0,20)
plt.xlabel('E',fontsize=18)
plt.ylabel('p',rotation='horizontal',labelpad=25,fontsize=18)
plt.legend(loc='upper right',fontsize=16);
def ppp(a,b,q,Omega,c):
return np.sqrt((a+2*b*c**2+3*q*c**4)/Omega)
plt.figure(figsize=(11,8))
plt.scatter(np.linspace(0,35),ppp(6e4*(116-114.1),10**4,9e7,10**10,np.linspace(0,35)))
plt.ylim(-2,220),plt.xlim(-1,36)
plt.xlabel('c',fontsize=18)
plt.ylabel('p',rotation='horizontal',labelpad=25,fontsize=18);
simplify((solve((fp+fcp).diff(p),E)[0]).subs(p,solve((fc+fp+fcp).diff(c),p)[1]))
def Ec(om, a, b, c, q, alp, beta):
return (2/om)*np.sqrt((a+2*b*c**2+3*q*c**4)/om)*(alp*om-(c*om)**2+2*beta*a+4*b*beta*c**2+6*beta*q*c**4)
expand((alpha-E/(2*p)+2*beta*p**2)**2)
plt.scatter(Ec(10**11,6e4,10**4,np.linspace(0,15,200),9e7,0.6e9,10**4),np.linspace(0,15,200),marker='s');
def Eth(T):
E = []
for i in T:
if i > 114.1:
E.append(0.64e6*(i-100)*np.sqrt(i-114.1))
else:
E.append(0)
return E
plt.plot(np.linspace(113,119,10),Eth(np.linspace(113,119,10)))
plt.xlim(113,122),plt.ylim(0,3e7);
Explanation: Define above equation for plotting
End of explanation |
13,923 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
LQ Approximation with QuantEcon.py
Step2: We consider a dynamic maximization problem with
reward function $f(s, x)$,
state transition function $g(s, x)$, and
discount rate $\delta$,
where $s$ and $x$ are the state and the control variables, respectively
(we follow Miranda-Fackler in notation).
Let $(s^*, x^*)$ denote the steady state state-control pair,
and write
$f^* = f(s^*, x^*)$, $f_i^* = f_i(s^*, x^*)$, $f_{ij}^* = f_{ij}(s^*, x^*)$,
$g^* = g(s^*, x^*)$, and $g_i^* = g_i(s^*, x^*)$ for $i, j = s, x$.
First-order expansion of $g$ around $(s^*, x^*)$
Step3: Optimal Economic Growth
We consider the following optimal growth model from Miranda and Fackler, Section 9.7.1
Step4: Function definitions
Step5: Steady state
Step6: (s_star, x_star) satisfies the Euler equations
Step7: Construct $f^$, $\nabla f^$, $D^2 f^$, $g^$, and $\nabla g^*$
Step8: LQ Approximation
Generate an LQ instance that approximates our dynamic optimization problem
Step9: Solution by LQ.stationary_values
Solve the LQ problem
Step10: The optimal value function (of the LQ minimization problem)
Step11: The value at $s^*$
Step12: The optimal policy function
Step13: The optimal choice at $s^*$
Step14: Renewable Resource Management
Consider the renewable resource management model from Miranda and Fackler, Section 9.7.2 | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import quantecon as qe
# matplotlib settings
plt.rcParams['axes.xmargin'] = 0
plt.rcParams['axes.ymargin'] = 0
Explanation: LQ Approximation with QuantEcon.py
End of explanation
def approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount):
    """Return an approximating LQ instance.
    Gradient of f: Df_star = np.array([f_s, f_x])
    Hessian of f: DDf_star = np.array([[f_ss, f_sx], [f_sx, f_xx]])
    Gradient of g: Dg_star = np.array([g_s, g_x])
    """
n = 2
k = 1
sx_star = np.array([s_star, x_star])
# (1, s)' R (1, s) + 2 x N (1, s) + x Q x
Q = np.empty((k, k))
R = np.empty((n, n))
N = np.empty((k, n))
R[0, 0] = -(f_star - Df_star @ sx_star + (sx_star @ DDf_star @ sx_star) / 2)
R[1, 1], N[0, 1], N[0, 1], Q[0, 0] = -DDf_star.ravel() / 2
R[1, 0], N[0, 0] = -(Df_star - DDf_star @ sx_star).ravel() / 2
R[0, 1] = R[1, 0]
# A (1, s) + B x + C w
A = np.empty((n, n))
B = np.empty((n, k))
C = np.zeros((n, 1))
A[0, 0], A[0, 1], B[0, 0] = 1, 0, 0
A[1, 0] = g_star - Dg_star @ sx_star
A[1, 1], B[1, 0] = Dg_star.ravel()
lq = qe.LQ(Q, R, A, B, C, N, beta=discount)
return lq
Explanation: We consider a dynamic maximization problem with
reward function $f(s, x)$,
state transition function $g(s, x)$, and
discount rate $\delta$,
where $s$ and $x$ are the state and the control variables, respectively
(we follow Miranda-Fackler in notation).
Let $(s^*, x^*)$ denote the steady state state-control pair,
and write
$f^* = f(s^*, x^*)$, $f_i^* = f_i(s^*, x^*)$, $f_{ij}^* = f_{ij}(s^*, x^*)$,
$g^* = g(s^*, x^*)$, and $g_i^* = g_i(s^*, x^*)$ for $i, j = s, x$.
First-order expansion of $g$ around $(s^*, x^*)$:
$$
\begin{align*}
g(s, x)
&\approx g^* + g_s^* (s - s^*) + g_x^* (x - x^*) \\
&= A \begin{pmatrix}1 \\ s\end{pmatrix} + B x,
\end{align*}
$$
where
$A =
\begin{pmatrix}
1 & 0 \\
g^* - \nabla g^{*\mathrm{T}} z^* & g_s^*
\end{pmatrix}$,
$B =
\begin{pmatrix}
0 \\ g_x^*
\end{pmatrix}$
with $z^* = (s^*, x^*)^{\mathrm{T}}$ and $\nabla g^* = (g_s^*, g_x^*)^{\mathrm{T}}$.
Second-order expansion of $f$ around $(s^*, x^*)$:
$$
\begin{align*}
f(s, x)
&\approx f^* + f_s^* (s - s^*) + f_x^* (x - x^*) +
\frac{1}{2} f_{ss}^* (s - s^*)^2 + f_{sx}^* (s - s^*) (x - x^*) +
\frac{1}{2} f_{xx}^* (x - x^*)^2 \\
&= \begin{pmatrix}
1 & s & x
\end{pmatrix}
\begin{pmatrix}
f^* - \nabla f^{*\mathrm{T}} z^* + \frac{1}{2} z^{*\mathrm{T}} D^2 f^* z^* &
\frac{1}{2} (\nabla f^* - D^2 f^* z^*)^{\mathrm{T}} \\
\frac{1}{2} (\nabla f^* - D^2 f^* z^*) & \frac{1}{2} D^2 f^*
\end{pmatrix}
\begin{pmatrix}
1 \\ s \\ x
\end{pmatrix},
\end{align*}
$$
where
$\nabla f^* = (f_s^*, f_x^*)^{\mathrm{T}}$ and
$$
D^2 f^* =
\begin{pmatrix}
f_{ss}^* & f_{sx}^* \\
f_{sx}^* & f_{xx}^*
\end{pmatrix}.
$$
Let
$$
\begin{align*}
r(s, x)
&= -
\begin{pmatrix}
1 & s & x
\end{pmatrix}
\begin{pmatrix}
f^* - \nabla f^{*\mathrm{T}} z^* + \frac{1}{2} z^{*\mathrm{T}} D^2 f^* z^* &
\frac{1}{2} (\nabla f^* - D^2 f^* z^*)^{\mathrm{T}} \\
\frac{1}{2} (\nabla f^* - D^2 f^* z^*) & \frac{1}{2} D^2 f^*
\end{pmatrix}
\begin{pmatrix}
1 \\ s \\ x
\end{pmatrix} \\
&= \begin{pmatrix}
1 & s
\end{pmatrix}
R
\begin{pmatrix}
1 \\ s
\end{pmatrix} +
2 x N
\begin{pmatrix}
1 \\ s
\end{pmatrix} +
Q x,
\end{align*}
$$
where
$R = -
\begin{pmatrix}
f^* - \nabla f^{*\mathrm{T}} z^* + \frac{1}{2} z^{*\mathrm{T}} D^2 f^* z^* &
\frac{1}{2} [f_s^* - (f_{ss}^* s^* + f_{sx}^* x^*)] \\
\frac{1}{2} [f_s^* - (f_{ss}^* s^* + f_{sx}^* x^*)] & \frac{1}{2} f_{ss}^*
\end{pmatrix}$,
$N = -
\begin{pmatrix}
\frac{1}{2} [f_x^* - (f_{sx}^* s^* + f_{xx}^* x^*)] & \frac{1}{2} f_{sx}^*
\end{pmatrix}$.
$Q = -\frac{1}{2} f_{xx}^*$.
Remarks:
We are going to minimize the objective function.
End of explanation
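# Optional sanity check of the construction above, with purely illustrative numbers
# (not the growth model below): at the steady state, the quadratic form built from R, N, Q
# should reproduce -f*.  Assumes the quantecon LQ instance exposes R, N, Q as attributes.
_s, _x, _f, _g = 1.0, 0.5, 2.0, 1.0
_Df = np.array([0.3, -0.3])
_DDf = np.array([[-0.2, 0.2], [0.2, -0.2]])
_Dg = np.array([0.0, 0.7])
_lq_check = approx_lq(_s, _x, _f, _Df, _DDf, _g, _Dg, discount=0.9)
_R, _N, _Q = (np.asarray(M) for M in (_lq_check.R, _lq_check.N, _lq_check.Q))
_z = np.array([1.0, _s])
print(_z @ _R @ _z + 2 * _x * (_N @ _z)[0] + _Q[0, 0] * _x**2, -_f)  # the two numbers should agree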
alpha = 0.2
beta = 0.5
gamma = 0.9
discount = 0.9
Explanation: Optimal Economic Growth
We consider the following optimal growth model from Miranda and Fackler, Section 9.7.1:
$f(s, x) = \dfrac{(s - x)^{1-\alpha}}{1-\alpha}$,
$g(s, x) = \gamma x + x^{\beta}$.
End of explanation
f = lambda s, x: (s - x)**(1 - alpha) / (1 - alpha)
f_s = lambda s, x: (s - x)**(-alpha)
f_x = lambda s, x: -f_s(s, x)
f_ss = lambda s, x: -alpha * (s - x)**(-alpha - 1)
f_sx = lambda s, x: -f_ss(s, x)
f_xx = lambda s, x: f_ss(s, x)
g = lambda s, x: gamma * x + x**beta
g_s = lambda s, x: 0
g_x = lambda s, x: gamma + beta * x**(beta - 1)
Explanation: Function definitions:
End of explanation
x_star = ((discount * beta) / (1 - discount * gamma))**(1 / (1 - beta))
s_star = gamma * x_star + x_star**beta
s_star, x_star
Explanation: Steady state:
End of explanation
f_x(s_star, x_star) + discount * f_s(g(s_star, x_star), x_star) * g_x(s_star, x_star)
Explanation: (s_star, x_star) satisfies the Euler equations:
End of explanation
f_star = f(s_star, x_star)
Df_star = np.array([f_s(s_star, x_star), f_x(s_star, x_star)])
DDf_star = np.array([[f_ss(s_star, x_star), f_sx(s_star, x_star)],
[f_sx(s_star, x_star), f_xx(s_star, x_star)]])
g_star = g(s_star, x_star)
Dg_star = np.array([g_s(s_star, x_star), g_x(s_star, x_star)])
Explanation: Construct $f^$, $\nabla f^$, $D^2 f^$, $g^$, and $\nabla g^*$:
End of explanation
lq = approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount)
Explanation: LQ Approximation
Generate an LQ instance that approximates our dynamic optimization problem:
End of explanation
P, F, d = lq.stationary_values()
P, F, d
Explanation: Solution by LQ.stationary_values
Solve the LQ problem:
End of explanation
V = lambda s: np.array([1, s]) @ P @ np.array([1, s]) + d
Explanation: The optimal value function (of the LQ minimization problem):
End of explanation
V(s_star)
-f_star / (1 - lq.beta)
Explanation: The value at $s^*$:
End of explanation
X = lambda s: -(F @ np.array([1, s]))[0]
Explanation: The optimal policy function:
End of explanation
X(s_star)
x_star
X = np.vectorize(X)
s_min, s_max = 5, 10
ss = np.linspace(s_min, s_max, 50)
title = "Optimal Investment Policy"
xlabel = "Wealth"
ylabel = "Investment (% of Wealth)"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, X(ss)/ss, label='L-Q')
ax.plot(s_star, x_star/s_star, '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.65, 0.9)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
Explanation: The optimal choice at $s^*$:
End of explanation
alpha = 4.0
beta = 1.0
gamma = 0.5
kappa = 0.2
discount = 0.9
f = lambda s, x: (s - x)**(1 - gamma) / (1 - gamma) - kappa * (s - x)
f_s = lambda s, x: (s - x)**(-gamma) - kappa
f_x = lambda s, x: -f_s(s, x)
f_ss = lambda s, x: -gamma * (s - x)**(-gamma - 1)
f_sx = lambda s, x: -f_ss(s, x)
f_xx = lambda s, x: f_ss(s, x)
g = lambda s, x: alpha * x - 0.5 * beta * x**2
g_s = lambda s, x: 0
g_x = lambda s, x: alpha - beta * x
x_star = (discount * alpha - 1) / (discount * beta)
s_star = (alpha**2 - 1/discount**2) / (2 * beta)
s_star, x_star
f_x(s_star, x_star) + discount * f_s(g(s_star, x_star), x_star) * g_x(s_star, x_star)
f_star = f(s_star, x_star)
Df_star = np.array([f_s(s_star, x_star), f_x(s_star, x_star)])
DDf_star = np.array([[f_ss(s_star, x_star), f_sx(s_star, x_star)],
[f_sx(s_star, x_star), f_xx(s_star, x_star)]])
g_star = g(s_star, x_star)
Dg_star = np.array([g_s(s_star, x_star), g_x(s_star, x_star)])
lq = approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount)
P, F, d = lq.stationary_values()
P, F, d
V = lambda s: np.array([1, s]) @ P @ np.array([1, s]) + d
V(s_star)
-f_star / (1 - lq.beta)
X = lambda s: -(F @ np.array([1, s]))[0]
X(s_star)
x_star
X = np.vectorize(X)
s_min, s_max = 6, 9
ss = np.linspace(s_min, s_max, 50)
harvest = ss - X(ss)
h_star = s_star - x_star
title = "Optimal Harvest Policy"
xlabel = "Available Stock"
ylabel = "Harvest (% of Stock)"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, harvest/ss, label='L-Q')
ax.plot(s_star, h_star/s_star, '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.5, 0.75)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
shadow_price = lambda s: -2 * (P @ [1, s])[1]
shadow_price = np.vectorize(shadow_price)
title = "Shadow Price Function"
ylabel = "Price"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, shadow_price(ss), label='L-Q')
ax.plot(s_star, shadow_price(s_star), '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.2, 0.4)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
Explanation: Renewable Resource Management
Consider the renewable resource management model from Miranda and Fackler, Section 9.7.2:
$f(s, x) = \dfrac{(s - x)^{1-\gamma}}{1-\gamma} - \kappa (s - x)$,
$g(s, x) = \alpha x - 0.5 \beta x^2$.
End of explanation |
13,924 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data info
Data notes
Wave I, the main survey, was fielded between February 21 and April 2, 2009. Wave 2 was fielded March 12, 2010 to June 8, 2010. Wave 3 was fielded March 22, 2011 to August 29, 2011. Wave 4 was fielded between March and November of 2013. Wave 5 was fielded between November, 2014 and March, 2015.
Step1: Load raw data
Step2: Select and rename columns
Step3: Distributions | Python Code:
import numpy as np
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
pd.options.display.max_columns=1000
Explanation: Data info
Data notes
Wave I, the main survey, was fielded between February 21 and April 2, 2009. Wave 2 was fielded March 12, 2010 to June 8, 2010. Wave 3 was fielded March 22, 2011 to August 29, 2011. Wave 4 was fielded between March and November of 2013. Wave 5 was fielded between November, 2014 and March, 2015.
End of explanation
df = pd.read_stata('/gh/data/hcmst/1.dta')
# df2 = pd.read_stata('/gh/data/hcmst/2.dta')
# df3 = pd.read_stata('/gh/data/hcmst/3.dta')
# df = df1.merge(df2, on='caseid_new')
# df = df.merge(df3, on='caseid_new')
df.head(2)
Explanation: Load raw data
End of explanation
rename_cols_dict = {'ppage': 'age', 'ppeducat': 'education',
'ppethm': 'race', 'ppgender': 'sex',
'pphouseholdsize': 'household_size', 'pphouse': 'house_type',
'hhinc': 'income', 'ppmarit': 'marital_status',
'ppmsacat': 'in_metro', 'ppreg4': 'usa_region',
'pprent': 'house_payment', 'children_in_hh': 'N_child',
'ppwork': 'work', 'ppnet': 'has_internet',
'papglb_friend': 'has_gay_friendsfam', 'pppartyid3': 'politics',
'papreligion': 'religion', 'qflag': 'in_relationship',
'q9': 'partner_age', 'duration': 'N_minutes_survey',
'glbstatus': 'is_lgb', 's1': 'is_married',
'partner_race': 'partner_race', 'q7b': 'partner_religion',
'q10': 'partner_education', 'US_raised': 'USA_raised',
'q17a': 'N_marriages', 'q17b': 'N_marriages2', 'coresident': 'cohabit',
'q21a': 'age_first_met', 'q21b': 'age_relationship_begin',
'q21d': 'age_married', 'q23': 'relative_income',
'q25': 'same_high_school', 'q26': 'same_college',
'q27': 'same_hometown', 'age_difference': 'age_difference',
'q34':'relationship_quality',
'q24_met_online': 'met_online', 'met_through_friends': 'met_friends',
'met_through_family': 'met_family', 'met_through_as_coworkers': 'met_work'}
df = df[list(rename_cols_dict.keys())]
df.rename(columns=rename_cols_dict, inplace=True)
# Process number of marriages
df['N_marriages'] = df['N_marriages'].astype(str).replace({'nan':''}) + df['N_marriages2'].astype(str).replace({'nan':''})
df.drop('N_marriages2', axis=1, inplace=True)
df['N_marriages'] = df['N_marriages'].replace({'':np.nan, 'once (this is my first marriage)': 'once', 'refused':np.nan})
df['N_marriages'] = df['N_marriages'].astype('category')
# Clean entries to make simpler
df['in_metro'] = df['in_metro']=='metro'
df['relationship_excellent'] = df['relationship_quality'] == 'excellent'
df['house_payment'].replace({'owned or being bought by you or someone in your household': 'owned',
'rented for cash': 'rent',
'occupied without payment of cash rent': 'free'}, inplace=True)
df['race'].replace({'white, non-hispanic': 'white',
'2+ races, non-hispanic': 'other, non-hispanic',
'black, non-hispanic': 'black'}, inplace=True)
df['house_type'].replace({'a one-family house detached from any other house': 'house',
'a building with 2 or more apartments': 'apartment',
'a one-family house attached to one or more houses': 'house',
'a mobile home': 'mobile',
'boat, rv, van, etc.': 'mobile'}, inplace=True)
df['is_not_working'] = df['work'].str.contains('not working')
df['has_internet'] = df['has_internet'] == 'yes'
df['has_gay_friends'] = np.logical_or(df['has_gay_friendsfam']=='yes, friends', df['has_gay_friendsfam']=='yes, both')
df['has_gay_family'] = np.logical_or(df['has_gay_friendsfam']=='yes, relatives', df['has_gay_friendsfam']=='yes, both')
df['religion_is_christian'] = df['religion'].isin(['protestant (e.g., methodist, lutheran, presbyterian, episcopal)',
'catholic', 'baptist-any denomination', 'other christian', 'pentecostal', 'mormon', 'eastern orthodox'])
df['religion_is_none'] = df['religion'].isin(['none'])
df['in_relationship'] = df['in_relationship']=='partnered'
df['is_lgb'] = df['is_lgb']=='glb'
df['is_married'] = df['is_married']=='yes, i am married'
df['partner_race'].replace({'NH white': 'white', ' NH black': 'black',
' NH Asian Pac Islander':'other', ' NH Other': 'other', ' NH Amer Indian': 'other'}, inplace=True)
df['partner_religion_is_christian'] = df['partner_religion'].isin(['protestant (e.g., methodist, lutheran, presbyterian, episcopal)',
'catholic', 'baptist-any denomination', 'other christian', 'pentecostal', 'mormon', 'eastern orthodox'])
df['partner_religion_is_none'] = df['partner_religion'].isin(['none'])
df['partner_education'] = df['partner_education'].map({'hs graduate or ged': 'high school',
'some college, no degree': 'some college',
"associate degree": "some college",
"bachelor's degree": "bachelor's degree or higher",
"master's degree": "bachelor's degree or higher",
"professional or doctorate degree": "bachelor's degree or higher"})
df['partner_education'].fillna('less than high school', inplace=True)
df['USA_raised'] = df['USA_raised']=='raised in US'
df['N_marriages'] = df['N_marriages'].map({'never married': '0', 'once': '1', 'twice': '2', 'three times': '3+', 'four or more times':'3+'})
df['relative_income'].replace({'i earned more': 'more', 'partner earned more': 'less',
'we earned about the same amount': 'same', 'refused': np.nan}, inplace=True)
df['same_high_school'] = df['same_high_school']=='same high school'
df['same_college'] = df['same_college']=='attended same college or university'
df['same_hometown'] = df['same_hometown']=='yes'
df['cohabit'] = df['cohabit']=='yes'
df['met_online'] = df['met_online']=='met online'
df['met_friends'] = df['met_friends']=='meet through friends'
df['met_family'] = df['met_family']=='met through family'
df['met_work'] = df['met_work']==1
df['age'] = df['age'].astype(int)
for c in df.columns:
    if str(df[c].dtype) == 'object':
df[c] = df[c].astype('category')
df.head()
df.to_csv('/gh/data/hcmst/1_cleaned.csv')
Explanation: Select and rename columns
End of explanation
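# Quick diagnostic on the cleaned frame: column dtypes and the share of couples flagged
# by each boolean "met_*" indicator (just a sanity check, not part of the original pipeline).
print(df.dtypes.value_counts())
print(df[['met_online', 'met_friends', 'met_family', 'met_work']].mean())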
for c in df.columns:
print(df[c].value_counts())
# Countplot if categorical; distplot if numeric
from pandas.api.types import is_numeric_dtype
plt.figure(figsize=(40,40))
for i, c in enumerate(df.columns):
plt.subplot(7,7,i+1)
if is_numeric_dtype(df[c]):
sns.distplot(df[c].dropna(), kde=False)
else:
sns.countplot(y=c, data=df)
plt.savefig('temp.png')
sns.barplot(x='income', y='race', data=df)
Explanation: Distributions
End of explanation |
13,925 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
WNx - 06 June 2017
Practical Deep Learning I Lesson 4 CodeAlong
Lesson4 JNB
Step1: Set up Data
We're working with the movielens data, which contains one rating per row, like this
Step2: Just for display purposes, let's read in the movie names too
Step3: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
Step4: This is the number of latent factors in each embedding.
Step5: Randomly split into training and validation.
Step6: Create subset for Excel
We create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. This isn't necessary for any of the modeling below however.
Step7: Dot Product
The most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works
Step8: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...
Bias
The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.
Step9: This result is quite a bit better than the best benchmarks we could find with a quick Google search (at least the training loss is < 0.89) - so looks like a great approach!
Step10: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
Step11: Analyze Results
To make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies.
Step12: First, we'll look at the movie bias term. We create a 'model' - which in Keras is simply a way of associating one or more inputs with one or more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
Step13: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
Step14: Hey! I liked Avengers, lol
We can now do the same thing for the embeddings.
Step15: Because it's hard to interpret 50 embeddings, we use PCA (Principal Component Analysis) to simplify them down to just 3 vectors.
Step16: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'.
Step17: The 2nd is 'hollywood blockbuster'.
Step18: The 3rd is 'violent vs happy'.
Step19: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
Step20: Neural Net
Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net. | Python Code:
import theano
import sys, os
sys.path.insert(1, os.path.join('utils'))
%matplotlib inline
import utils; reload(utils)
from utils import *
from __future__ import print_function, division
path = "data/ml-latest-small/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
batch_size = 64
Explanation: WNx - 06 June 2017
Practical Deep Learning I Lesson 4 CodeAlong
Lesson4 JNB
End of explanation
ratings = pd.read_csv(path+'ratings.csv')
ratings.head()
len(ratings)
Explanation: Set up Data
We're working with the movielens data, which contains one rating per row, like this:
End of explanation
movie_names = pd.read_csv(path+'movies.csv').set_index('movieId')['title'].to_dict()
users = ratings.userId.unique()
movies = ratings.movieId.unique()
userid2idx = {o:i for i,o in enumerate(users)}
movieid2idx = {o:i for i,o in enumerate(movies)}
Explanation: Just for display purposes, let's read in the movie names too:
End of explanation
ratings.movieId = ratings.movieId.apply(lambda x: movieid2idx[x])
ratings.userId = ratings.userId.apply(lambda x: userid2idx[x])
user_min, user_max, movie_min, movie_max = (ratings.userId.min(),
ratings.userId.max(), ratings.movieId.min(), ratings.movieId.max())
user_min, user_max, movie_min, movie_max
n_users = ratings.userId.nunique()
n_movies = ratings.movieId.nunique()
n_users, n_movies
Explanation: We update the movie and user ids so that they are contiguous integers, which we want when using embeddings.
End of explanation
n_factors = 50
np.random.seed(42)
Explanation: This is the number of latent factors in each embedding.
End of explanation
msk = np.random.rand(len(ratings)) < 0.8
trn = ratings[msk]
val = ratings[~msk]
Explanation: Randomly split into training and validation.
End of explanation
g=ratings.groupby('userId')['rating'].count()
topUsers=g.sort_values(ascending=False)[:15]
g=ratings.groupby('movieId')['rating'].count()
topMovies=g.sort_values(ascending=False)[:15]
top_r = ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')
pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)
Explanation: Create subset for Excel
We create a crosstab of the most popular movies and most movie-addicted users which we'll copy into Excel for creating a simple example. This isn't necessary for any of the modeling below however.
End of explanation
user_in = Input(shape=(1,), dtype='int64', name='user_in')
u = Embedding(n_users, n_factors, input_length=1, W_regularizer=l2(1e-4))(user_in)
movie_in = Input(shape=(1,), dtype='int64', name='movie_in')
m = Embedding(n_movies, n_factors, input_length=1, W_regularizer=l2(1e-4))(movie_in)
x = merge([u, m], mode='dot')
x = Flatten()(x)
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=3,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr=0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6,
validation_data=([val.userId, val.movieId], val.rating))
Explanation: Dot Product
The most basic model is a dot product of a movie embedding and a user embedding. Let's see how well that works:
End of explanation
def embedding_input(name, n_in, n_out, reg):
inp = Input(shape=(1,), dtype='int64', name=name)
return inp, Embedding(n_in, n_out, input_length=1, W_regularizer=l2(reg))(inp)
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
def create_bias(inp, n_in):
x = Embedding(n_in, 1, input_length=1)(inp)
return Flatten()(x)
ub = create_bias(user_in, n_users)
mb = create_bias(movie_in, n_movies)
x = merge([u, m], mode='dot')
x = Flatten()(x)
x = merge([x, ub], mode='sum')
x = merge([x, mb], mode='sum')
model = Model([user_in, movie_in], x)
model.compile(Adam(0.001), loss='mse')
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=1,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr = 0.01
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=6, verbose=0,
validation_data=([val.userId, val.movieId], val.rating))
model.optimizer.lr = 0.001
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=10, verbose=0,
validation_data=([val.userId, val.movieId], val.rating))
model.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=5,
validation_data=([val.userId, val.movieId], val.rating))
Explanation: The best benchmarks are a bit over 0.9, so this model doesn't seem to be working that well...
Bias
The problem is likely to be that we don't have bias terms - that is, a single bias for each user and each movie representing how positive or negative each user is, and how good each movie is. We can add that easily by simply creating an embedding with one output for each movie and each user, and adding it to our output.
End of explanation
model.save_weights(model_path + 'bias.h5')
model.load_weights(model_path + 'bias.h5')
Explanation: This result is quite a bit better than the best benchmarks we could find with a quick Google search (at least the training loss is < 0.89) - so looks like a great approach!
End of explanation
model.predict([np.array([3]), np.array([6])])
Explanation: We can use the model to generate predictions by passing a pair of ints - a user id and a movie id. For instance, this predicts that user #3 would really enjoy movie #6.
End of explanation
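# Note: the model was trained on the re-indexed contiguous ids, so to score a raw movielens
# pair you would first map it through the lookup dicts built earlier, e.g. (illustrative,
# raw_user_id / raw_movie_id are placeholders):
# model.predict([np.array([userid2idx[raw_user_id]]), np.array([movieid2idx[raw_movie_id]])])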
g = ratings.groupby('movieId')['rating'].count()
topMovies = g.sort_values(ascending=False)[:2000]
topMovies = np.array(topMovies.index)
Explanation: Analyze Results
To make the analysis of the factors more interesting, we'll restrict it to the top 2000 most popular movies.
End of explanation
get_movie_bias = Model(movie_in, mb)
movie_bias = get_movie_bias.predict(topMovies)
movie_ratings = [(b[0], movie_names[movies[i]]) for i, b in zip(topMovies, movie_bias)]
Explanation: First, we'll look at the movie bias term. We create a 'model' - which in Keras is simply a way of associating one or more inputs with one or more outputs, using the functional API. Here, our input is the movie id (a single id), and the output is the movie bias (a single float).
End of explanation
sorted(movie_ratings, key=itemgetter(0))[:15]
sorted(movie_ratings, key=itemgetter(0), reverse=True)[:15]
Explanation: Now we can look at the top and bottom rated movies. These ratings are corrected for different levels of reviewer sentiment, as well as different types of movies that different reviewers watch.
End of explanation
get_movie_emb = Model(movie_in, m)
movie_emb = np.squeeze(get_movie_emb.predict([topMovies]))
movie_emb.shape
Explanation: Hey! I liked Avengers, lol
We can now do the same thing for the embeddings.
End of explanation
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
movie_pca = pca.fit(movie_emb.T).components_
fac0 = movie_pca[0]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac0, topMovies)]
Explanation: Because it's hard to interpret 50 embeddings, we use PCA (Principal Component Analysis) to simplify them down to just 3 vectors.
End of explanation
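# Diagnostic: how much of the embedding variance the 3 principal components capture.
print(pca.explained_variance_ratio_)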
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac1 = movie_pca[1]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac1, topMovies)]
Explanation: Here's the 1st component. It seems to be 'critically acclaimed' or 'classic'.
End of explanation
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
fac2 = movie_pca[2]
movie_comp = [(f, movie_names[movies[i]]) for f,i in zip(fac2, topMovies)]
Explanation: The 2nd is 'hollywood blockbuster'.
End of explanation
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
Explanation: The 3rd is 'violent vs happy'.
End of explanation
import sys
stdout, stderr = sys.stdout, sys.stderr # save notebook stdout and stderr
reload(sys)
sys.setdefaultencoding('utf-8')
sys.stdout, sys.stderr = stdout, stderr # restore notebook stdout and stderr
start=50; end=100
X = fac0[start:end]
Y = fac2[start:end]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(topMovies[start:end], X, Y):
plt.text(x,y,movie_names[movies[i]], color=np.random.rand(3)*0.7, fontsize=14)
plt.show()
Explanation: We can draw a picture to see how various movies appear on the map of these components. This picture shows the 1st and 3rd components.
End of explanation
user_in, u = embedding_input('user_in', n_users, n_factors, 1e-4)
movie_in, m = embedding_input('movie_in', n_movies, n_factors, 1e-4)
x = merge([u,m], mode='concat')
x = Flatten()(x)
x = Dropout(0.3)(x)
x = Dense(70, activation='relu')(x)
x = Dropout(0.75)(x)
x = Dense(1)(x)
NN = Model([user_in, movie_in], x)
NN.compile(Adam(0.001), loss='mse')
NN.fit([trn.userId, trn.movieId], trn.rating, batch_size=64, nb_epoch=8,
validation_data=([val.userId, val.movieId], val.rating))
Explanation: Neural Net
Rather than creating a special purpose architecture (like our dot-product with bias earlier), it's often both easier and more accurate to use a standard neural network. Let's try it! Here, we simply concatenate the user and movie embeddings into a single vector, which we feed into the neural net.
End of explanation |
13,926 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deploying NVIDIA Triton Inference Server in AI Platform Prediction Custom Container (Google Cloud SDK)
In this notebook, we will walk through the process of deploying NVIDIA's Triton Inference Server into AI Platform Prediction Custom Container service in the Direct Model Server mode
Step1: Create the Artifact Registry
This will be used to store the container image for the model server Triton.
Step2: Prepare the container
We will make a copy of the Triton container image into the Artifact Registry, where AI Platform Custom Container Prediction will only pull from during Model Version setup. The following steps will download the NVIDIA Triton Inference Server container to your VM, then upload it to your repo.
Step3: Prepare model Artifacts
Clone the NVIDIA Triton Inference Server repo.
Step4: Create the GCS bucket where the model artifacts will be copied to.
Step5: Stage model artifacts and copy to bucket.
Step6: Prepare request payload
To prepare the payload format, we have included a utility get_request_body_simple.py. To use this utility, install the following library
Step7: Prepare non-binary request payload
The first model will illustrate a non-binary payload. The following command will create a KF Serving v2 format non-binary payload to be used with the "simple" model
Step8: Prepare binary request payload
Triton's implementation of KF Serving v2 protocol for binary data appends the binary data after the json body. Triton requires an additional header for offset
Step9: Create and deploy Model and Model Version
In this section, we will deploy two models
Step10: Create Model Version
After the Model is created, we can now create a Model Version under this Model. Each Model Version will need a name that is unique within the Model. In AI Platform Prediction Custom Container, a {Project}/{Model}/{ModelVersion} uniquely identifies the specific container and model artifact used for inference.
Step11: The following config file will be used in the Model Version creation command.
Command with YAML config file
Step12: To see details of the Model Version just created
Step13: To list all Model Versions and their states in this Model
Step14: Run prediction using curl
The "simple" model takes two tensors with shape [1,16] and does a couple of basic arithmetic operations.
Step15: Run prediction using the requests library
Step16: ResNet-50 model (binary data)
Create Model
Step17: Create Model Version
Step18: Command with YAML config file
Step19: To see details of the Model Version just created
Step20: To list all Model Versions and their states in this Model
Step21: Run prediction using curl
Recall the offset value calculated above. The binary case has an additional header
Step22: Run prediction using the requests library
Step23: Clean up | Python Code:
PROJECT_ID='[Enter project name - REQUIRED]'
REPOSITORY='caipcustom'
REGION='us-central1'
TRITON_VERSION='20.06'
import os
import random
import requests
import json
MODEL_BUCKET='gs://{}-{}'.format(PROJECT_ID,random.randint(10000,99999))
ENDPOINT='https://{}-ml.googleapis.com/v1'.format(REGION)
TRITON_IMAGE='tritonserver:{}-py3'.format(TRITON_VERSION)
CAIP_IMAGE='{}-docker.pkg.dev/{}/{}/{}'.format(REGION,PROJECT_ID,REPOSITORY,TRITON_IMAGE)
'''
# Test values
PROJECT_ID='tsaikevin-1236'
REPOSITORY='caipcustom'
REGION='us-central1'
TRITON_VERSION='20.06'
import os
import random
import requests
import json
MODEL_BUCKET='gs://{}-{}'.format(PROJECT_ID,random.randint(10000,99999))
ENDPOINT='https://{}-ml.googleapis.com/v1'.format(REGION)
TRITON_IMAGE='tritonserver:{}-py3'.format(TRITON_VERSION)
CAIP_IMAGE='{}-docker.pkg.dev/{}/{}/{}'.format(REGION,PROJECT_ID,REPOSITORY,TRITON_IMAGE)
'''
!gcloud config set project $PROJECT_ID
Explanation: Deploying NVIDIA Triton Inference Server in AI Platform Prediction Custom Container (Google Cloud SDK)
In this notebook, we will walk through the process of deploying NVIDIA's Triton Inference Server into AI Platform Prediction Custom Container service in the Direct Model Server mode:
End of explanation
!gcloud beta artifacts repositories create $REPOSITORY --repository-format=docker --location=$REGION
!gcloud beta auth configure-docker $REGION-docker.pkg.dev --quiet
Explanation: Create the Artifact Registry
This will be used to store the container image for the model server Triton.
End of explanation
!docker pull nvcr.io/nvidia/$TRITON_IMAGE && \
docker tag nvcr.io/nvidia/$TRITON_IMAGE $CAIP_IMAGE && \
docker push $CAIP_IMAGE
Explanation: Prepare the container
We will make a copy of the Triton container image into the Artifact Registry, where AI Platform Custom Container Prediction will only pull from during Model Version setup. The following steps will download the NVIDIA Triton Inference Server container to your VM, then upload it to your repo.
End of explanation
!git clone -b r$TRITON_VERSION https://github.com/triton-inference-server/server.git
Explanation: Prepare model Artifacts
Clone the NVIDIA Triton Inference Server repo.
End of explanation
!gsutil mb $MODEL_BUCKET
Explanation: Create the GCS bucket where the model artifacts will be copied to.
End of explanation
!mkdir model_repository
!cp -R server/docs/examples/model_repository/* model_repository/
!./server/docs/examples/fetch_models.sh
!gsutil -m cp -R model_repository/ $MODEL_BUCKET
!gsutil ls -RLl $MODEL_BUCKET/model_repository
Explanation: Stage model artifacts and copy to bucket.
End of explanation
!pip3 install geventhttpclient
Explanation: Prepare request payload
To prepare the payload format, we have included a utility get_request_body_simple.py. To use this utility, install the following library:
End of explanation
!python3 get_request_body_simple.py -m simple
Explanation: Prepare non-binary request payload
The first model will illustrate a non-binary payload. The following command will create a KF Serving v2 format non-binary payload to be used with the "simple" model:
End of explanation
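# Optional: peek at the start of the generated payload to see the KF Serving v2 request structure.
!head -c 400 simple.json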
!python3 get_request_body_simple.py -m image -f server/qa/images/mug.jpg
Explanation: Prepare binary request payload
Triton's implementation of KF Serving v2 protocol for binary data appends the binary data after the json body. Triton requires an additional header for offset:
Inference-Header-Content-Length: [offset]
We have provided a script that will automatically resize the image to the proper size for ResNet-50 [224, 224, 3] and calculate the proper offset. The following command takes an image file and outputs the necessary data structure to be used with the "resnet50_netdef" model. Please note down this offset as it will be used later.
End of explanation
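# Rough sketch of where that offset comes from (illustrative only -- get_request_body_simple.py
# already computes it): the offset is the byte length of the JSON inference header that precedes
# the appended raw tensor bytes.  Field names follow the KF Serving v2 binary-data extension;
# the input name/shape below are assumptions for the resnet50_netdef example, not verified values.
import json
_header = {"inputs": [{"name": "gpu_0/data", "shape": [3, 224, 224], "datatype": "FP32",
                       "parameters": {"binary_data_size": 3 * 224 * 224 * 4}}]}
print(len(json.dumps(_header).encode("utf-8")))  # this length is what goes in Inference-Header-Content-Length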
MODEL_NAME='simple'
!gcloud ai-platform models create $MODEL_NAME --region $REGION --enable-logging
!gcloud ai-platform models list --region $REGION
Explanation: Create and deploy Model and Model Version
In this section, we will deploy two models:
1. Simple model with non-binary data. KF Serving v2 protocol specifies a json format with non-binary data in the json body itself.
2. Binary data model with ResNet-50. Triton's implementation of binary data for the KF Serving v2 protocol.
Simple model (non-binary data)
Create Model
AI Platform Prediction uses a Model/Model Version Hierarchy, where the Model is a logical grouping of Model Versions. We will first create the Model.
Because the MODEL_NAME variable will be used later to specify the predict route, and Triton will use that route to run prediction on a specific model, we must set the value of this variable to a valid name of a model. For this section, we will use the "simple" model.
End of explanation
VERSION_NAME='v01'
Explanation: Create Model Version
After the Model is created, we can now create a Model Version under this Model. Each Model Version will need a name that is unique within the Model. In AI Platform Prediction Custom Container, a {Project}/{Model}/{ModelVersion} uniquely identifies the specific container and model artifact used for inference.
End of explanation
import yaml
config_simple={'deploymentUri': MODEL_BUCKET+'/model_repository', \
'container': {'image': CAIP_IMAGE, \
'args': ['tritonserver', '--model-repository=$(AIP_STORAGE_URI)'], \
'env': [], \
'ports': {'containerPort': 8000}}, \
'routes': {'predict': '/v2/models/'+MODEL_NAME+'/infer', \
'health': '/v2/models/'+MODEL_NAME}, \
'machineType': 'n1-standard-4', 'autoScaling': {'minNodes': 1}}
with open(r'config_simple.yaml', 'w') as file:
config = yaml.dump(config_simple, file)
!gcloud beta ai-platform versions create $VERSION_NAME \
--model $MODEL_NAME \
--accelerator count=1,type=nvidia-tesla-t4 \
--config config_simple.yaml \
--region=$REGION \
--async
Explanation: The following config file will be used in the Model Version creation command.
Command with YAML config file
End of explanation
!gcloud ai-platform versions describe $VERSION_NAME --model=$MODEL_NAME --region=$REGION
Explanation: To see details of the Model Version just created
End of explanation
!gcloud ai-platform versions list --model=$MODEL_NAME --region=$REGION
Explanation: To list all Model Versions and their states in this Model
End of explanation
!curl -X POST $ENDPOINT/projects/$PROJECT_ID/models/$MODEL_NAME/versions/$VERSION_NAME:predict \
-k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
-d @simple.json
# this doesn't work: gcloud auth application-default print-access-token
!curl -X POST $ENDPOINT/projects/$PROJECT_ID/models/$MODEL_NAME/versions/$VERSION_NAME:predict \
-k -H "Content-Type: application/json" \
-H "Authorization: Bearer `gcloud auth application-default print-access-token`" \
-d @simple.json
Explanation: Run prediction using curl
The "simple" model takes two tensors with shape [1,16] and does a couple of basic arithmetic operations.
End of explanation
with open('simple.json', 'r') as s:
data=s.read()
PREDICT_URL = "{}/projects/{}/models/{}/versions/{}:predict".format(ENDPOINT, PROJECT_ID, MODEL_NAME, VERSION_NAME)
HEADERS = {
'Content-Type': 'application/json',
'Authorization': 'Bearer {}'.format(os.popen('gcloud auth print-access-token').read().rstrip())
}
response = requests.request("POST", PREDICT_URL, headers=HEADERS, data = data).content.decode()
json.loads(response)
# this doesn't work: gcloud auth application-default print-access-token
with open('simple.json', 'r') as s:
data=s.read()
PREDICT_URL = "https://us-central1-ml.googleapis.com/v1/projects/tsaikevin-1236/models/simple/versions/v01:predict"
HEADERS = {
'Content-Type': 'application/json',
'Authorization': 'Bearer {}'.format(os.popen('gcloud auth application-default print-access-token').read().rstrip())
}
response = requests.request("POST", PREDICT_URL, headers=HEADERS, data = data).content.decode()
json.loads(response)
Explanation: Run prediction using the requests library
End of explanation
BINARY_MODEL_NAME='resnet50_netdef'
!gcloud ai-platform models create $BINARY_MODEL_NAME --region $REGION --enable-logging
Explanation: ResNet-50 model (binary data)
Create Model
End of explanation
BINARY_VERSION_NAME='v01'
Explanation: Create Model Version
End of explanation
import yaml
config_binary={'deploymentUri': MODEL_BUCKET+'/model_repository', \
'container': {'image': CAIP_IMAGE, \
'args': ['tritonserver', '--model-repository=$(AIP_STORAGE_URI)'], \
'env': [], \
'ports': {'containerPort': 8000}}, \
'routes': {'predict': '/v2/models/'+BINARY_MODEL_NAME+'/infer', \
'health': '/v2/models/'+BINARY_MODEL_NAME}, \
'machineType': 'n1-standard-4', 'autoScaling': {'minNodes': 1}}
with open(r'config_binary.yaml', 'w') as file:
config_binary = yaml.dump(config_binary, file)
!gcloud beta ai-platform versions create $BINARY_VERSION_NAME \
--model $BINARY_MODEL_NAME \
--accelerator count=1,type=nvidia-tesla-t4 \
--config config_binary.yaml \
--region=$REGION \
--async
Explanation: Command with YAML config file
End of explanation
!gcloud ai-platform versions describe $BINARY_VERSION_NAME --model=$BINARY_MODEL_NAME --region=$REGION
Explanation: To see details of the Model Version just created
End of explanation
!gcloud ai-platform versions list --model=$BINARY_MODEL_NAME --region=$REGION
Explanation: To list all Model Versions and their states in this Model
End of explanation
!curl --request POST $ENDPOINT/projects/$PROJECT_ID/models/$BINARY_MODEL_NAME/versions/$BINARY_VERSION_NAME:predict \
-k -H "Content-Type: application/octet-stream" \
-H "Authorization: Bearer `gcloud auth print-access-token`" \
-H "Inference-Header-Content-Length: 138" \
--data-binary @payload.dat
# this doesn't work: gcloud auth application-default print-access-token
!curl --request POST $ENDPOINT/projects/$PROJECT_ID/models/$BINARY_MODEL_NAME/versions/$BINARY_VERSION_NAME:predict \
-k -H "Content-Type: application/octet-stream" \
-H "Authorization: Bearer `gcloud auth application-default print-access-token`" \
-H "Inference-Header-Content-Length: 138" \
--data-binary @payload.dat
Explanation: Run prediction using curl
Recall the offset value calculated above. The binary case has an additional header:
Inference-Header-Content-Length: [offset]
End of explanation
with open('payload.dat', 'rb') as s:
data=s.read()
PREDICT_URL = "{}/projects/{}/models/{}/versions/{}:predict".format(ENDPOINT, PROJECT_ID, BINARY_MODEL_NAME, BINARY_VERSION_NAME)
HEADERS = {
'Content-Type': 'application/octet-stream',
'Inference-Header-Content-Length': '138',
'Authorization': 'Bearer {}'.format(os.popen('gcloud auth print-access-token').read().rstrip())
}
response = requests.request("POST", PREDICT_URL, headers=HEADERS, data = data).content.decode()
json.loads(response)
# this doesn't work: gcloud auth application-default print-access-token
with open('payload.dat', 'rb') as s:
data=s.read()
PREDICT_URL = "{}/projects/{}/models/{}/versions/{}:predict".format(ENDPOINT, PROJECT_ID, BINARY_MODEL_NAME, BINARY_VERSION_NAME)
HEADERS = {
'Content-Type': 'application/octet-stream',
'Inference-Header-Content-Length': '138',
'Authorization': 'Bearer {}'.format(os.popen('gcloud auth application-default print-access-token').read().rstrip())
}
response = requests.request("POST", PREDICT_URL, headers=HEADERS, data = data).content.decode()
json.loads(response)
Explanation: Run prediction using the requests library
End of explanation
!gcloud ai-platform versions delete $VERSION_NAME --model=$MODEL_NAME --region=$REGION --quiet
!gcloud ai-platform models delete $MODEL_NAME --region=$REGION --quiet
!gcloud ai-platform versions delete $BINARY_VERSION_NAME --model=$BINARY_MODEL_NAME --region=$REGION --quiet
!gcloud ai-platform models delete $BINARY_MODEL_NAME --region=$REGION --quiet
!gsutil -m rm -r -f $MODEL_BUCKET
!rm -rf model_repository triton-inference-server server *.yaml *.dat *.json
Explanation: Clean up
End of explanation |
13,927 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright (c) 2015, 2016
Sebastian Raschka
Li-Yi Wei
https
Step1: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see
Step2: Choosing a classification algorithm
There is no free lunch; different algorithms are suitable for different data and applications.
<img src="./images/01_09.png" width=100%>
First steps with scikit-learn
In the linear perceptron part, we wrote the models from ground up.
* too much coding
Existing machine learning libraries
* scikit-learn
* torch7, caffe, tensor-flow, theano, etc.
Scikit-learn
* will use for this course
* not as powerful as other deep learning libraries
* easier to use/install
* many library routines and data-sets to use, as exemplified below for main steps for a machine learning pipeline.
<a href="https
Step3: <img src = "./images/01_08.png">
Here, the third column represents the petal length, and the fourth column the petal width of the flower samples.
The classes are already converted to integer labels where 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
Data sets
Step4: Data scaling
It is better to scale the data so that different features/channels have similar mean/std.
Step5: Training a perceptron via scikit-learn
We learned and coded perceptron in chapter 2.
Here we use the scikit-learn library version.
The perceptron only handles 2 classes for now.
We will discuss how to handle $N > 2$ classes.
<img src="./images/02_04.png" width=80%>
Step6: Training a perceptron model using the standardized training data
Step7: Modeling class probabilities via logistic regression
$\mathbf{x}$
Step8: Logistic regression intuition and conditional probabilities
$\phi(z) = \frac{1}{1 + e^{-z}}$
Step9: Relationship to cross entropy
Optimizing for logistic regression
From
$$
\frac{\partial \log\left(x\right)}{\partial x} = \frac{1}{x}
$$
and chain rule
Step10: Tackling overfitting via regularization
Recall our general representation of our modeling objective
Step11: Reading
PML Chapter 3
Maximum margin classification with support vector machines
Another popular type of machine learning algorithm
* basic version for linear classification
* kernel version for non-linear classification
Linear classification
decision boundary
$
\mathbf{w}^T \mathbf{x}
\begin{cases}
\geq 0 \; class +1 \\
< 0 \; class -1
\end{cases}
$
similar to perceptron
based on different criteria
Perceptron
* minimize misclassification error
* more sensitive to outliers
* incremental learning (via SGD)
SVM
* maximize margins to nearest samples (called support vectors)
* more robust against outliers
* batch learning
<img src="./images/03_07.png" width=100%>
Maximum margin intuition
Maximize the margins of support vectors to the decision plane $\rightarrow$ more robust classification for future samples (that may lie close to the decision plane)
Let us start with the simple case of two classes with labels +1 and -1.
(We choose this particular combination of labeling for numerical simplicity, as follows.)
Let the training dataset be $\{\mathbf{x}^{(i)}, y^{(i)}\}$, $i=1$ to $N$.
The goal is to find hyper-plane parameters $\mathbf{w}$ and $w_0$ so that
$$y^{(i)} \left( \mathbf{w}^T\mathbf{x}^{(i)} + w_0\right) \geq 1, \; \forall i$$.
Note that $y^{(i)} = \pm1$ above.
<font color='blue'>
<ul>
<li> We use t or y for target labels depending on the context
<li> We separate out $w_0$ from the rest of
$
\mathbf{w} =
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$ for math derivation below
</ul>
</font>
Geometry perspective
For the purpose of optimization, we can cast the problem as maximize $\rho$ for
Step12: Alternative implementations in scikit-learn
Solving non-linear problems using a kernel SVM
SVM can be extended for non-linear classification
This is called kernel SVM
* will explain what kernel means
* and introduce kernel tricks
Step13: Using the kernel trick to find separating hyperplanes in higher dimensional space
For datasets that are not linearly separable, we can map them into a higher dimensional space and make them linearly separable.
Let $\phi$ be this mapping
Step14: This is the classification result using a rbf (radial basis function) kernel
Notice the non-linear decision boundaries
Step15: Types of kernels
A variety of kernel functions can be used.
The only requirement is that the kernel function behaves like inner product;
larger $K(\mathbf{x}, \mathbf{y})$ for more similar $\mathbf{x}$ and $\mathbf{y}$
Linear
$
K\left(\mathbf{x}, \mathbf{y}\right) = \mathbf{x}^T \mathbf{y}
$
Polynomials of degree $q$
$
K\left(\mathbf{x}, \mathbf{y}\right) =
(\mathbf{x}^T\mathbf{y} + 1)^q
$
Example for $d=2$ and $q=2$
$$
\begin{align}
K\left(\mathbf{x}, \mathbf{y}\right) &= \left( x_1y_1 + x_2y_2 + 1 \right)^2 \\
&= 1 + 2x_1y_1 + 2x_2y_2 + 2x_1x_2y_1y_2 + x_1^2y_1^2 + x_2^2y_2^2
\end{align}
$$
, which corresponds to the following kernel function
Step16: Reading
PML Chapter 3
IML Chapter 13-1 to 13.7
The kernel trick
Decision tree learning
Machine learning can be like black box/magic; the model/method works after tuning parameters and such, but how and why?
A decision tree shows you how it makes decisions, e.g. classification.
Example decision tree
analogous to flow charts for designing algorithms
every internal node can be based on some if-statement
automatically learned from data, not manually programmed by human
<img src="./images/03_15.png" width=80%>
Decision tree learning
Start with a single node that contains all data
Select a node and split it via some criterion to optimize some objective, usually information/impurity $I$
Repeat until convergence
Step17: Building a decision tree
A finite number of choices for split
Split only along boundaries of different classes
Exactly where? Maximize margins
Step18: Visualize the decision tree
Step19: Install Graphviz
<!--
<img src="./images/03_18.png" width=80%>
-->
dot -Tsvg tree.dot -o tree.svg
<img src="./images/tree.svg" width=80%>
Note
If you have scikit-learn 0.18 and pydotplus installed (e.g., you can install it via pip install pydotplus), you can also show the decision tree directly without creating a separate dot file as shown below. Also note that sklearn 0.18 offers a few additional options to make the decision tree visually more appealing.
Step20: Decision trees and SVM
SVM considers only margins to nearest samples to the decision boundary
Decision tree considers all samples
Case studies
Pruning a decision tree
Split until all leaf nodes are pure?
not always a good idea due to potential over-fitting
Simplify the tree via pruning
Pre-pruning
* stop splitting a node if the contained data size is below some threshold (e.g. 5% of all data)
Post-pruning
* build a tree first, and remove excessive branches
* reserve a pruning subset separate from the training data
* for each sub-tree (top-down or bottom-up), replace it with a leaf node labeled with the majority vote if doing so does not worsen performance on the pruning subset
Pre-pruning is simpler, post-pruning works better
Combining weak to strong learners via random forests
Forest = collection of trees
An example of ensemble learning (more about this later)
* combine multiple weak learners to build a strong learner
* better generalization, less overfitting
Less interpretable than a single tree
Random forest algorithm
Decide how many trees to build
To train each tree
Step21: Reading
PML Chapter 3
IML Chapter 9
Parametric versus non-parametric models
(fixed) number of parameters trained and retained
amount of data retained
trade-off between training and evaluation time
Example
Linear classifiers (SVM, perceptron)
* parameters
Step22: Too small k can cause overfitting (high variance).
Step23: Too large k can cause under-fitting (high bias).
How about using different $p$ values for Minkowski distance? | Python Code:
%load_ext watermark
%watermark -a '' -u -d -v -p numpy,pandas,matplotlib,sklearn
Explanation: Copyright (c) 2015, 2016
Sebastian Raschka
Li-Yi Wei
https://github.com/1iyiwei/pyml
MIT License
Python Machine Learning - Code Examples
Chapter 3 - A Tour of Machine Learning Classifiers
Logistic regression
Binary and multiple classes
Support vector machine
Kernel trick
Decision tree
Random forest for ensemble learning
K nearest neighbors
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
End of explanation
from IPython.display import Image
%matplotlib inline
# Added version check for recent scikit-learn 0.18 checks
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
Explanation: The use of watermark is optional. You can install this IPython extension via "pip install watermark". For more information, please see: https://github.com/rasbt/watermark.
Overview
Choosing a classification algorithm
First steps with scikit-learn
Training a perceptron via scikit-learn
Modeling class probabilities via logistic regression
Logistic regression intuition and conditional probabilities
Learning the weights of the logistic cost function
Handling multiple classes
Training a logistic regression model with scikit-learn
Tackling overfitting via regularization
Maximum margin classification with support vector machines
Maximum margin intuition
Dealing with the nonlinearly separable case using slack variables
Alternative implementations in scikit-learn
Solving nonlinear problems using a kernel SVM
Using the kernel trick to find separating hyperplanes in higher dimensional space
Decision tree learning
Maximizing information gain – getting the most bang for the buck
Building a decision tree
Combining weak to strong learners via random forests
K-nearest neighbors – a lazy learning algorithm
Summary
End of explanation
from sklearn import datasets
import numpy as np
iris = datasets.load_iris()
print('Data set size: ' + str(iris.data.shape))
X = iris.data[:, [2, 3]]
y = iris.target
print('Class labels:', np.unique(y))
import pandas as pd
df = pd.DataFrame(iris.data)
df.tail()
Explanation: Choosing a classification algorithm
There is no free lunch; different algorithms are suitable for different data and applications.
<img src="./images/01_09.png" width=100%>
First steps with scikit-learn
In the linear perceptron part, we wrote the models from ground up.
* too much coding
Existing machine learning libraries
* scikit-learn
* torch7, caffe, tensor-flow, theano, etc.
Scikit-learn
* will use for this course
* not as powerful as other deep learning libraries
* easier to use/install
* many library routines and data-sets to use, as exemplified below for main steps for a machine learning pipeline.
<a href="https://en.wikipedia.org/wiki/Iris_flower_data_set">Iris dataset</a>
Let's use this dataset for comparing machine learning methods
<table style="width:100% border=0">
<tr>
<td>
<img src ="https://upload.wikimedia.org/wikipedia/commons/thumb/5/56/Kosaciec_szczecinkowaty_Iris_setosa.jpg/220px-Kosaciec_szczecinkowaty_Iris_setosa.jpg">
</td>
<td>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/4/41/Iris_versicolor_3.jpg/220px-Iris_versicolor_3.jpg">
</td>
<td>
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/9/9f/Iris_virginica.jpg/220px-Iris_virginica.jpg">
</td>
</tr>
<tr style="text-align=center">
<td>
Setosa
</td>
<td>
Versicolor
</td>
<td>
Virginica
</td>
</tr>
</table>
Loading the Iris dataset from scikit-learn.
End of explanation
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
# splitting data into 70% training and 30% test data:
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
num_training = y_train.shape[0]
num_test = y_test.shape[0]
print('training: ' + str(num_training) + ', test: ' + str(num_test))
Explanation: <img src = "./images/01_08.png">
Here, the third column represents the petal length, and the fourth column the petal width of the flower samples.
The classes are already converted to integer labels where 0=Iris-Setosa, 1=Iris-Versicolor, 2=Iris-Virginica.
Data sets: training versus test
Use different data sets for training and testing a model (generalization)
End of explanation
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
Explanation: Data scaling
It is better to scale the data so that different features/channels have similar mean/std.
End of explanation
from sklearn.linear_model import Perceptron
ppn = Perceptron(n_iter=40, eta0=0.1, random_state=0)
_ = ppn.fit(X_train_std, y_train)
y_pred = ppn.predict(X_test_std)
print('Misclassified samples: %d out of %d' % ((y_test != y_pred).sum(), y_test.shape[0]))
from sklearn.metrics import accuracy_score
print('Accuracy: %.2f' % accuracy_score(y_test, y_pred))
from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import warnings
def versiontuple(v):
return tuple(map(int, (v.split("."))))
def plot_decision_regions(X, y, classifier, test_idx=None,
resolution=0.02, xlabel='', ylabel='', title=''):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
alpha=0.8, c=cmap(idx),
marker=markers[idx], label=cl)
# highlight test samples
if test_idx:
# plot all samples
if not versiontuple(np.__version__) >= versiontuple('1.9.0'):
X_test, y_test = X[list(test_idx), :], y[list(test_idx)]
warnings.warn('Please update to NumPy 1.9.0 or newer')
else:
X_test, y_test = X[test_idx, :], y[test_idx]
plt.scatter(X_test[:, 0],
X_test[:, 1],
c='',
alpha=1.0,
linewidths=1,
marker='o',
s=55, label='test set')
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Explanation: Training a perceptron via scikit-learn
We learned and coded perceptron in chapter 2.
Here we use the scikit-learn library version.
The perceptron only handles 2 classes for now.
We will discuss how to handle $N > 2$ classes.
<img src="./images/02_04.png" width=80%>
End of explanation
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
test_idx = range(X_train_std.shape[0], X_combined_std.shape[0])
plot_decision_regions(X=X_combined_std, y=y_combined,
classifier=ppn, test_idx=test_idx,
xlabel='petal length [standardized]',
ylabel='petal width [standardized]')
Explanation: Training a perceptron model using the standardized training data:
End of explanation
import matplotlib.pyplot as plt
import numpy as np
def sigmoid(z):
return 1.0 / (1.0 + np.exp(-z))
z = np.arange(-7, 7, 0.1)
phi_z = sigmoid(z)
plt.plot(z, phi_z)
plt.axvline(0.0, color='k')
plt.ylim(-0.1, 1.1)
plt.xlabel('z')
plt.ylabel('$\phi (z)$')
# y axis ticks and gridline
plt.yticks([0.0, 0.5, 1.0])
ax = plt.gca()
ax.yaxis.grid(True)
plt.tight_layout()
# plt.savefig('./figures/sigmoid.png', dpi=300)
plt.show()
Explanation: Modeling class probabilities via logistic regression
$\mathbf{x}$: input
$\mathbf{w}$: weights
$z = \mathbf{w}^T \mathbf{x}$
$\phi(z)$: transfer function
$y$: predicted class
Perceptron
$
y = \phi(z) =
\begin{cases}
1 \; z \geq 0 \
-1 \; z < 0
\end{cases}
$
Adaline
$
\begin{align}
\phi(z) &= z \
y &=
\begin{cases}
1 \; \phi(z) \geq 0 \
-1 \; \phi(z) < 0
\end{cases}
\end{align}
$
Logistic regression
$
\begin{align}
\phi(z) &= \frac{1}{1 + e^{-z}} \
y &=
\begin{cases}
1 \; \phi(z) \geq 0.5 \
0 \; \phi(z) < 0.5
\end{cases}
\end{align}
$
Note: this is actually classification (discrete output) not regression (continuous output); the naming is historical.
End of explanation
def cost_1(z):
return - np.log(sigmoid(z))
def cost_0(z):
return - np.log(1 - sigmoid(z))
z = np.arange(-10, 10, 0.1)
phi_z = sigmoid(z)
c1 = [cost_1(x) for x in z]
plt.plot(phi_z, c1, label='J(w) if t=1')
c0 = [cost_0(x) for x in z]
plt.plot(phi_z, c0, linestyle='--', label='J(w) if t=0')
plt.ylim(0.0, 5.1)
plt.xlim([0, 1])
plt.xlabel('$\phi$(z)')
plt.ylabel('J(w)')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/log_cost.png', dpi=300)
plt.show()
Explanation: Logistic regression intuition and conditional probabilities
$\phi(z) = \frac{1}{1 + e^{-z}}$: sigmoid function
$\phi(z) \in [0, 1]$, so can be interpreted as probability: $P(y = 1 \; | \; \mathbf{x} ; \mathbf{w}) = \phi(\mathbf{w}^T \mathbf{x})$
We can then choose class by interpreting the probability:
$
\begin{align}
y &=
\begin{cases}
1 \; \phi(z) \geq 0.5 \
0 \; \phi(z) < 0.5
\end{cases}
\end{align}
$
The probability information can be very useful for many applications
* knowing the confidence of a prediction in addition to the prediction itself
* e.g. weather forecast: tomorrow might rain versus tomorrow might rain with 70% chance
Perceptron:
<img src="./images/02_04.png" width=100%>
Adaline:
<img src="./images/02_09.png" width=100%>
Logistic regression:
<img src="./images/03_03.png" width=100%>
Learning the weights of the logistic cost function
$J(\mathbf{w})$: cost function to minimize with parameters $\mathbf{w}$
$z = \mathbf{w}^T \mathbf{x}$
For Adaline, we minimize sum-of-squared-error:
$$
J(\mathbf{w}) = \frac{1}{2} \sum_i \left( y^{(i)} - t^{(i)}\right)^2
= \frac{1}{2} \sum_i \left( \phi\left(z^{(i)}\right) - t^{(i)}\right)^2
$$
Maximum likelihood estimation (MLE)
For logistic regression, we take advantage of the probability interpretation to maximize the likelihood:
$$
L(\mathbf{w}) = P(t \; | \; \mathbf{x}; \mathbf{w}) = \prod_i P\left( t^{(i)} \; | \; \mathbf{x}^{(i)} ; \mathbf{w} \right) = \prod_i \phi\left(z^{(i)}\right)^{t^{(i)}} \left(1 - \phi\left(z^{(i)}\right)\right)^{1-t^{(i)}}
$$
Why?
$$
\begin{align}
\phi\left(z^{(i)}\right)^{t^{(i)}} \left(1 - \phi\left(z^{(i)}\right)\right)^{1-t^{(i)}} =
\begin{cases}
\phi\left(z^{(i)} \right) & \; if \; t^{(i)} = 1 \
1 - \phi\left(z^{(i)}\right) & \; if \; t^{(i)} = 0
\end{cases}
\end{align}
$$
This is equivalent to minimize the negative log likelihood:
$$
J(\mathbf{w})
= -\log L(\mathbf{w})
= \sum_i -t^{(i)}\log\left(\phi\left(z^{(i)}\right)\right) - \left(1 - t^{(i)}\right) \log\left(1 - \phi\left(z^{(i)}\right) \right)
$$
Converting prod to sum via log() is a common math trick for easier computation.
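As a quick numerical sanity check (a sketch with made-up numbers, not part of the original notes), the product form and the log-sum form agree:
import numpy as np

# made-up predicted probabilities phi(z) and 0/1 labels t
phi = np.array([0.9, 0.2, 0.7, 0.4])
t = np.array([1, 0, 1, 0])

likelihood = np.prod(phi**t * (1 - phi)**(1 - t))           # product form
nll = np.sum(-t * np.log(phi) - (1 - t) * np.log(1 - phi))  # negative log form

print(likelihood, np.exp(-nll))  # identical: L = exp(-J)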
End of explanation
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=lr, test_idx=test_idx,
xlabel = 'petal length [standardized]',
ylabel='petal width [standardized]')
if Version(sklearn_version) < '0.17':
print(lr.predict_proba(X_test_std[0, :]))
else:
print(lr.predict_proba(X_test_std[0, :].reshape(1, -1)))
Explanation: Relationship to cross entropy
Optimizing for logistic regression
From
$$
\frac{\partial \log\left(x\right)}{\partial x} = \frac{1}{x}
$$
and chain rule:
$$
\frac{\partial f\left(y\left(x\right)\right)}{\partial x} = \frac{\partial f(y)}{\partial y} \frac{\partial y}{\partial x}
$$
We know
$$
J(\mathbf{w})
= \sum_i -t^{(i)}\log\left(\phi\left(z^{(i)}\right)\right) - \left(1 - t^{(i)}\right) \log\left(1 - \phi\left(z^{(i)}\right) \right)
$$
$$
\frac{\partial J(\mathbf{w})}{\partial \mathbf{w}} =
\sum_i \left(\frac{-t^{(i)}}{\phi\left(z^{(i)}\right)} + \frac{1- t^{(i)}}{1 - \phi\left(z^{(i)}\right)} \right) \frac{\partial \phi \left(z^{(i)}\right)}{\partial \mathbf{w}}
$$
For sigmoid
$
\frac{\partial \phi(z)}{\partial z} = \phi(z)\left(1-\phi(z)\right)
$
Thus
$$
\begin{align}
\delta J =
\frac{\partial J(\mathbf{w})}{\partial \mathbf{w}} &=
\sum_i \left(\frac{-t^{(i)}}{\phi\left(z^{(i)}\right)} + \frac{1- t^{(i)}}{1 - \phi\left(z^{(i)}\right)} \right)
\phi\left(z^{(i)}\right)\left(1 - \phi\left(z^{(i)}\right) \right)
\frac{\partial z^{(i)}}{\partial \mathbf{w}} \
&=
\sum_i \left( -t^{(i)}\left(1 - \phi\left(z^{(i)}\right)\right) + \left(1-t^{(i)}\right)\phi\left(z^{(i)}\right) \right) \mathbf{x}^{(i)} \
&=
\sum_i \left( -t^{(i)} + \phi\left( z^{(i)} \right) \right) \mathbf{x}^{(i)}
\end{align}
$$
For gradient descent
$$
\begin{align}
\delta \mathbf{w} &= -\eta \delta J = \eta \sum_i \left( t^{(i)} - \phi\left( z^{(i)} \right) \right) \mathbf{x}^{(i)} \
\mathbf{w} & \leftarrow \mathbf{w} + \delta \mathbf{w}
\end{align}
$$
as related to what we did for optimizing in chapter 2.
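A minimal NumPy sketch of this update rule on a tiny made-up dataset (the variable names and numbers here are just for illustration):
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# toy data: a bias column of ones plus one feature, labels t in {0, 1}
X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.0]])
t = np.array([1, 0, 1])
w = np.zeros(X.shape[1])
eta = 0.1

for _ in range(100):
    phi = sigmoid(X.dot(w))        # phi(z) for every sample
    w += eta * X.T.dot(t - phi)    # eta * sum_i (t_i - phi_i) * x_i

print(w)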
Handling multiple classes
So far we have discussed only binary classifiers for 2 classes.
How about $K > 2$ classes, e.g. $K=3$ for the Iris dataset?
Multiple binary classifiers
One versus one
Build $\frac{K(K-1)}{2}$ classifiers,
each separating a different pair of classes $C_i$ and $C_j$
<a href="https://www.microsoft.com/en-us/research/people/cmbishop/" title="Figure 4.2(b), PRML, Bishop">
<img src="./images/one_vs_one.svg">
</a>
The green region is ambiguous: $C_1$, $C_2$, $C_3$
One versus rest (aka one versus all)
Build $K$ binary classifiers,
each separating class $C_k$ from the rest
<a href="https://www.microsoft.com/en-us/research/people/cmbishop/" title="Figure 4.2(a), PRML, Bishop">
<img src="./images/one_vs_rest.svg">
</a>
The green region is ambiguous: $C_1$, $C_2$
Ambiguity
Both one-versus-one and one-versus-all have ambiguous regions and may incur more complexity/computation.
Ambiguity can be resolved via tie breakers, e.g.:
* activation values
* majority voting
* more details: http://scikit-learn.org/stable/modules/multiclass.html
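As a rough illustration of the two strategies, scikit-learn ships wrappers for both; a sketch assuming the standardized Iris split defined above:
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import LinearSVC

ovr = OneVsRestClassifier(LinearSVC(random_state=0)).fit(X_train_std, y_train)
ovo = OneVsOneClassifier(LinearSVC(random_state=0)).fit(X_train_std, y_train)
print(ovr.predict(X_test_std[:5]))
print(ovo.predict(X_test_std[:5]))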
One multi-class classifier
Multiple activation functions $\phi_k, k=1, 2, ... K$ each with different parameters
$$
\phi_k\left(\mathbf{x}\right) = \phi\left(\mathbf{x}, \mathbf{w_k}\right) = \phi\left(\mathbf{w}_k^T \mathbf{x} \right)
$$
We can then choose the class based on maximum activation:
$$
y = argmax_k \; \phi_k\left( \mathbf{x} \right)
$$
Can also apply the above for multiple binary classifiers
Caveat
* need to assume the individual classifiers have compatible activations
* https://en.wikipedia.org/wiki/Multiclass_classification
Binary logistic regression:
$$
J(\mathbf{w})
= \sum_{i \in samples} -t^{(i)}\log\left(\phi\left(z^{(i)}\right)\right) - \left(1 - t^{(i)}\right) \log\left(1 - \phi\left(z^{(i)}\right) \right)
$$
Multi-class logistic regression:
$$
J(\mathbf{w})
= \sum_{i \in samples} \sum_{j \in classes} -t^{(i, j)}\log\left(\phi\left(z^{(i, j)}\right)\right) - \left(1 - t^{(i, j)}\right) \log\left(1 - \phi\left(z^{(i, j)}\right) \right)
$$
For $\phi \geq 0$, we can normalize for probabilistic interpretation:
$$
P\left(k \; | \; \mathbf{x} ; \mathbf{w}_k \right) =
\frac{\phi_k\left(\mathbf{x}\right)}{\sum_{m=1}^K \phi_m\left(\mathbf{x}\right) }
$$
Or use softmax (normalized exponential) for any activation $\phi$:
$$
P\left(k \; | \; \mathbf{x} ; \mathbf{w}_k \right) =
\frac{e^{\phi_k\left(\mathbf{x}\right)}}{\sum_{m=1}^K e^{\phi_m\left(\mathbf{x}\right)} }
$$
For example, if $\phi(z) = z$:
$$
P\left(k \; | \; \mathbf{x} ; \mathbf{w}_k \right) =
\frac{e^{\mathbf{w}_k^T\mathbf{x}}}{\sum_{m=1}^K e^{\mathbf{w}_m^T \mathbf{x}} }
$$
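A tiny sketch of the softmax step itself, with made-up activations:
import numpy as np

z = np.array([2.0, 1.0, 0.1])            # one activation per class
softmax = np.exp(z) / np.sum(np.exp(z))  # normalized exponential
print(softmax, softmax.sum())            # probabilities summing to 1
print(np.argmax(softmax))                # predicted class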
For training, the model can be optimized via gradient descent.
The likelihood function (to maximize):
$$
L(\mathbf{w})
= P(t \; | \; \mathbf{x}; \mathbf{w})
= \prod_i P\left( t^{(i)} \; | \; \mathbf{x}^{(i)} ; \mathbf{w} \right)
$$
The loss function (to minimize):
$$
J(\mathbf{w})
= -\log{L(\mathbf{w})}
= -\sum_i \log{P\left( t^{(i)} \; | \; \mathbf{x}^{(i)} ; \mathbf{w} \right)}
$$
Training a logistic regression model with scikit-learn
The code is quite simple.
End of explanation
weights, params = [], []
for c in np.arange(-5, 5):
lr = LogisticRegression(C=10**c, random_state=0)
lr.fit(X_train_std, y_train)
# coef_ has shape (n_classes, n_features)
# we visualize only class 1
weights.append(lr.coef_[1])
params.append(10**c)
weights = np.array(weights)
plt.plot(params, weights[:, 0],
label='petal length')
plt.plot(params, weights[:, 1], linestyle='--',
label='petal width')
plt.ylabel('weight coefficient')
plt.xlabel('C')
plt.legend(loc='upper left')
plt.xscale('log')
# plt.savefig('./figures/regression_path.png', dpi=300)
plt.show()
Explanation: Tackling overfitting via regularization
Recall our general representation of our modeling objective:
$$\Phi(\mathbf{X}, \mathbf{T}, \Theta) = L\left(\mathbf{X}, \mathbf{T}, \mathbf{Y}=f(\mathbf{X}, \Theta)\right) + P(\Theta)$$
* $L$ - loss/objective for data fitting
* $P$ - regularization to favor simple model
Need to balance between accuracy/bias (L) and complexity/variance (P)
If the model is too simple, it might be inaccurate (high bias)
If the model is too complex, it might over-fit and over-sensitive to training data (high variance)
A well-trained model should
* fit the training data well (low bias)
* remain stable with different training data for good generalization (to unseen future data; low variance)
The following illustrates bias and variance for a potentially non-linear model
<img src="./images/03_06.png" width=100%>
$L_2$ norm is a common form for regularization, e.g.
$
P = \lambda ||\mathbf{w}||^2
$
for the linear weights $\mathbf{w}$
$\lambda$ is a parameter to weigh between bias and variance
$C = \frac{1}{\lambda}$ for scikit-learn
End of explanation
from sklearn.svm import SVC
svm = SVC(kernel='linear', C=1.0, random_state=0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
Explanation: Reading
PML Chapter 3
Maximum margin classification with support vector machines
Another popular type of machine learning algorithm
* basic version for linear classification
* kernel version for non-linear classification
Linear classification
decision boundary
$
\mathbf{w}^T \mathbf{x}
\begin{cases}
\geq 0 \; class +1 \
< 0 \; class -1
\end{cases}
$
similar to perceptron
based on different criteria
Perceptron
* minimize misclassification error
* more sensitive to outliers
* incremental learning (via SGD)
SVM
* maximize margins to nearest samples (called support vectors)
* more robust against outliers
* batch learning
<img src="./images/03_07.png" width=100%>
Maximum margin intuition
Maximize the margins of support vectors to the decision plane $\rightarrow$ more robust classification for future samples (that may lie close to the decision plane)
Let us start with the simple case of two classes with labels +1 and -1.
(We choose this particular combination of labeling for numerical simplicity, as follows.)
Let the training dataset be ${\mathbf{x}^{(i)}, y^{(i)}}$, $i=1$ to $N$.
The goal is to find hyper-plane parameters $\mathbf{w}$ and $w_0$ so that
$$y^{(i)} \left( \mathbf{w}^T\mathbf{x}^{(i)} + w_0\right) \geq 1, \; \forall i$$.
Note that $y^{(i)} = \pm1$ above.
<font color='blue'>
<ul>
<li> We use t or y for target labels depending on the context
<li> We separate out $w_0$ from the rest of
$
\mathbf{w} =
\begin{bmatrix}
w_1 \\
w_2 \\
\vdots \\
w_n
\end{bmatrix}
$ for math derivation below
</ul>
</font>
Geometry perspective
For the purpose of optimization, we can cast the problem as maximize $\rho$ for:
$$\frac{y^{(i)} \left( \mathbf{w}^T\mathbf{x}^{(i)} + w_0\right)}{||\mathbf{w}||} \geq \rho, \; \forall i$$
; note that the left-hand side can be interpreted as the distance from $\mathbf{x}^{(i)}$ to the hyper-plane.
Scaling
Note that the above equation remains invariant if we multiply $||\mathbf{w}||$ and $w_0$ by any non-zero scalar.
To eliminate this ambiguity, we can fix $\rho ||\mathbf{w}|| = 1$ and minimize $||\mathbf{w}||$, i.e.:
min $\frac{1}{2} ||\mathbf{w}||^2$ subject to $y^{(i)}\left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) \geq 1, \; \forall i$
Optimization
We can use <a href="https://en.wikipedia.org/wiki/Lagrange_multiplier">Lagrangian multipliers</a> $\alpha^{(i)}$ for this constrained optimization problem:
$$
\begin{align}
L(\mathbf{w}, w_0, \alpha)
&=
\frac{1}{2} ||\mathbf{w}||^2 - \sum_i \alpha^{(i)} \left( y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) -1 \right)
\
&=
\frac{1}{2} ||\mathbf{w}||^2 - \sum_i \alpha^{(i)} y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) + \sum_i \alpha^{(i)}
\end{align}
$$
<!--
(The last term above is for $\alpha^{(i)} \geq 0$.)
-->
With some calculus/algebraic manipulations:
$$\frac{\partial L}{\partial \mathbf{w}} = 0 \Rightarrow \mathbf{w} = \sum_i \alpha^{(i)} y^{(i)} \mathbf{x}^{(i)}$$
$$\frac{\partial L}{\partial w_0} = 0 \Rightarrow \sum_i \alpha^{(i)} y^{(i)} = 0$$
Plug the above two into $L$ above, we have:
$$
\begin{align}
L(\mathbf{w}, w_0, \alpha) &= \frac{1}{2} \mathbf{w}^T \mathbf{w} - \mathbf{w}^T \sum_i \alpha^{(i)}y^{(i)}\mathbf{x}^{(i)} - w_0 \sum_i \alpha^{(i)} y^{(i)} + \sum_i \alpha^{(i)} \
&= -\frac{1}{2} \mathbf{w}^T \mathbf{w} + \sum_i \alpha^{(i)} \
&= -\frac{1}{2} \sum_i \sum_j \alpha^{(i)} \alpha^{(j)} y^{(i)} y^{(j)} \left( \mathbf{x}^{(i)}\right)^T \mathbf{x}^{(j)} + \sum_i \alpha^{(i)}
\end{align}
$$
, which can be maximized, via quadratic optimization with $\alpha^{(i)}$ only, subject to the constraints: $\sum_i \alpha^{(i)} y^{(i)} = 0$ and $\alpha^{(i)} \geq 0, \; \forall i$
Note that $y^{(i)} = \pm 1$.
Once we solve ${ \alpha^{(i)} }$ we will see that most of them are $0$ with a few $> 0$.
The $>0$ ones correspond to samples that lie on the decision boundaries and are thus called support vectors:
$$y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) = 1$$
from which we can calculate $w_0$.
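scikit-learn exposes these quantities after fitting; a short sketch, assuming the linear svm fitted on the standardized Iris data above:
# a sketch: peek at the support vectors of the linear SVC fitted above
print('support vectors per class:', svm.n_support_)
print('first support vector:', svm.support_vectors_[0])
print('dual coefficients (alpha * y), shape:', svm.dual_coef_.shape)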
Dealing with the nonlinearly separable case using slack variables
Soft margin classification
Some datasets are not linearly separable
Avoid thin margins for linearly separable cases
* bias variance tradeoff
For datasets that are not linearly separable, we can introduce slack variables ${\xi^{(i)}}$ as follows:
$$y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) \geq 1 - \xi^{(i)}, \; \forall i$$
If $\xi^{(i)} = 0$, it is just like the original case without slack variables.
If $0 < \xi^{(i)} <1$, $\mathbf{x}^{(i)}$ is correctly classified but lies within the margin.
If $\xi^{(i)} \geq 1$, $\mathbf{x}^{(i)}$ is mis-classified.
For optimization, the goal is to minimize
$$\frac{1}{2} ||\mathbf{w}||^2 + C \sum_i \xi^{(i)}$$
, where $C$ is the strength of the penalty factor (like in regularization).
<img src = "./images/03_08.png" width=80%>
Using the Lagrangian multipliers ${\alpha^{(i)}, \mu^{(i)} }$ with constraints we have:
$$L = \frac{1}{2} ||\mathbf{w}||^2 + C \sum_i \xi^{(i)} - \sum_i \alpha^{(i)} \left( y^{(i)} \left( \mathbf{w}^T \mathbf{x}^{(i)} + w_0\right) - 1 + \xi^{(i)}\right) - \sum_i \mu^{(i)} \xi^{(i)}$$
, which can be solved via a similar process as in the original case without slack variables.
Coding with SVM via scikit learn is simple
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('3liCbRZPrZA')
YouTubeVideo('9NrALgHFwTo')
Explanation: Alternative implementations in scikit-learn
Solving non-linear problems using a kernel SVM
SVM can be extended for non-linear classification
This is called kernel SVM
* will explain what kernel means
* and introduce kernel tricks :-)
Intuition
The following 2D circularly distributed data sets are not linearly separable.
However, we can elevate them to a higher dimensional space for linear separable:
$
\phi(x_1, x_2) = (x_1, x_2, x_1^2 + x_2^2)
$
,
where $\phi$ is the mapping function.
<img src="./images/03_11.png" width=90%>
Animation visualization
https://youtu.be/3liCbRZPrZA
https://youtu.be/9NrALgHFwTo
End of explanation
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
X_xor = np.random.randn(200, 2)
y_xor = np.logical_xor(X_xor[:, 0] > 0,
X_xor[:, 1] > 0)
y_xor = np.where(y_xor, 1, -1)
plt.scatter(X_xor[y_xor == 1, 0],
X_xor[y_xor == 1, 1],
c='b', marker='x',
label='1')
plt.scatter(X_xor[y_xor == -1, 0],
X_xor[y_xor == -1, 1],
c='r',
marker='s',
label='-1')
plt.xlim([-3, 3])
plt.ylim([-3, 3])
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/xor.png', dpi=300)
plt.show()
Explanation: Using the kernel trick to find separating hyperplanes in higher dimensional space
For datasets that are not linearly separable, we can map them into a higher dimensional space and make them linearly separable.
Let $\phi$ be this mapping:
$\mathbf{z} = \phi(\mathbf{x})$
And we perform the linear decision in the $\mathbf{z}$ instead of the original $\mathbf{x}$ space:
$$y^{(i)} \left( \mathbf{w}^T \mathbf{z}^{(i)} + w_0\right) \geq 1 - \xi^{(i)}$$
Following similar Lagrangian multiplier optimization as above, we eventually want to optimize:
$$
\begin{align}
L &= -\frac{1}{2} \sum_i \sum_j \alpha^{(i)} \alpha^{(j)} y^{(i)} y^{(j)} \left(\mathbf{z}^{(i)}\right)^T \mathbf{z}^{(j)} + \sum_i \alpha^{(i)} \
&= -\frac{1}{2} \sum_i \sum_j \alpha^{(i)} \alpha^{(j)} y^{(i)} y^{(j)} \phi\left(\mathbf{x}^{(i)}\right)^T \phi\left(\mathbf{x}^{(j)}\right) + \sum_i \alpha^{(i)}
\end{align}
$$
The key idea behind kernel trick, and kernel machines in general, is to represent the high dimensional dot product by a kernel function:
$$K\left(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}\right) = \phi\left(\mathbf{x}^{(i)}\right)^T \phi\left(\mathbf{x}^{(j)}\right)$$
Intuitively, the data points become more likely to be linearly separable in a higher dimensional space.
Kernel trick for evaluation
Recall from part of our derivation above:
$$
\frac{\partial L}{\partial \mathbf{w}} = 0 \Rightarrow \mathbf{w} = \sum_i \alpha^{(i)} y^{(i)} \mathbf{z}^{(i)}
$$
Which allows us to compute the discriminant via kernel trick as well:
$$
\begin{align}
\mathbf{w}^T \mathbf{z}
&=
\sum_i \alpha^{(i)} y^{(i)} \left(\mathbf{z}^{(i)}\right)^T \mathbf{z}
\
&=
\sum_i \alpha^{(i)} y^{(i)} \phi\left(\mathbf{x}^{(i)}\right)^T \phi(\mathbf{x})
\
&=
\sum_i \alpha^{(i)} y^{(i)} K\left(\mathbf{x}^{(i)}, \mathbf{x}\right)
\end{align}
$$
Non-linear classification example
<table>
<tr> <td>x <td> y <td>xor(x, y) </tr>
<tr> <td> 0 <td> 0 <td> 0 </tr>
<tr> <td> 0 <td> 1 <td> 1 </tr>
<tr> <td> 1 <td> 0 <td> 1 </tr>
<tr> <td> 1 <td> 1 <td> 0 </tr>
</table>
Xor is not linearly separable
* math proof left as exercise
Random point sets classified via XOR based on the signs of 2D coordinates:
End of explanation
svm = SVC(kernel='rbf', random_state=0, gamma=0.10, C=10.0)
svm.fit(X_xor, y_xor)
plot_decision_regions(X_xor, y_xor,
classifier=svm)
Explanation: This is the classification result using a rbf (radial basis function) kernel
Notice the non-linear decision boundaries
End of explanation
from sklearn.svm import SVC
svm = SVC(kernel='rbf', random_state=0, gamma=0.2, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=test_idx,
xlabel = 'petal length [standardized]',
ylabel = 'petal width [standardized]')
svm = SVC(kernel='rbf', random_state=0, gamma=100.0, C=1.0)
svm.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=svm, test_idx=test_idx,
xlabel = 'petal length [standardized]',
ylabel='petal width [standardized]')
Explanation: Types of kernels
A variety of kernel functions can be used.
The only requirement is that the kernel function behaves like inner product;
larger $K(\mathbf{x}, \mathbf{y})$ for more similar $\mathbf{x}$ and $\mathbf{y}$
Linear
$
K\left(\mathbf{x}, \mathbf{y}\right) = \mathbf{x}^T \mathbf{y}
$
Polynomials of degree $q$
$
K\left(\mathbf{x}, \mathbf{y}\right) =
(\mathbf{x}^T\mathbf{y} + 1)^q
$
Example for $d=2$ and $q=2$
$$
\begin{align}
K\left(\mathbf{x}, \mathbf{y}\right) &= \left( x_1y_1 + x_2y_2 + 1 \right)^2 \
&= 1 + 2x_1y_1 + 2x_2y_2 + 2x_1x_2y_1y_2 + x_1^2y_1^2 + x_2^2y_2^2
\end{align}
$$
, which corresponds to the following kernel function:
$$
\phi(x, y) = \left[1, \sqrt{2}x, \sqrt{2}y, \sqrt{2}xy, x^2, y^2 \right]^T
$$
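A quick numerical check of this identity with two made-up 2D points (a sketch, not from the text):
import numpy as np

def phi_poly2(v):
    x, y = v
    return np.array([1, np.sqrt(2)*x, np.sqrt(2)*y, np.sqrt(2)*x*y, x**2, y**2])

a = np.array([0.3, -1.0])
b = np.array([2.0, 0.5])
print((a.dot(b) + 1)**2)               # kernel value: 1.21
print(phi_poly2(a).dot(phi_poly2(b)))  # explicit feature map gives the same 1.21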
Radial basis function (RBF)
Scalar variance:
$$
K\left(\mathbf{x}, \mathbf{y} \right) = e^{-\frac{\left|\mathbf{x} - \mathbf{y}\right|^2}{2s^2}}
$$
General co-variance matrix:
$$
K\left(\mathbf{x}, \mathbf{y} \right) = e^{-\frac{1}{2} \left(\mathbf{x}-\mathbf{y}\right)^T \mathbf{S}^{-1} \left(\mathbf{x} - \mathbf{y}\right)}
$$
General distance function $D\left(\mathbf{x}, \mathbf{y}\right)$:
$$
K\left(\mathbf{x}, \mathbf{y} \right) = e^{-\frac{D\left(\mathbf{x}, \mathbf{y} \right)}{2s^2}}
$$
RBF essentially projects to an infinite dimensional space.
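A small sketch using scikit-learn's pairwise helper (here $\gamma = \frac{1}{2s^2}$ in the notation above):
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 4.0]])
K = rbf_kernel(pts, gamma=0.5)   # K_ij = exp(-gamma * ||x_i - x_j||^2)
print(K)                         # close points give values near 1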
Sigmoid
$$
K\left(\mathbf{x}, \mathbf{y} \right) = \tanh\left(2\mathbf{x}^T\mathbf{y} + 1\right)
$$
Kernel SVM for the Iris dataset
Let's apply RBF kernel
The kernel width is controlled by a gamma $\gamma$ parameter for kernel influence
$
K\left(\mathbf{x}, \mathbf{y} \right) =
e^{-\gamma D\left(\mathbf{x}, \mathbf{y} \right)}
$
and $C$ for regularization
End of explanation
import matplotlib.pyplot as plt
import numpy as np
def gini(p):
return p * (1 - p) + (1 - p) * (1 - (1 - p))
def entropy(p):
return - p * np.log2(p) - (1 - p) * np.log2((1 - p))
def error(p):
return 1 - np.max([p, 1 - p])
x = np.arange(0.0, 1.0, 0.01)
ent = [entropy(p) if p != 0 else None for p in x]
sc_ent = [e * 0.5 if e else None for e in ent]
err = [error(i) for i in x]
fig = plt.figure()
ax = plt.subplot(111)
for i, lab, ls, c, in zip([ent, sc_ent, gini(x), err],
['Entropy', 'Entropy (scaled)',
'Gini Impurity', 'Misclassification Error'],
['-', '-', '--', '-.'],
['black', 'lightgray', 'red', 'green', 'cyan']):
line = ax.plot(x, i, label=lab, linestyle=ls, lw=2, color=c)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, 1.15),
ncol=3, fancybox=True, shadow=False)
ax.axhline(y=0.5, linewidth=1, color='k', linestyle='--')
ax.axhline(y=1.0, linewidth=1, color='k', linestyle='--')
plt.ylim([0, 1.1])
plt.xlabel('p(i=1)')
plt.ylabel('Impurity Index')
plt.tight_layout()
#plt.savefig('./figures/impurity.png', dpi=300, bbox_inches='tight')
plt.show()
Explanation: Reading
PML Chapter 3
IML Chapter 13-1 to 13.7
The kernel trick
Decision tree learning
Machine learning can be like black box/magic; the model/method works after tuning parameters and such, but how and why?
Decision tree shows you how it makes decision, e.g. classification.
Example decision tree
analogous to flow charts for designing algorithms
every internal node can be based on some if-statement
automatically learned from data, not manually programmed by human
<img src="./images/03_15.png" width=80%>
Decision tree learning
Start with a single node that contains all data
Select a node and split it via some criterion to optimize some objective, usually information/impurity $I$
Repeat until convergence:
good enough classification measured by $I$;
complex enough model (overfitting);
Each leaf node belongs to one class
Multiple leaf nodes can be of the same class
Each leaf node can have misclassified samples - majority voting
<a href="http://www.cmpe.boun.edu.tr/~ethem/i2ml2e/" title = "Figure 9.1"><img src="./images/fig9p1_iml.svg" width=80%></a>
* usually split along one dimension/feature
* a finite number of choices from the boundaries of sample classes
Maximizing information gain - getting the most bang for the buck
$I(D)$ information/impurity for a tree node with dataset $D$
Maximize information gain $IG$ for splitting each (parent) node $D_p$ into $m$ child nodes $j$:
$$
IG = I(D_p) - \sum_{j=1}^m \frac{N_j}{N_p} I(D_j)
$$
Usually $m=2$ for simplicity (binary split)
Commonly used impurity measures $I$
$p(i|t)$ - probability/proportion of dataset in node $t$ belongs to class $i$
Entropy
$$
I_H(t) = - \sum_{i=1}^c p(i|t) \log_2 p(i|t)
$$
$0$ if all samples belong to the same class
$1$ if uniform distribution
$
0.5 = p(0|t) = p(1|t)
$
<a href="https://en.wikipedia.org/wiki/Entropy_(information_theory)">Entropy (information theory)</a>
Random variable $X$ with probability mass/density function $P(X)$
Information content
$
I(X) = -\log_b\left(P(X)\right)
$
Entropy is the expectation of information
$$
H(X) = E(I(X)) = E(-\log_b(P(X)))
$$
log base $b$ can be $2$, $e$, $10$
Continuous $X$:
$$
H(X) = \int P(x) I(x) \; dx = -\int P(x) \log_b P(x) \;dx
$$
Discrete $X$:
$$
H(X) = \sum_i P(x_i) I(x_i) = -\sum_i P(x_i) \log_b P(x_i)
$$
$-\log_b P(x)$ - number of bits needed to represent $P(x)$
* the rarer the event $\rightarrow$ the less $P(x)$ $\rightarrow$ the more bits
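Putting entropy to work, a sketch of the information gain for one hypothetical binary split (the counts are invented):
import numpy as np

def entropy_counts(counts):
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# parent node: 40 samples of class 0 and 40 of class 1
# candidate split -> left child (30, 10), right child (10, 30)
I_parent = entropy_counts([40, 40])
I_left = entropy_counts([30, 10])
I_right = entropy_counts([10, 30])
IG = I_parent - (40.0 / 80) * I_left - (40.0 / 80) * I_right
print(IG)  # about 0.19 bits gained by this split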
Gini index
Minimize expected value of misclassification
$$
I_G(t) = \sum_{i=1}^c p(i|t) \left( 1 - p(i|t) \right) = 1 - \sum_{i=1}^c p(i|t)^2
$$
$p(i|t)$ - probability of class $i$
$1-p(i|t)$ - probability of misclassification, i.e. $t$ is not class $i$
Similar to entropy
* expected value of information: $-\log_2 p(i|t)$
* information and mis-classification probability: both larger for lower $p(i|t)$
Classification error
$$
I_e(t) = 1 - \max_i p(i|t)
$$
$
argmax_i \; p(i|t)
$
as the class label for node $t$
Compare different information measures
Entropy and Gini are probabilisitic
* not assuming the label of the node (decided later after more splitting)
Classification error is deterministic
* assumes the majority class would be the label
Entropy and Gini index are similar, and tend to behave better than classification error
* curves below via a 2-class case
* example in the PML textbook
End of explanation
from sklearn.tree import DecisionTreeClassifier
tree = DecisionTreeClassifier(criterion='entropy', max_depth=3, random_state=0)
tree.fit(X_train, y_train)
X_combined = np.vstack((X_train, X_test))
y_combined = np.hstack((y_train, y_test))
plot_decision_regions(X_combined, y_combined,
classifier=tree, test_idx=test_idx,
xlabel='petal length [cm]',ylabel='petal width [cm]')
Explanation: Building a decision tree
A finite number of choices for split
Split only along boundaries of different classes
Exactly where? Maximize margins
End of explanation
from sklearn.tree import export_graphviz
export_graphviz(tree,
out_file='tree.dot',
feature_names=['petal length', 'petal width'])
Explanation: Visualize the decision tree
End of explanation
#import pydotplus
from IPython.display import Image
from IPython.display import display
if False and Version(sklearn_version) >= '0.18':
try:
import pydotplus
dot_data = export_graphviz(
tree,
out_file=None,
# the parameters below are new in sklearn 0.18
feature_names=['petal length', 'petal width'],
class_names=['setosa', 'versicolor', 'virginica'],
filled=True,
rounded=True)
graph = pydotplus.graph_from_dot_data(dot_data)
display(Image(graph.create_png()))
except ImportError:
print('pydotplus is not installed.')
Explanation: Install Graphviz
<!--
<img src="./images/03_18.png" width=80%>
-->
dot -Tsvg tree.dot -o tree.svg
<img src="./images/tree.svg" width=80%>
Note
If you have scikit-learn 0.18 and pydotplus installed (e.g., you can install it via pip install pydotplus), you can also show the decision tree directly without creating a separate dot file as shown below. Also note that sklearn 0.18 offers a few additional options to make the decision tree visually more appealing.
End of explanation
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(criterion='entropy',
n_estimators=10,
random_state=1,
n_jobs=2)
forest.fit(X_train, y_train)
plot_decision_regions(X_combined, y_combined,
classifier=forest, test_idx=test_idx,
xlabel = 'petal length [cm]', ylabel = 'petal width [cm]')
Explanation: Decisions trees and SVM
SVM considers only margins to nearest samples to the decision boundary
Decision tree considers all samples
Case studies
Pruning a decision tree
Split until all leaf nodes are pure?
not always a good idea due to potential over-fitting
Simplify the tree via pruning
Pre-pruning
* stop splitting a node if the contained data size is below some threshold (e.g. 5% of all data)
Post-pruning
* build a tree first, and remove excessive branches
* reserve a pruning subset separate from the training data
* for each sub-tree (top-down or bottom-up), replace it with a leaf node labeled with the majority vote if not worsen performance for the pruning subset
Pre-pruning is simpler, post-pruning works better
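In scikit-learn, pre-pruning corresponds to constructor arguments such as max_depth and min_samples_leaf; a sketch re-using the Iris training split from above (the parameter values are arbitrary):
from sklearn.tree import DecisionTreeClassifier

pruned_tree = DecisionTreeClassifier(criterion='entropy',
                                     max_depth=3,
                                     min_samples_leaf=5,   # pre-pruning: no tiny leaves
                                     random_state=0)
pruned_tree.fit(X_train, y_train)
print(pruned_tree.score(X_test, y_test))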
Combining weak to strong learners via random forests
Forest = collection of trees
An example of ensemble learning (more about this later)
* combine multiple weak learners to build a strong learner
* better generalization, less overfitting
Less interpretable than a single tree
Random forest algorithm
Decide how many trees to build
To train each tree:
* Draw a random subset of samples (e.g. random sample with replacement of all samples)
* Split each node via a random subset of features (e.g. $d = \sqrt{m}$ of the original dimensionality)
(randomization is a key)
Majority vote from all trees
Code example
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=1, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
Explanation: Reading
PML Chapter 3
IML Chapter 9
Parametric versus non-parametric models
(fixed) number of parameters trained and retained
amount of data retained
trade-off between training and evaluation time
Example
Linear classifiers (SVM, perceptron)
* parameters: $\mathbf{w}$
* data throw away after training
* extreme end of parametric
Kernel SVM
* depends on the type of kernel used (exercise)
Decision tree
* parameters: decision boundaries at all nodes
* number of parameters vary depending on the training data
* data throw away after training
* less parametric than SVM
Personal take:
Parametric versus non-parametric is more of a continuous spectrum than a binary decision.
Many algorithms lie somewhere in between.
K-nearest neighbors - a lazy learning algorithm
KNN keeps all data and has no trained parameters
* extreme end of non-parametric
How it works:
* Choose the number $k$ of neighbors and a distance measure
* For each sample to classify, find the $k$ nearest neighbors in the dataset
* Assign class label via majority vote
$k$ is a hyper-parameter (picked by human), not a (ordinary) parameter (trained from data by machines)
<img src="./images/03_20.png" width=75%>
Pro:
* zero training time
* very simple
Con:
* need to keep all data
* evaluation time linearly proportional to data size (acceleration possible though, e.g. kd-tree)
* vulnerable to curse of dimensionality
Practical usage
Minkowski distance of order $p$:
$
d(\mathbf{x}, \mathbf{y}) = \sqrt[p]{\sum_k |\mathbf{x}_k - \mathbf{y}_k|^p}
$
* $p = 2$, Euclidean distance
* $p = 1$, Manhattan distance
<a href="https://en.wikipedia.org/wiki/Minkowski_distance">
<img src="https://upload.wikimedia.org/wikipedia/commons/0/00/2D_unit_balls.svg">
</a>
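A small sketch of how the order $p$ changes the distance between two fixed points:
import numpy as np

def minkowski(x, y, p):
    return np.sum(np.abs(x - y)**p)**(1.0 / p)

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])
print(minkowski(a, b, 1))   # 7.0 (Manhattan)
print(minkowski(a, b, 2))   # 5.0 (Euclidean)
print(minkowski(a, b, 10))  # ~4.02, approaching max(3, 4) as p grows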
Number of neighbors $k$ trade-off between bias and variance
* too small $k$ - low bias, high variance
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=100, p=2, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
Explanation: Too small k can cause overfitting (high variance).
End of explanation
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=1, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=5, p=10, metric='minkowski')
knn.fit(X_train_std, y_train)
plot_decision_regions(X_combined_std, y_combined,
classifier=knn, test_idx=test_idx,
xlabel='petal length [standardized]', ylabel='petal width [standardized]')
Explanation: Too large k can cause under-fitting (high bias).
How about using different $p$ values for Minkowski distance?
End of explanation |
13,928 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The Astro 300 Python programming style guide
This notebook is a summary of the python programming style we will use in
Astro 300.
Half of your grade, on each assignment,
will be based on how well your code follows these guidelines.
These guidelines are a small subset of the
PEP 8
programming style used by python developers.
|--------|---------|---------|---------|---------|---------|---------|---------
Variable Names
use only lowercase letters [a-z] and underscores [ _ ]
no blank spaces between the characters
avoid using a single character as a variable name
The purpose of the variable should be obvious from its name
An Example
Step1: |--------|---------|---------|---------|---------|---------|---------|---------
Function Names
Use the same guidelines as for variable names
Step2: |--------|---------|---------|---------|---------|---------|---------|---------
Line Length
Limit all lines to a maximum of 79 characters
The piece that separates the sections in this notebook is 79 characters long
If you have to scroll horizontally - your line is too long
The preferred way of wrapping long lines is by using Python's implied line continuation
inside parentheses (), brackets [] and braces {}.
Long lines can be broken over multiple lines by wrapping expressions in parentheses.
An Example | Python Code:
# Good - Full credit
mass_particle = 10.0
velocity_particle = 20.0
kinetic_energy = 0.5 * mass_particle * (velocity_particle ** 2)
print(kinetic_energy)
# Bad - Half credit at best
x = 10.0
y = 20.0
print(0.5*x*y**2)
# Really bad - no credit
print(0.5*10*20**2)
Explanation: The Astro 300 Python programming style guide
This notebook is a summary of the python programming style we will use in
Astro 300.
Half of your grade, on each assignment,
will be based on how well your code follows these guidelines.
These guidelines are a small subset of the
PEP 8
programming style used by python developers.
|--------|---------|---------|---------|---------|---------|---------|---------
Variable Names
use only lowercase letters [a-z] and underscores [ _ ]
no blank spaces between the characters
avoid using a single character as a variable name
The purpose of the variable should be obvious from its name
An Example:
You want to find the kinetic energy of a particle with mass 10 and velocity 20:
$$ \mathrm{Kinetic\ Energy}\ = \frac{1}{2}\ mv^2 $$
End of explanation
# Good
def find_kinetic_energy(mass_part, velocity_part):
kinetic_energy = 0.5 * mass_part * velocity_part ** 2
return(kinetic_energy)
# Bad
def KE(x,y):
return(0.5*x*y**2)
# Good
mass_particle = 10.0
velocity_particle = 20.0
kinetic_energy = find_kinetic_energy(mass_particle,velocity_particle)
print(kinetic_energy)
# Bad
print(KE(10,20))
Explanation: |--------|---------|---------|---------|---------|---------|---------|---------
Function Names
Use the same guidelines as for variable names
End of explanation
# Some variables to use in an equation
gross_wages = 50000
taxable_interest = 1000
dividends = 50
qualified_dividends = 10
ira_deduction = 2000
student_loan_interest = 3000
medical_deduction = 1000
# Good
income = (gross_wages
+ taxable_interest
+ (dividends - qualified_dividends)
- ira_deduction
- student_loan_interest
- medical_deduction)
print(income)
# Bad
income = gross_wages + taxable_interest + (dividends - qualified_dividends) - ira_deduction - student_loan_interest - medical_deduction
print(income)
Explanation: |--------|---------|---------|---------|---------|---------|---------|---------
Line Length
Limit all lines to a maximum of 79 characters
The piece that seperates the sections in this notebook is 79 characters long
If you have to scroll horizontally - your line is too long
The preferred way of wrapping long lines is by using Python's implied line continuation
inside parentheses (), brackets [] and braces {}.
Long lines can be broken over multiple lines by wrapping expressions in parentheses.
An Example:
End of explanation |
13,929 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This details how the requires decorator can be used.
Step1: The function takes an arbitrary number of strings that describe the dependencies introduced by the class or function.
Step2: So, neat. What if we didn't have the module needed?
Step3: So, if the requirements aren't met, your original function is replaced with a new function.
it looks kind of like this
Step4: By default, the function is replaced with a verbose version. If you pass verbose=False, then the function gets replaced with a nonprinting version.
Step5: The cool thing is, this works on class definitions as well | Python Code:
from opt import requires
Explanation: This details how the requires decorator can be used.
End of explanation
@requires('pandas')
def test():
import pandas
print('yay pandas version {}'.format(pandas.__version__))
test()
Explanation: The function takes an arbitrary number of strings that describe the dependencies introduced by the class or function.
End of explanation
@requires("notarealmodule")
def test2():
print("you shouldn't see this")
test2()
Explanation: So, neat. What if we didn't have the module needed?
End of explanation
def passer():
if verbose:
missing = [arg for i,arg in enumerate(args) if not available[i]]
print("missing dependencies: {d}".format(d=missing))
print("not running {}".format(function.__name__))
else:
pass
Explanation: So, if the requirements aren't met, your original function is replaced with a new function.
it looks kind of like this:
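For reference, a decorator with this behaviour could be sketched roughly like the following (a guess at the general idea, not the actual opt.requires implementation):
import importlib

def requires_sketch(*modules, verbose=True):
    # check availability of each named module up front
    available = []
    for name in modules:
        try:
            importlib.import_module(name)
            available.append(True)
        except ImportError:
            available.append(False)

    def decorator(function):
        if all(available):
            return function          # everything importable: leave it alone
        missing = [m for m, ok in zip(modules, available) if not ok]
        def passer(*args, **kwargs):
            if verbose:
                print("missing dependencies: {d}".format(d=missing))
                print("not running {}".format(function.__name__))
        return passer
    return decorator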
End of explanation
@requires("notarealmodule", verbose=False)
def test3():
print("you shouldn't see this, either")
test3()
Explanation: By default, the function is replaced with a verbose version. If you pass verbose=False, then the function gets replaced with a nonprinting version.
End of explanation
@requires("notarealmodule")
class OLS_mock(object):
def __init__(self, *args, **kwargs):
for arg in args:
print(arg)
OLS_mock(1,2,3,4,5, w='Tom')
@requires("pymc3")
class BayesianHLM(object):
def __init__(self, *args, **kwargs):
for arg in args[0:]:
print(arg)
BayesianHLM(1,2,3,4,5, w='Tom')
Explanation: The cool thing is, this works on class definitions as well:
End of explanation |
13,930 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Processing time with pandas
We've touched a little bit on time so far - mostly how tragic it is to parse - but pandas can do some neat things with it once you figure out how it works.
Let's open up some data from the Census bureau - we're going to use New Home Sales. The data is formatted... oddly, so I've done the importing and joining for you below.
Step1: Changing the index to the datetime
Normally the index of the column is just a number.
Step2: It's the column on the far left - 0, 1, 2, 3, 4... boring and useless! If we replace the index with the datetime, though, we can start to have some fun
Step3: Selecting specific(-ish) dates via the index
Now that our index is a datetime, we can select date ranges much more easily.
Step4: List slices with datetimes
We can also use list slicing with datetimes!
Just for review, you can use
Step5: Instead of using boring ol' numbers, we can use dates instead.
Step6: Info on our time series
If you try to .plot, pandas will automatically use the index (the date) as the x axis for you.
Step7: Hmmm, looks like something might have happened at some point. Maybe we want to see some numbers instead of a graph? To do aggregate statistics on time series in pandas we use a method called .resample(), and we're going to tell it to group the data by year.
Step8: That still looks like too much data! What about every decade?
Step9: Cyclical data
It seems like winter might be a time where not very many houses are sold. Let's see if that's true!
Step10: More details
You can also use max and min and all of your other aggregate friends with .resample. For example, what's the largest number of houses sold in a given year?
data_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='data')
data_df.head()
categories_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='categories')
data_types_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='data_types')
error_types_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='error_types')
geo_levels_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='geo_levels')
periods_df = pd.read_excel("RESSALES-mf.xlsx", sheetname='periods')
categories_df.head(2)
# it auto-merges cat_idx in our original dataframe with cat_idx in categories_df
# it auto-merges dt_idx in our original dataframe with dt_idx in data_types_df
# it auto-merges geo_idx in our original dataframe with geo_idx in geo_levels_df
# it auto-merges per_idx in our original dataframe with per_idx in periods_df
df = data_df.merge(categories_df).merge(data_types_df).merge(geo_levels_df).merge(periods_df)
# We only want to look at the total number of homes sold across entire the united states
df = df[(df['cat_code'] == 'SOLD') & (df['geo_code'] == 'US') & (df['dt_code'] == 'TOTAL')]
# We don't merge error_types_df because all of the errors are the same
df['et_idx'].value_counts()
df.head(2)
# Now let's remove the join columns to keep things clean
df = df.drop(['per_idx', 'cat_idx', 'dt_idx', 'et_idx', 'geo_idx'], axis=1)
df.head()
# At least we can see 'per_name' (period name) is already a datetime!
df.info()
Explanation: Processing time with pandas
We've touched a little bit on time so far - mostly how tragic it is to parse - but pandas can do some neat things with it once you figure out how it works.
Let's open up some data from the Census bureau - we're going to use New Home Sales. The data is formatted... oddly, so I've done the importing and joining for you below.
End of explanation
df.head(3)
Explanation: Changing the index to the datetime
Normally the index of the column is just a number.
End of explanation
# First we move it over into the index column
df.index = df['per_name']
df.head(2)
# Then we delete the per_name column because we don't need it any more...
del df['per_name']
df.head(2)
Explanation: It's the column on the far left - 0, 1, 2, 3, 4... boring and useless! If we replace the index with the datetime, though, we can start to have some fun
End of explanation
# Everything in March, 1963
df['1963-3']
# Everything in 2010
df['2010']
Explanation: Selecting specific(-ish) dates via the index
Now that our index is a datetime, we can select date ranges much more easily.
End of explanation
# Make our list of fruits
ranked_fruits = ('banana', 'orange', 'apple', 'blueberries', 'strawberries')
# Start from the beginning, get the first two
ranked_fruits[:2]
# Start from two, get up until the fourth element
ranked_fruits[2:4]
# Starting from the third element, get all the rest
ranked_fruits[3:]
Explanation: List slices with datetimes
We can also use list slicing with datetimes!
Just for review, you can use : to only select certain parts of a list:
End of explanation
# Everything after 2001
df["2001":]
# Everything between June 1990 and March 1995
df["1990-06":"1995-03"]
Explanation: Instead of using boring ol' numbers, we can use dates instead.
End of explanation
df.plot(y='val')
Explanation: Info on our time series
If you try to .plot, pandas will automatically use the index (the date) as the x axis for you.
End of explanation
# http://stackoverflow.com/a/17001474 gives us a list of what we can pass to 'resample'
df.resample('A').median()
Explanation: Hmmm, looks like something might have happened at some point. Maybe we want to see some numbers instead of a graph? To do aggregate statistics on time series in pandas we use a method called .resample(), and we're going to tell it to group the data by year.
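The same idea works for other frequencies and aggregations, e.g. (a quick sketch) quarterly totals of the val column:
# a sketch: 'Q' groups by quarter, .sum() totals houses sold per quarter
df['val'].resample('Q').sum().head()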
End of explanation
# If 'A' is every year, '5A' is every 5 years
df.resample('5A').median()
# We can graph these!
df.plot(y='val', label="Monthly")
df.resample('A').median().plot(y='val', label="Annual")
df.resample('10A').median().plot(y='val', label="Decade")
# We can graph these ALL ON THE SAME PLOT!
# we store the 'ax' from the first .plot and pass it to the others
ax = df.plot(y='val', label="Monthly")
df.resample('A').median().plot(y='val', ax=ax, label="Annual")
df.resample('10A').median().plot(y='val', ax=ax, label="Decade")
# Which year had the worst month?
df.resample('A')['val'].min()
Explanation: That still looks like too much data! What about every decade?
End of explanation
# Group by the month, check the median
df.groupby(by=df.index.month).median()
# Group by the month, check the median, plot the results
df.groupby(by=df.index.month).median().plot(y='val')
# Group by the month, check the median, plot the results
ax = df.groupby(by=df.index.month).median().plot(y='val', legend=False)
ax.set_xticks([1,2,3,4,5,6,7,8,9,10,11, 12])
ax.set_xticklabels(['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'])
ax.set_ylabel("Houses sold (in thousands)")
ax.set_title("House sales by month, 1963-2016")
Explanation: Cyclical data
It seems like winter might be a time where not very many houses are sold. Let's see if that's true!
End of explanation
df.resample('A')['val'].max().plot()
# The fewest?
df.resample('A')['val'].min().plot()
# We now know we can look at the range
ax = df.resample('A')['val'].median().plot()
df.resample('A')['val'].max().plot(ax=ax)
df.resample('A')['val'].min().plot(ax=ax)
# We now know we can look at the range IN AN EVEN COOLER WAY
ax = df.resample('A')['val'].median().plot()
x_values = df.resample('A')['val'].median().index
min_values = df.resample('A')['val'].min()
max_values = df.resample('A')['val'].max()
ax.fill_between(x_values, min_values, max_values, alpha=0.5)
ax.set_ylim([0,130])
ax.set_ylabel("Houses sold (in thousands)")
ax.set_xlabel("Year")
ax.set_title("The Housing Bubble")
Explanation: More details
You can also use max and min and all of your other aggregate friends with .resample. For example, what's the largest number of houses sold in a given year?
End of explanation |
13,931 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Neural style transfer
Author
Step1: Let's take a look at our base (content) image and our style reference image
Step2: Image preprocessing / deprocessing utilities
Step3: Compute the style transfer loss
First, we need to define 4 utility functions
Step4: Next, let's create a feature extraction model that retrieves the intermediate activations
of VGG19 (as a dict, by name).
Step5: Finally, here's the code that computes the style transfer loss.
Step6: Add a tf.function decorator to loss & gradient computation
To compile it, and thus make it fast.
Step7: The training loop
Repeatedly run vanilla gradient descent steps to minimize the loss, and save the
resulting image every 100 iterations.
We decay the learning rate by 0.96 every 100 steps.
Step8: After 4000 iterations, you get the following result | Python Code:
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.applications import vgg19
base_image_path = keras.utils.get_file("paris.jpg", "https://i.imgur.com/F28w3Ac.jpg")
style_reference_image_path = keras.utils.get_file(
"starry_night.jpg", "https://i.imgur.com/9ooB60I.jpg"
)
result_prefix = "paris_generated"
# Weights of the different loss components
total_variation_weight = 1e-6
style_weight = 1e-6
content_weight = 2.5e-8
# Dimensions of the generated picture.
width, height = keras.preprocessing.image.load_img(base_image_path).size
img_nrows = 400
img_ncols = int(width * img_nrows / height)
Explanation: Neural style transfer
Author: fchollet<br>
Date created: 2016/01/11<br>
Last modified: 2020/05/02<br>
Description: Transfering the style of a reference image to target image using gradient descent.
Introduction
Style transfer consists in generating an image
with the same "content" as a base image, but with the
"style" of a different picture (typically artistic).
This is achieved through the optimization of a loss function
that has 3 components: "style loss", "content loss",
and "total variation loss":
The total variation loss imposes local spatial continuity between
the pixels of the combination image, giving it visual coherence.
The style loss is where the deep learning kicks in --that one is defined
using a deep convolutional neural network. Precisely, it consists in a sum of
L2 distances between the Gram matrices of the representations of
the base image and the style reference image, extracted from
different layers of a convnet (trained on ImageNet). The general idea
is to capture color/texture information at different spatial
scales (fairly large scales --defined by the depth of the layer considered).
The content loss is a L2 distance between the features of the base
image (extracted from a deep layer) and the features of the combination image,
keeping the generated image close enough to the original one.
Reference: A Neural Algorithm of Artistic Style
Setup
End of explanation
from IPython.display import Image, display
display(Image(base_image_path))
display(Image(style_reference_image_path))
Explanation: Let's take a look at our base (content) image and our style reference image
End of explanation
def preprocess_image(image_path):
# Util function to open, resize and format pictures into appropriate tensors
img = keras.preprocessing.image.load_img(
image_path, target_size=(img_nrows, img_ncols)
)
img = keras.preprocessing.image.img_to_array(img)
img = np.expand_dims(img, axis=0)
img = vgg19.preprocess_input(img)
return tf.convert_to_tensor(img)
def deprocess_image(x):
# Util function to convert a tensor into a valid image
x = x.reshape((img_nrows, img_ncols, 3))
# Remove zero-center by mean pixel
x[:, :, 0] += 103.939
x[:, :, 1] += 116.779
x[:, :, 2] += 123.68
# 'BGR'->'RGB'
x = x[:, :, ::-1]
x = np.clip(x, 0, 255).astype("uint8")
return x
Explanation: Image preprocessing / deprocessing utilities
End of explanation
# The gram matrix of an image tensor (feature-wise outer product)
def gram_matrix(x):
x = tf.transpose(x, (2, 0, 1))
features = tf.reshape(x, (tf.shape(x)[0], -1))
gram = tf.matmul(features, tf.transpose(features))
return gram
# The "style loss" is designed to maintain
# the style of the reference image in the generated image.
# It is based on the gram matrices (which capture style) of
# feature maps from the style reference image
# and from the generated image
def style_loss(style, combination):
S = gram_matrix(style)
C = gram_matrix(combination)
channels = 3
size = img_nrows * img_ncols
return tf.reduce_sum(tf.square(S - C)) / (4.0 * (channels ** 2) * (size ** 2))
# An auxiliary loss function
# designed to maintain the "content" of the
# base image in the generated image
def content_loss(base, combination):
return tf.reduce_sum(tf.square(combination - base))
# The 3rd loss function, total variation loss,
# designed to keep the generated image locally coherent
def total_variation_loss(x):
a = tf.square(
x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, 1:, : img_ncols - 1, :]
)
b = tf.square(
x[:, : img_nrows - 1, : img_ncols - 1, :] - x[:, : img_nrows - 1, 1:, :]
)
return tf.reduce_sum(tf.pow(a + b, 1.25))
Explanation: Compute the style transfer loss
First, we need to define 4 utility functions:
gram_matrix (used to compute the style loss)
The style_loss function, which keeps the generated image close to the local textures
of the style reference image
The content_loss function, which keeps the high-level representation of the
generated image close to that of the base image
The total_variation_loss function, a regularization loss which keeps the generated
image locally-coherent
End of explanation
# Build a VGG19 model loaded with pre-trained ImageNet weights
model = vgg19.VGG19(weights="imagenet", include_top=False)
# Get the symbolic outputs of each "key" layer (we gave them unique names).
outputs_dict = dict([(layer.name, layer.output) for layer in model.layers])
# Set up a model that returns the activation values for every layer in
# VGG19 (as a dict).
feature_extractor = keras.Model(inputs=model.inputs, outputs=outputs_dict)
Explanation: Next, let's create a feature extraction model that retrieves the intermediate activations
of VGG19 (as a dict, by name).
End of explanation
# List of layers to use for the style loss.
style_layer_names = [
"block1_conv1",
"block2_conv1",
"block3_conv1",
"block4_conv1",
"block5_conv1",
]
# The layer to use for the content loss.
content_layer_name = "block5_conv2"
def compute_loss(combination_image, base_image, style_reference_image):
input_tensor = tf.concat(
[base_image, style_reference_image, combination_image], axis=0
)
features = feature_extractor(input_tensor)
# Initialize the loss
loss = tf.zeros(shape=())
# Add content loss
layer_features = features[content_layer_name]
base_image_features = layer_features[0, :, :, :]
combination_features = layer_features[2, :, :, :]
loss = loss + content_weight * content_loss(
base_image_features, combination_features
)
# Add style loss
for layer_name in style_layer_names:
layer_features = features[layer_name]
style_reference_features = layer_features[1, :, :, :]
combination_features = layer_features[2, :, :, :]
sl = style_loss(style_reference_features, combination_features)
loss += (style_weight / len(style_layer_names)) * sl
# Add total variation loss
loss += total_variation_weight * total_variation_loss(combination_image)
return loss
Explanation: Finally, here's the code that computes the style transfer loss.
End of explanation
@tf.function
def compute_loss_and_grads(combination_image, base_image, style_reference_image):
with tf.GradientTape() as tape:
loss = compute_loss(combination_image, base_image, style_reference_image)
grads = tape.gradient(loss, combination_image)
return loss, grads
Explanation: Add a tf.function decorator to loss & gradient computation
To compile it, and thus make it fast.
End of explanation
optimizer = keras.optimizers.SGD(
keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate=100.0, decay_steps=100, decay_rate=0.96
)
)
base_image = preprocess_image(base_image_path)
style_reference_image = preprocess_image(style_reference_image_path)
combination_image = tf.Variable(preprocess_image(base_image_path))
iterations = 4000
for i in range(1, iterations + 1):
loss, grads = compute_loss_and_grads(
combination_image, base_image, style_reference_image
)
optimizer.apply_gradients([(grads, combination_image)])
if i % 100 == 0:
print("Iteration %d: loss=%.2f" % (i, loss))
img = deprocess_image(combination_image.numpy())
fname = result_prefix + "_at_iteration_%d.png" % i
keras.preprocessing.image.save_img(fname, img)
Explanation: The training loop
Repeatedly run vanilla gradient descent steps to minimize the loss, and save the
resulting image every 100 iterations.
We decay the learning rate by 0.96 every 100 steps.
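Concretely, the ExponentialDecay schedule defined above evaluates to 100.0 * 0.96 ** (step / 100); a small sketch (assuming the same keras import used throughout this example) shows the effective rate at a few steps:
```python
schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=100.0, decay_steps=100, decay_rate=0.96
)
for step in (0, 100, 1000, 4000):
    print(step, float(schedule(step)))  # 100.0, 96.0, ~66.5, ~19.5
```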
End of explanation
display(Image(result_prefix + "_at_iteration_4000.png"))
Explanation: After 4000 iterations, you get the following result:
End of explanation |
13,932 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Blocks, indices, and a first exercise
We will talk about the indexing normally used in CUDA C code, which will also help us understand blocks better.
Without further ado, we present the index idx which, by the way, will be used in the exercise below.
C
idx = blockIdx.x * blockDim.x + threadIdx.x ;
The components of idx are clear enough. blockIdx.x refers to the index of the block within the grid, while threadIdx.x refers to the index of a thread within the block that contains it.
blockDim.x, on the other hand, refers to the dimension of the block in the x direction. Simple, isn't it?
We are assuming we only have one dimension, but all of this is analogous in two or three dimensions.
To illustrate the magic of this index, suppose we have a one-dimensional vector with 16 entries, divided into 4 blocks of 4 entries each.
Then blockDim.x = 4 and it is a fixed value.
blockIdx.x and threadIdx.x run from 0 to 3.
In the first block (blockIdx.x = 0), idx = 0 * 4 + threadIdx.x will go from 0 to 3.
In the second (blockIdx.x = 1), idx = 4 + threadIdx.x will start at 4 and end at 7.
And so on, up to 15, thereby covering all 16 entries of our vector.
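Here is a small illustration in plain Python (not CUDA, and not part of the original exercise) that enumerates the same idx values:
```python
# Reproduce idx = blockIdx.x * blockDim.x + threadIdx.x for 4 blocks of 4 threads.
blockDim_x = 4
for blockIdx_x in range(4):
    idxs = [blockIdx_x * blockDim_x + threadIdx_x for threadIdx_x in range(4)]
    print("block", blockIdx_x, "->", idxs)
# block 0 -> [0, 1, 2, 3], block 1 -> [4, 5, 6, 7], ..., block 3 -> [12, 13, 14, 15]
```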
Now then: where can we set the dimensions of the blocks and grids?
CUDA C provides variables of type dim3 with which we can set the dimensions of these objects very easily. Their syntax is very simple
Step1: The final result should be equal to 333300.0. If your result is correct, congratulations! You have written your first CUDA C code correctly.
If you cannot get the correct result and are tired of trying, remember that you can contact us at our email addresses (which are in the first notebook).
%%writefile Programas/Mul_vectores.cu
#include <stdio.h>
__global__ void multiplicar_vectores(float * device_A, float * device_B, float * device_C, int TAMANIO)
{
// Fill in the kernel by writing the element-wise multiplication of vectors A and B
}
int main( int argc, char * argv[])
{
int TAMANIO = 1000 ;
float h_A[TAMANIO] ;
float h_B[TAMANIO] ;
float h_C[TAMANIO] ;
float prueba ;
for (int i = 0; i < TAMANIO; i++)
{
h_A[i] = i ;
h_B[i] = i + 1 ;
}
// Write the device memory allocation lines below
// Write the lines that copy memory from the CPU to the GPU below
// Complete the block and grid dimensions
dim3 dimBlock( ) ;
dim3 dimGrid( ) ;
// Complete the kernel launch
multiplicar_vectores<<< dimGrid, dimBlock >>>( ) ;
// Copy the memory from the GPU back to the CPU
// The lines that free the device memory are ALREADY WRITTEN below
cudaFree(d_A) ;
cudaFree(d_B) ;
cudaFree(d_C) ;
// Below, a small check so you can tell whether your result is correct
prueba = 0. ;
for (int i = 0; i < TAMANIO; i++)
{
prueba += h_C[i] ;
}
printf("%f\n", prueba) ;
return 0;
}
!nvcc -o Programas/Mul_vectores Programas/Mul_vectores.cu
Explanation: Blocks, indices, and a first exercise
We will talk about the indexing normally used in CUDA C code, which will also help us understand blocks better.
Without further ado, we present the index idx which, by the way, will be used in the exercise below.
C
idx = blockIdx.x * blockDim.x + threadIdx.x ;
The components of idx are clear enough. blockIdx.x refers to the index of the block within the grid, while threadIdx.x refers to the index of a thread within the block that contains it.
blockDim.x, on the other hand, refers to the dimension of the block in the x direction. Simple, isn't it?
We are assuming we only have one dimension, but all of this is analogous in two or three dimensions.
To illustrate the magic of this index, suppose we have a one-dimensional vector with 16 entries, divided into 4 blocks of 4 entries each.
Then blockDim.x = 4 and it is a fixed value.
blockIdx.x and threadIdx.x run from 0 to 3.
In the first block (blockIdx.x = 0), idx = 0 * 4 + threadIdx.x will go from 0 to 3.
In the second (blockIdx.x = 1), idx = 4 + threadIdx.x will start at 4 and end at 7.
And so on, up to 15, thereby covering all 16 entries of our vector.
Now then: where can we set the dimensions of the blocks and grids?
CUDA C provides variables of type dim3 with which we can set the dimensions of these objects very easily. Their syntax is very simple:
```C
dim3 dimBlock(4, 1, 1) ;
dim3 dimGrid(4, 1, 1) ;
```
The variables dimBlock and dimGrid were written for the example above. The syntax is, as you can guess, the usual (x, y, z), so the two variables just written describe a one-dimensional grid (in the x direction) with 4 blocks. Each of those blocks is also one-dimensional in the x direction and contains 4 threads.
Exercise: Vector Multiplication
We will now see how to write our first CUDA C code for a basic exercise: element-wise vector multiplication.
The idea is this: below we provide part of the code, and the reader only has to fill in the missing parts. Practically every element needed to complete this first code appears in the previous notebook, so do not hesitate to use it as a reference.
It is that easy.
Note: the parts left to fill in are
+ the kernel
+ the memory allocation, copies, and deallocation
+ the block and grid dimensions
End of explanation
suma = 0.
for i in xrange(100):
suma += i*(i+1)
suma
# These four lines do the same thing as all the code we wrote above
# Don't be discouraged :( , the real results will come very soon.
Explanation: The final result should be equal to 333300.0. If your result is correct, congratulations! You have written your first CUDA C code correctly.
If you cannot get the correct result and are tired of trying, remember that you can contact us at our email addresses (which are in the first notebook).
End of explanation |
13,933 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Atoms
Atoms are defined as a list of strings here.
Step1: Generate some random coordinates
Making up some random $x,y,z$ coordinates
Step2: Molecule
Here we've defined a molecule as a dictionary of atom objects.
We'll loop over the number of atoms, and initialise each dictionary key with an atom.
In an actual run, we'll need to parse information from an xyz file somehow instead.
Step3: Calculate the Coulomb matrix
Call our function, using the Molecule dictionary as input to calculate a Coulomb matrix. | Python Code:
Atoms = ["C", "B", "H"]
Explanation: Atoms
Atoms are defined as a list of strings here.
End of explanation
import numpy as np  # needed for np.random.rand below
Coordinates = []
for AtomNumber in range(len(Atoms)):
Coordinates.append(np.random.rand(3))
Explanation: Generate some random coordinates
Making up some random $x,y,z$ coordinates
End of explanation
Molecule = dict()
for Index, Atom in enumerate(Atoms):
Molecule[Index] = __Atom__(Atom, Coordinates[Index])
Explanation: Molecule
Here we've defined a molecule as a dictionary of atom objects.
We'll loop over the number of atoms, and initialise each dictionary key with an atom.
In an actual run, we'll need to parse information from an xyz file somehow instead.
End of explanation
CM = CalculateCoulombMatrix(Molecule)
def ReadXYZ(File):
f = open(File, "r")
fc = f.readlines()
f.close()
NAtoms = len(fc)
Coordinates = []
for line in range(NAtoms):
Coordinates.append(fc[line].split())
return Coordinates
Explanation: Calculate the Coulomb matrix
Call our function, using the Molecule dictionary as input to calculate a Coulomb matrix.
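CalculateCoulombMatrix itself is defined in an earlier cell that is not shown here. For orientation only, a minimal sketch of the standard Coulomb matrix it presumably computes (M_ii = 0.5 * Z_i**2.4 on the diagonal, M_ij = Z_i * Z_j / |R_i - R_j| off the diagonal) could look like this; the nuclear-charge lookup and function name below are assumptions, not the notebook's code:
```python
import numpy as np

NUCLEAR_CHARGE = {"H": 1, "B": 5, "C": 6}  # assumed lookup for the atoms used above

def coulomb_matrix_sketch(symbols, coordinates):
    n = len(symbols)
    M = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            zi, zj = NUCLEAR_CHARGE[symbols[i]], NUCLEAR_CHARGE[symbols[j]]
            if i == j:
                M[i, j] = 0.5 * zi ** 2.4          # self-interaction term
            else:
                rij = np.linalg.norm(np.asarray(coordinates[i]) - np.asarray(coordinates[j]))
                M[i, j] = zi * zj / rij            # pairwise Coulomb repulsion
    return M
```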
End of explanation |
13,934 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Welcome to your assignment! Do the questions and write the answers in the code cells provided below.
Here's a bit of setup for you.
Step1: Question 1
Step2: Question 2
Step3: Question 3
Step4: Question 4 | Python Code:
%pylab inline
import numpy as np
Explanation: Welcome to your assignment! Do the questions and write the answers in the code cells provided below.
Here's a bit of setup for you.
End of explanation
a =
# test_shape
assert a.shape == (100, 100), "Shape is wrong :("
Explanation: Question 1: Create a 100 x 100 numpy array filled with random floats and assign it to the variable a.
End of explanation
b =
# test_scalar_multiply
np.testing.assert_allclose(b, a*10)
Explanation: Question 2: multiply all values in a by 10 and assign the result to a variable b.
End of explanation
c = 10
Explanation: Question 3: plot b as a scatterplot.
Question 4: here is some code that should not be changed.
End of explanation
d = c * 10
# test d value
assert d == 100
# test that has a syntax error
assert # Syntax error: missing expression after assert
Explanation: Question 4: define a variable d that is equal to 10 times c.
End of explanation |
13,935 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fisher's Iris data set is a collection of measurements commonly used to discuss various example algorithms. It is popular due to the fact that it consists of multiple dimensions, a large enough set of samples to perform most basic statistics, and uses a set of measurements that is understandable by most people.
Here, I will use the iris data set to discuss some basic machine learning algorithms. I will begin with some visuals to help understand the data, then perform some supervised algorithms to better characterize the data. I will conclude with a demonstration of a Support Vector Machine (SVM) classifier.
I start with some standard imports, by loading the iris data and shaping it into a Pandas dataframe for better manipulation.
Step1: You should see that Species 0 (setosa) is quickly distinguishable from Species 1 (versicolor) and 2 (virginica).
I will now demonstrate how to calculate various descriptive statistics.
Before showing the code, I want to remind readers of the pitfall of relying entirely on descriptive statistics; Anscombe's quartet is a collection of 4 sets, each set consisting of eleven points. All of the sets have similar descriptive statistics but are visually very different.
Step4: We can from both the scatterplots and the hi/low plots that petal length is sufficient to discriminate Species 0 from the other two
I will conclude with demonstrating how to use SVM to predict Species. I start with some helper functions
Step5: And we can now instantiate and train a model | Python Code:
import matplotlib.pyplot as plt
from sklearn import datasets, svm
from sklearn.decomposition import PCA
import seaborn as sns
import pandas as pd
import numpy as np
# import some data to play with
iris = datasets.load_iris()
dfX = pd.DataFrame(iris.data,columns = ['sepal_length','sepal_width','petal_length','petal_width'])
dfY = pd.DataFrame(iris.target,columns=['species'])
dfX['species'] = dfY
print(dfX.head())
sns.pairplot(dfX, hue="species")
Explanation: Fisher's Iris data set is a collection of measurements commonly used to discuss various example algorithms. It is popular due to the fact that it consists of multiple dimensions, a large enough set of samples to perform most basic statistics, and uses a set of measurements that is understandable by most people.
Here, I will use the iris data set to discuss some basic machine learning algorithms. I will begin with some visuals to help understand the data, then perform some supervised algorithms to better characterize the data. I will conclude with a demonstration of a Support Vector Machine (SVM) classifier.
I start with some standard imports, by loading the iris data and shaping it into a Pandas dataframe for better manipulation.
End of explanation
#find and print mean, median, 95% intervals
print('mean')
print(dfX.groupby('species').mean())
print('median')
print(dfX.groupby('species').median())
print('two-σ interval')
dfX_high = dfX.groupby('species').mean() + 2*dfX.groupby('species').std()
dfX_low = dfX.groupby('species').mean() - 2*dfX.groupby('species').std()
df = pd.DataFrame()
for C in dfX_high.columns:
df[C + '_hilo'] = dfX_high[C].astype(str) +'_' + dfX_low[C].astype(str)
print(df)
Explanation: You should see that Species 0 (setosa) is quickly distinguishable from Species 1 (versicolor) and 2 (virginica).
I will now demonstrate how to calculate various descriptive statistics.
Before showing the code, I want to remind readers of the pitfall of relying entirely on descriptive statistics; Anscombe's quartet is a collection of 4 sets, each set consisting of eleven points. All of the sets have similar descriptive statistics but are visually very different.
End of explanation
def make_meshgrid(x, y, h=.02):
Create a mesh of points to plot in
Parameters
----------
x: data to base x-axis meshgrid on
y: data to base y-axis meshgrid on
h: stepsize for meshgrid, optional
Returns
-------
xx, yy : ndarray
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
Plot the decision boundaries for a classifier.
Parameters
----------
ax: matplotlib axes object
clf: a classifier
xx: meshgrid ndarray
yy: meshgrid ndarray
params: dictionary of params to pass to contourf, optional
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
out = ax.contourf(xx, yy, Z, **params)
return out
Explanation: We can see from both the scatterplots and the hi/low plots that petal length is sufficient to discriminate Species 0 from the other two
I will conclude with demonstrating how to use SVM to predict Species. I start with some helper functions
End of explanation
dfX = pd.DataFrame(iris.data,columns = ['sepal_length','sepal_width','petal_length','petal_width'])
X = iris.data[:, :2]
C = 1.0 # SVM regularization parameter
clf = svm.SVC(kernel='rbf', gamma=0.7, C=C)
clf = clf.fit(X, dfY.values.ravel())
title = 'SVC with RBF kernel'
fig, ax = plt.subplots()
X0, X1 = X[:, 0], X[:, 1]
xx, yy = make_meshgrid(X0, X1)
plot_contours(ax, clf, xx, yy,cmap=plt.cm.coolwarm, alpha=0.8)
ax.scatter(X0, X1, c=iris.target, cmap=plt.cm.coolwarm, s=20, edgecolors='k')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('Sepal length')
ax.set_ylabel('Sepal width')
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(title)
plt.show()
Explanation: And we can now instantiate and train a model
End of explanation |
13,936 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial Part 15
Step2: Make The Datasets
Because ScScore is trained on relative complexities, the X tensor in our dataset has 3 dimensions (sample_id, molecule_id, features). The molecule_id dimension takes the values 0 and 1, because each sample is a pair of molecules. The label is 1 if the zeroth molecule is more complex than the first molecule. The function create_dataset we introduce below pulls random pairs of SMILES strings out of a given list and ranks them according to this complexity measure.
In the real world you could use purchase cost, or the number of reaction steps required, as your complexity score.
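As a quick illustration of that layout (the array sizes here are made up for illustration; 1024 simply matches the fingerprint length used later in this tutorial):
```python
import numpy as np

X = np.zeros((5, 2, 1024))  # (sample_id, molecule_id, features): two fingerprints per sample
y = np.ones((5, 1))         # label 1 means molecule 0 of the pair is the more complex one
```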
Step3: With our complexity ranker in place we can now construct our dataset. Let's start by loading the molecules in the Tox21 dataset into memory. We split the dataset at this stage to ensure that the training and test set have non-overlapping sets of molecules.
Step4: We'll featurize all our molecules with the ECFP fingerprint with chirality (matching the source paper), and will then construct our pairwise dataset using the code from above.
Step5: Now that we have our dataset created, let's train a ScScoreModel on this dataset.
Step6: Model Performance
Let's evaluate how well the model does on our holdout molecules. The ScScores should track the length of the SMILES strings of never-before-seen molecules.
Step7: Let's now plot the length of the smiles string of the molecule against the SaScore using matplotlib. | Python Code:
%tensorflow_version 1.x
!curl -Lo deepchem_installer.py https://raw.githubusercontent.com/deepchem/deepchem/master/scripts/colab_install.py
import deepchem_installer
%time deepchem_installer.install(version='2.3.0')
import deepchem as dc
# Lets get some molecules to play with
from deepchem.molnet.load_function import tox21_datasets
tasks, datasets, transformers = tox21_datasets.load_tox21(featurizer='Raw', split=None, reload=False)
molecules = datasets[0].X
Explanation: Tutorial Part 15: Synthetic Feasibility
Synthetic feasibility is a problem when running large-scale enumerations. Often the enumerated molecules are very difficult to make and thus not worth inspecting, even if their other chemical properties look good in silico. This tutorial goes through how to train the ScScore model [1].
The idea of the model is to train on pairs of molecules where one molecule is "more complex" than the other. The neural network can then produce scores which attempt to preserve this pairwise ordering of molecules. The final result is a model which can give a relative complexity score for a molecule.
The paper trains on every reaction in Reaxys, declaring products more complex than reactants. Since this training set is prohibitively expensive, we will instead train on arbitrary molecules, declaring one more complex if its SMILES string is longer. In the real world you can use whatever measure of complexity makes sense for the project.
In this tutorial, we'll use the Tox21 dataset to train our simple synthetic feasibility model.
Colab
This tutorial and the rest in this sequence are designed to be done in Google colab. If you'd like to open this notebook in colab, you can use the following link.
Setup
We recommend you run this tutorial on Google colab. You'll need to run the following commands to set up your colab environment to run the notebook.
End of explanation
from rdkit import Chem
import random
from deepchem.feat import CircularFingerprint
import deepchem as dc
import numpy as np
def create_dataset(fingerprints, smiles_lens, ds_size=100000):
m1: list of np.Array
fingerprints for molecules
m2: list of int
length of a molecules SMILES string
returns:
dc.data.Dataset for input into ScScore Model
Dataset.X
shape is (sample_id, molecule_id, features)
Dataset.y
shape is (sample_id,)
values is 1 if the 0th index molecule is more complex
0 if the 1st index molecule is more complex
X, y = [], []
all_data = list(zip(fingerprints, smiles_lens))
while len(y) < ds_size:
i1 = random.randrange(0, len(smiles_lens))
i2 = random.randrange(0, len(smiles_lens))
m1 = all_data[i1]
m2 = all_data[i2]
if m1[1] == m2[1]:
continue
if m1[1] > m2[1]:
y.append(1.0)
else:
y.append(0.0)
X.append([m1[0], m2[0]])
return dc.data.NumpyDataset(np.array(X), np.expand_dims(np.array(y), axis=1))
Explanation: Make The Datasets
Because ScScore is trained on relative complexities, the X tensor in our dataset has 3 dimensions (sample_id, molecule_id, features). The molecule_id dimension takes the values 0 and 1, because each sample is a pair of molecules. The label is 1 if the zeroth molecule is more complex than the first molecule. The function create_dataset we introduce below pulls random pairs of SMILES strings out of a given list and ranks them according to this complexity measure.
In the real world you could use purchase cost, or the number of reaction steps required, as your complexity score.
End of explanation
# Lets split our dataset into a train set and a test set
molecule_ds = dc.data.NumpyDataset(np.array(molecules))
splitter = dc.splits.RandomSplitter()
train_mols, test_mols = splitter.train_test_split(molecule_ds)
Explanation: With our complexity ranker in place we can now construct our dataset. Let's start by loading the molecules in the Tox21 dataset into memory. We split the dataset at this stage to ensure that the training and test set have non-overlapping sets of molecules.
End of explanation
# In the paper they used 1024 bit fingerprints with chirality
n_features=1024
featurizer = dc.feat.CircularFingerprint(size=n_features, radius=2, chiral=True)
train_features = featurizer.featurize(train_mols.X)
train_smileslen = [len(Chem.MolToSmiles(x)) for x in train_mols.X]
train_dataset = create_dataset(train_features, train_smileslen)
Explanation: We'll featurize all our molecules with the ECFP fingerprint with chirality (matching the source paper), and will then construct our pairwise dataset using the code from above.
End of explanation
from deepchem.models import ScScoreModel
# Now to create the model and train it
model = ScScoreModel(n_features=n_features)
model.fit(train_dataset, nb_epoch=20)
Explanation: Now that we have our dataset created, let's train a ScScoreModel on this dataset.
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
mol_scores = model.predict_mols(test_mols.X)
smiles_lengths = [len(Chem.MolToSmiles(x)) for x in test_mols.X]
Explanation: Model Performance
Let's evaluate how well the model does on our holdout molecules. The ScScores should track the length of the SMILES strings of never-before-seen molecules.
End of explanation
plt.figure(figsize=(20,16))
plt.scatter(smiles_lengths, mol_scores)
plt.xlim(0,80)
plt.xlabel("SMILES length")
plt.ylabel("ScScore")
plt.show()
Explanation: Let's now plot the length of the SMILES string of each molecule against its ScScore using matplotlib.
End of explanation |
13,937 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
First a non-learning test
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
Step4: What are the metrics for "holding the position"? | Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
NUM_THREADS = 1
LOOKBACK = -1 # 252*4 + 28
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_train_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
n_levels=10)
agents = [Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.9999,
dyna_iterations=20,
name='Agent_{}'.format(i)) for i in index]
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
Explanation: In this notebook a simple Q learner will be trained and evaluated. The Q learner recommends when to buy or sell shares of one particular stock, and in which quantity (in fact it determines the desired fraction of shares in the total portfolio value). One initial attempt was made to train the Q-learner with multiple processes, but it was unsuccessful.
End of explanation
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 7
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
Explanation: Let's show the symbols data, to see how good the recommender has to be.
End of explanation
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: Let's run the trained agent, with the test set
First a non-learning test: this scenario would be worse than what is possible (in fact, the q-learner can learn from past samples in the test set without compromising the causality).
End of explanation
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
Explanation: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
End of explanation
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
import pickle
with open('../../data/dyna_10000_states_full_training.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
Explanation: What are the metrics for "holding the position"?
End of explanation |
13,938 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Corpus similarity
The goal of this notebook is to compare the two corpora -- the final and the homework -- and find how they differ
Step1: Combined Clustering
Step2: Prediction of group | Python Code:
# Necessary imports
import os
import time
from nbminer.notebook_miner import NotebookMiner
from nbminer.cells.cells import Cell
from nbminer.features.features import Features
from nbminer.stats.summary import Summary
from nbminer.stats.multiple_summary import MultipleSummary
from nbminer.encoders.ast_graph.ast_graph import *
# Loading in the two corpuses
notebooks = [os.path.join('../hw_corpus', fname) for fname in os.listdir('../hw_corpus')]
hw_notebook_objs = [NotebookMiner(file) for file in notebooks]
people = os.listdir('../testbed/Final')
notebooks = []
for person in people:
person = os.path.join('../testbed/Final', person)
if os.path.isdir(person):
direc = os.listdir(person)
notebooks.extend([os.path.join(person, filename) for filename in direc if filename.endswith('.ipynb')])
notebook_objs = [NotebookMiner(file) for file in notebooks]
from nbminer.stats.multiple_summary import MultipleSummary
hw_summary = MultipleSummary(hw_notebook_objs)
final_summary = MultipleSummary(notebook_objs)
print("Number of Final notebooks: ", len(final_summary.summary_vec))
print("Number of Homework notebooks: ", len(hw_summary.summary_vec))
print("Average number of cells, Final: ", final_summary.average_number_of_cells())
print("Average number of cells, Homework: ", hw_summary.average_number_of_cells())
print("Average lines of code, Final: ", final_summary.average_lines_of_code())
print("Average lines of code, Homework: ", hw_summary.average_lines_of_code())
Explanation: Corpus similarity
The goal of this notebook is to compare the two corpora -- the final and the homework -- and find how they differ
End of explanation
from nbminer.pipeline.pipeline import Pipeline
from nbminer.features.features import Features
from nbminer.preprocess.get_ast_features import GetASTFeatures
from nbminer.preprocess.get_imports import GetImports
from nbminer.preprocess.resample_by_node import ResampleByNode
from nbminer.encoders.ast_graph.ast_graph import ASTGraphReducer
from nbminer.preprocess.feature_encoding import FeatureEncoding
from nbminer.encoders.cluster.kmeans_encoder import KmeansEncoder
from nbminer.results.reconstruction_error.astor_error import AstorError
from nbminer.results.similarity.jaccard_similarity import NotebookJaccardSimilarity
a = Features(hw_notebook_objs, 'group_1')
a.add_notebooks(notebook_objs, 'group_2')
gastf = GetASTFeatures()
rbn = ResampleByNode()
gi = GetImports()
fe = FeatureEncoding()
ke = KmeansEncoder(n_clusters = 100)
#agr = ASTGraphReducer(a, threshold=20, split_call=False)
njs = NotebookJaccardSimilarity()
pipe = Pipeline([gastf, rbn, gi, fe, ke, njs])
a = pipe.transform(a)
import numpy as np
intra, inter = njs.group_average_jaccard_similarity('group_1')
print('Mean within group: ', np.mean(np.array(intra)))
print('STD within group: ', np.std(np.array(intra)))
print('Mean outside group: ', np.mean(np.array(inter)))
print('STD outside group: ', np.std(np.array(inter)))
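# For reference, the Jaccard similarity being averaged above is the usual set overlap.
# This is a sketch of the underlying quantity, not nbminer's exact implementation:
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0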
from nbminer.pipeline.pipeline import Pipeline
from nbminer.features.features import Features
from nbminer.preprocess.get_ast_features import GetASTFeatures
from nbminer.preprocess.get_imports import GetImports
from nbminer.preprocess.resample_by_node import ResampleByNode
from nbminer.encoders.ast_graph.ast_graph import ASTGraphReducer
from nbminer.preprocess.feature_encoding import FeatureEncoding
from nbminer.encoders.cluster.kmeans_encoder import KmeansEncoder
from nbminer.results.reconstruction_error.astor_error import AstorError
from nbminer.results.similarity.jaccard_similarity import NotebookJaccardSimilarity
a = Features(hw_notebook_objs, 'group_1')
a.add_notebooks(notebook_objs, 'group_2')
gastf = GetASTFeatures()
rbn = ResampleByNode()
gi = GetImports()
fe = FeatureEncoding()
ke = KmeansEncoder(n_clusters = 10)
#agr = ASTGraphReducer(a, threshold=20, split_call=False)
njs = NotebookJaccardSimilarity()
pipe = Pipeline([gastf, rbn, gi, fe, ke, njs])
a = pipe.transform(a)
Explanation: Combined Clustering
End of explanation
from nbminer.pipeline.pipeline import Pipeline
from nbminer.features.features import Features
from nbminer.preprocess.get_ast_features import GetASTFeatures
from nbminer.preprocess.get_imports import GetImports
from nbminer.preprocess.resample_by_node import ResampleByNode
from nbminer.encoders.ast_graph.ast_graph import ASTGraphReducer
from nbminer.preprocess.feature_encoding import FeatureEncoding
from nbminer.encoders.cluster.kmeans_encoder import KmeansEncoder
from nbminer.results.similarity.jaccard_similarity import NotebookJaccardSimilarity
from nbminer.results.prediction.corpus_identifier import CorpusIdentifier
a = Features(hw_notebook_objs, 'group_1')
a.add_notebooks(notebook_objs, 'group_2')
gastf = GetASTFeatures()
rbn = ResampleByNode()
gi = GetImports()
fe = FeatureEncoding()
ke = KmeansEncoder(n_clusters = 10)
#agr = ASTGraphReducer(a, threshold=20, split_call=False)
ci = CorpusIdentifier()
pipe = Pipeline([gastf, rbn, gi, fe, ke, ci])
a = pipe.transform(a)
%matplotlib inline
import matplotlib.pyplot as plt
fpr, tpr, m = ci.predict()
print(m)
plt.plot(fpr, tpr)
from nbminer.pipeline.pipeline import Pipeline
from nbminer.features.features import Features
from nbminer.preprocess.get_simple_features import GetSimpleFeatures
from nbminer.results.prediction.corpus_identifier import CorpusIdentifier
a = Features(hw_notebook_objs, 'group_1')
a.add_notebooks(notebook_objs, 'group_2')
gsf = GetSimpleFeatures()
ci = CorpusIdentifier(feature_name='string')
pipe = Pipeline([gsf, ci])
a = pipe.transform(a)
%matplotlib inline
import matplotlib.pyplot as plt
fpr, tpr, m = ci.predict()
print(m)
plt.plot(fpr, tpr)
Explanation: Prediction of group
End of explanation |
13,939 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Model27
Step1: KMeans
Step2: B. Modeling
Step3: Original
=== Bench with ElasticNetCV | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from utils import load_buzz, select, write_result
from features import featurize, get_pos
from containers import Questions, Users, Categories
from nlp import extract_entities
Explanation: Model27: We are at the last chance!
End of explanation
import pickle
questions = pickle.load(open('questions01.pkl', 'rb'))
users = pickle.load(open('users01.pkl', 'rb'))
categories = pickle.load(open('categories01.pkl', 'rb'))
load_buzz()['train'][1]
questions[1]
set(users[0].keys()) - set(['cat_uid'])
from sklearn.preprocessing import normalize
wanted_user_items = list(set(users[0].keys()) - set(['cat_uid']))
X_pos_uid = users.select(wanted_user_items)
unwanted_q_items = ['answer', 'category', 'group', 'ne_tags', 'question', 'pos_token', 'cat_qid']
wanted_q_items = list(set(questions[1].keys() - set(unwanted_q_items)))
X_pos_qid = questions.select(wanted_q_items)
X_pos_uid = normalize(X_pos_uid, norm='l1')
X_pos_qid = normalize(X_pos_qid, norm='l1')
print(X_pos_qid[0])
print(X_pos_uid[0])
from sklearn.cluster import KMeans
# Question category
n_components = 27
est = KMeans(n_clusters=n_components)
est.fit(X_pos_qid)
pred_cat_qid = est.predict(X_pos_qid)
plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("Question Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
# User category
n_components = 27
est = KMeans(n_clusters=n_components)
est.fit(X_pos_uid)
pred_cat_uid = est.predict(X_pos_uid)
plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("User Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
from collections import Counter
users.sub_append('cat_uid', {key: str(pred_cat_uid[i]) for i, key in enumerate(users.keys())})
questions.sub_append('cat_qid', {key: str(pred_cat_qid[i]) for i, key in enumerate(questions.keys())})
# to get most frequent cat for some test data which do not have ids in train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
print(most_pred_cat_uid)
print(most_pred_cat_qid)
Explanation: KMeans
End of explanation
def add_features(X):
for item in X:
# category
for key in categories[item['category']].keys():
item[key] = categories[item['category']][key]
uid = int(item['uid'])
qid = int(item['qid'])
# uid
if int(uid) in users:
item.update(users[uid])
else:
acc = users.select(['acc_ratio_uid'])
item['acc_ratio_uid'] = sum(acc) / float(len(acc))
item['cat_uid'] = most_pred_cat_uid
# qid
if int(qid) in questions:
item.update(questions[qid])
import pickle
questions = pickle.load(open('questions01.pkl', 'rb'))
users = pickle.load(open('users01.pkl', 'rb'))
categories = pickle.load(open('categories01.pkl', 'rb'))
from utils import load_buzz, select, write_result
from features import featurize, get_pos
from containers import Questions, Users, Categories
from nlp import extract_entities
import math
from collections import Counter
from numpy import abs, sqrt
from sklearn.linear_model import ElasticNetCV
from sklearn.cross_validation import ShuffleSplit, cross_val_score
from sklearn.feature_extraction import DictVectorizer
from sklearn.preprocessing import normalize
from sklearn.svm import LinearSVC
from sklearn.cluster import KMeans
wanted_user_items = list(set(users[0].keys()) - set(['cat_uid']))
X_pos_uid = users.select(wanted_user_items)
unwanted_q_items = ['answer', 'category', 'group', 'ne_tags', 'question', 'pos_token', 'cat_qid']
wanted_q_items = list(set(questions[1].keys() - set(unwanted_q_items)))
X_pos_qid = questions.select(wanted_q_items)
X_pos_uid = normalize(X_pos_uid, norm='l1')
X_pos_qid = normalize(X_pos_qid, norm='l1')
tu = ('l1', 'n_uid_clust', 'n_qid_clust', 'rmse')
print ('=== Bench with ElasticNetCV: {0}, {1}, {2}, {3}'.format(*tu))
for ii in [27]:
n_uid_clu = ii
n_qid_clu = ii
# clustering for uid
uid_est = KMeans(n_clusters=n_uid_clu)
uid_est.fit(X_pos_uid)
pred_cat_uid = uid_est.predict(X_pos_uid)
# clustering for qid
qid_est = KMeans(n_clusters=n_qid_clu)
qid_est.fit(X_pos_qid)
pred_cat_qid = qid_est.predict(X_pos_qid)
users.sub_append('cat_uid', {key: str(pred_cat_uid[i]) for i, key in enumerate(users.keys())})
questions.sub_append('cat_qid', {key: str(pred_cat_qid[i]) for i, key in enumerate(questions.keys())})
# to get most frequent cat for some test data which do not have ids in train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
X_train, y_train = featurize(load_buzz(), group='train',
sign_val=None, extra=['sign_val', 'avg_pos'])
add_features(X_train)
unwanted_features = ['ne_tags', 'pos_token', 'question', 'sign_val', 'group', 'q_acc_ratio_cat', 'q_ave_pos_cat']
wanted_features = list(set(X_train[1].keys()) - set(unwanted_features))
X_train = select(X_train, wanted_features)
vec = DictVectorizer()
X_train_dict_vec = vec.fit_transform(X_train)
X_new = X_train_dict_vec
#X_new = LinearSVC(C=0.01, penalty="l1", dual=False, random_state=50).fit_transform(X_train_dict_vec, y_train)
n_samples = X_new.shape[0]
cv = ShuffleSplit(n_samples, n_iter=5, test_size=0.2, random_state=50)
print("L1-based feature selection:", X_train_dict_vec.shape, X_new.shape)
for l1 in [0.7]:
scores = cross_val_score(ElasticNetCV(n_jobs=3, normalize=True, l1_ratio = l1),
X_new, y_train,
cv=cv, scoring='mean_squared_error')
rmse = sqrt(abs(scores)).mean()
print ('{0}, {1}, {2}, {3}'.format(l1, n_uid_clu, n_qid_clu, rmse))
X_train[125]
Explanation: B. Modeling
End of explanation
X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos'])
add_features(X_test)
X_test = select(X_test, wanted_features)
unwanted_features = ['ne_tags', 'pos_token', 'question', 'sign_val', 'group', 'q_acc_ratio_cat', 'q_ave_pos_cat']
wanted_features = list(set(X_train[1].keys()) - set(unwanted_features))
X_train = select(X_train, wanted_features)
X_train[0]
users[131]
categories['astronomy']
X_test[1]
vec = DictVectorizer()
vec.fit(X_train + X_test)
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
for l1_ratio in [0.7]:
print('=== l1_ratio:', l1_ratio)
regressor = ElasticNetCV(n_jobs=3, normalize=True, l1_ratio=l1_ratio, random_state=50)
regressor.fit(X_train, y_train)
print(regressor.coef_)
print(regressor.alpha_)
predictions = regressor.predict(X_test)
write_result(load_buzz()['test'], predictions, file_name=str(l1_ratio)+'guess_adj.csv', adj=True)
Explanation: Original
=== Bench with ElasticNetCV: l1, n_uid_clust, n_qid_clust, rmse
L1-based feature selection: (28494, 1112) (28494, 1112)
0.7, 27, 27, 74.88480204218828
Without users features for regression
=== Bench with ElasticNetCV: l1, n_uid_clust, n_qid_clust, rmse
L1-based feature selection: (28494, 1112) (28494, 1112)
0.7, 27, 27, 74.94733641570902
Training and testing model
End of explanation |
13,940 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
# Split all the text into words - All the possible words collections/choices
#words = text.split() # This is not the unique words
#words = {word: None for word in text.split()}
#words = token_lookup()
from collections import Counter
# Count the freq of words in the text/collection of words
word_counts = Counter(text)
# Having counted the frequency of the words in collection, sort them from most to least/top to bottom/descendng
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
# first enumerating for vocab to int
vocab_to_int = {words: ii for ii, words in enumerate(sorted_vocab)}
# into_to_vocab after enumerating through the sorted vocab
int_to_vocab = {ii: words for words, ii in vocab_to_int.items()}
# return the output results: a tuple of dicts(vocab_to_int, int_to_vocab)
# return dicts(vocab_to_int, int_to_vocab)
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
# Replace punctuation with tokens so we can use them in our model
token_dict = {'.': '||Period||',
',': '||Comma||',
'"': '||Quotation_Mark||',
';': '||Semicolon||',
'!': '||Exclamation_Mark||',
'?': '||Question_Mark||',
'(': '||Left_Parentheses||',
')': '||Right_Parentheses||',
'--': '||Dash||',
'\n': '||Return||'}
#token_dict.items() # to show it
return token_dict
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
input = tf.placeholder(dtype=tf.int32, shape=[None, None], name='input')
targets = tf.placeholder(dtype=tf.int32, shape=[None, None], name='targets')
lr = tf.placeholder(dtype=tf.float32, shape=None, name='learning_rate')
return input, targets, lr
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple: (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
# The number of LSTM cells/ Memory cells in one layer for RNN
rnn = tf.contrib.rnn.BasicLSTMCell(rnn_size) # rnn_size==LSTM_size??
# # Adding Dropout NOT needed/ Not Asked
# keep_prob = 1.0 # Drop out probability
# drop = tf.contrib.rnn.DropoutWrapper(rnn, keep_prob) #output_keep_prop=
# Stacking up multiple LSTM layers for DL
rnn_layers = 1 # layers
cell = tf.contrib.rnn.MultiRNNCell([rnn] * rnn_layers)
# Initializing the cell state using zero_state()
initial_state = cell.zero_state(batch_size, tf.float32)
initial_state = tf.identity(input=initial_state, name='initial_state')
return cell, initial_state
# Aras: Already implemented in sentiment network
# lstm_size = 256
# lstm_layers = 1
# batch_size = 500
# learning_rate = 0.001
# with graph.as_default():
# # Your basic LSTM cell
# lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# # Add dropout to the cell
# drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# # Stack up multiple LSTM layers, for deep learning
# cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# # Getting an initial state of all zerosoutput_keep_prop
# initial_state = cell.zero_state(batch_size, tf.float32)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize the cell state using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
# Size of embedding vectors (number of units in the emdding layer)
# embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embedding = tf.Variable(tf.random_uniform(shape=[vocab_size, embed_dim], minval=-1, maxval=1, dtype=tf.float32,
seed=None, name=None))
# tf.random_normal(mean=1.0, size/shape=[], stddev=0.1)
# tf.random_normal(shape=[vocab_size/n_words, embed_size/embed_dim], mean=0.0, stddev=1.0,
#dtype=tf.float32, seed=None, name=None)
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
# # Embedding implementation from Sentiment_RNN_solution.ipynb
# # Size of the embedding vectors (number of units in the embedding layer)
# embed_size = 300
# with graph.as_default():
# embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
# embed = tf.nn.embedding_lookup(embedding, inputs_)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
# Create the RNN using the cells and the embedded input vectors
# outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=cell.state)
# initial_state=["initial_state"]
outputs, final_state = tf.nn.dynamic_rnn(cell=cell, inputs=inputs,
sequence_length=None,
initial_state=None,
dtype=tf.float32, parallel_iterations=None,
swap_memory=False, time_major=False, scope=None)
# Naming the final_state using tf.identity(input, name)
final_state = tf.identity(input=final_state, name='final_state')
# Returning the outputs and the final_state
return outputs, final_state
# Aras: Implementation from Sentiment_RNN_Solution.ipynb
# with graph.as_default():
# outputs, final_state = tf.nn.dynamic_rnn(cell, embed,
# initial_state=initial_state)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple: (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
# embedding layer: def get_embed(input_data, vocab_size, embed_dim):
embed = get_embed(input_data=input_data, vocab_size=vocab_size, embed_dim=rnn_size)
# build rnn: def build_rnn(cell, inputs):
outputs, final_state = build_rnn(cell=cell, inputs=embed)
# Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
logits = tf.contrib.layers.fully_connected(inputs=outputs, num_outputs=vocab_size, activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
# calculating the batch length, i.e. number words in a batch
batch_length = batch_size*seq_length # Remeber: batch_length != batch_size
    # Number of batches in the given text of word IDs
num_batches = len(int_text)// batch_length
if (len(int_text)//batch_length) == (len(int_text)/batch_length):
num_batches -= 1
# preparing the numpy array first which is going to be returned/outputed
batches = np.zeros([num_batches, 2, batch_size, seq_length])
    # number of words in the text (our dataset)
    # keep only the part of the text that fits into complete batches (plus one word for the targets)
int_text = int_text[:(num_batches*batch_length)+1] # incremented one for the IO sequences/seq2seq learning
# Now based on the txt_size, batch_size, and seq_size/length, we should start getting the batches stochastically
#for batch_index/b_idx in range(start=0, stop=len(int_text), step=batch_size):
for batch_idx in range(0, num_batches, 1):
batch_slice = int_text[batch_idx*batch_length:(batch_idx+1)*batch_length+1]
# Slicing up the sequences inside a batch
#for seq_index/s_idx in range(start=0, stop=len(batch[??]), step=seq_length): # remember each sequence has two seq: input & output
for seq_idx in range(0, batch_size, 1):
batches[batch_idx, 0, seq_idx] = batch_slice[seq_idx*seq_length:(seq_idx+1)*seq_length]
batches[batch_idx, 1, seq_idx] = batch_slice[seq_idx*seq_length+1:((seq_idx+1)*seq_length)+1]
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
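As a quick illustrative sanity check of that shape (reusing the example input above):
```
example = get_batches(list(range(1, 16)), 2, 3)
print(example.shape)   # expected: (2, 2, 2, 3), i.e. (num batches, 2, batch size, seq length)
```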
End of explanation
# Number of Epochs
num_epochs = 100 # depends on how fast the system is and how long we can wait to see the results
# Batch Size
batch_size = 64 # depends on the memory, num seq per batch
# RNN Size
rnn_size = 128 # number of units in each LSTM cell
# Sequence Length
seq_length = 64 # number of words per sequence (how far the RNN is unrolled)
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 16 # 2^4 show every 16 batches learning/training
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
input = loaded_graph.get_tensor_by_name(name='input:0')
initial_state = loaded_graph.get_tensor_by_name(name='initial_state:0')
final_state = loaded_graph.get_tensor_by_name(name='final_state:0')
probs = loaded_graph.get_tensor_by_name(name='probs:0')
return input, initial_state, final_state, probs
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
    # extracting the words out of int_to_vocab.items()
words = np.array([words for ids, words in int_to_vocab.items()])
    # The generated random sample: numpy.random.choice(a, size=None, replace=True, p=None)
random_word = np.random.choice(a = words, size=None, replace=True, p=probabilities)
return random_word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
13,941 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Canonical Correlation Analysis (CCA)
Example is taken from Section 12.5.3, Machine Learning
Step1: Set up shapes, variables and constants
We have two observed variables x and y of shapes (D_x,1) and (D_y,1) and the latent variables z_s, z_x, z_y of shapes (L_o,1), (L_x,1) and (L_y,1).
Step2: Define the model
Step3: Obtain joint distribution p(x,y) | Python Code:
from symgp import *
from sympy import *
from IPython.display import display, Math, Latex
Explanation: Canonical Correlation Analysis (CCA)
Example is taken from Section 12.5.3, Machine Learning: A Probabilistic Perspective by Kevin Murphy.
End of explanation
# Shapes
D_x, D_y, L_o, L_x, L_y = symbols('D_x, D_y, L_o L_x L_y')
# Variables
x, y, z_s, z_x, z_y = utils.variables('x y z_{s} z_{x} z_{y}', [D_x, D_y, L_o, L_x, L_y])
# Constants
B_x, W_x, mu_x, B_y, W_y, mu_y = utils.constants('B_{x} W_{x} mu_{x} B_{y} W_{y} mu_{y}',
[(D_x,L_x), (D_x,L_o), D_x, (D_y,L_y), (D_y,L_o), D_y])
sig = symbols('\u03c3') # Noise standard deviation
Explanation: Set up shapes, variables and constants
We have two observed variables x and y of shapes (D_x,1) and (D_y,1) and the latent variables z_s, z_x, z_y of shapes (L_o,1), (L_x,1) and (L_y,1).
End of explanation
# p(z_s), p(z_x), p(z_y)
p_zs = MVG([z_s],mean=ZeroMatrix(L_o,1),cov=Identity(L_o))
p_zx = MVG([z_x],mean=ZeroMatrix(L_x,1),cov=Identity(L_x))
p_zy = MVG([z_y],mean=ZeroMatrix(L_y,1),cov=Identity(L_y))
display(Latex(utils.matLatex(p_zs)))
display(Latex(utils.matLatex(p_zx)))
display(Latex(utils.matLatex(p_zy)))
# p(z)
p_z = p_zs*p_zx*p_zy
display(Latex(utils.matLatex(p_z)))
# p(x|z)
p_x_g_z = MVG([x],mean=B_x*z_x + W_x*z_s + mu_x,cov=sig**2*Identity(D_x),cond_vars=[z_x,z_s])
display(Latex(utils.matLatex(p_x_g_z)))
# p(y|z)
p_y_g_z = MVG([y],mean=B_y*z_y + W_y*z_s + mu_y,cov=sig**2*Identity(D_y),cond_vars=[z_y,z_s])
display(Latex(utils.matLatex(p_y_g_z)))
Explanation: Define the model
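Written out, the model defined by the code above is (the same distributions, just in equation form):
$$z_s \sim \mathcal{N}(0, I_{L_o}),\qquad z_x \sim \mathcal{N}(0, I_{L_x}),\qquad z_y \sim \mathcal{N}(0, I_{L_y})$$
$$x \mid z_s, z_x \sim \mathcal{N}(B_x z_x + W_x z_s + \mu_x,\ \sigma^2 I_{D_x}),\qquad y \mid z_s, z_y \sim \mathcal{N}(B_y z_y + W_y z_s + \mu_y,\ \sigma^2 I_{D_y})$$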
End of explanation
# p(v|z) (p(x,y|z_s,z_x,z_y)) We denote v = (x;y) and z = (z_s;z_x;z_y)
p_v_g_z = p_x_g_z*p_y_g_z
display(Latex(utils.matLatex(p_v_g_z)))
# p(v,z) (p(x,y,z_s,z_x,z_y))
p_v_z = p_v_g_z*p_z
display(Latex(utils.matLatex(p_v_z)))
# p(v) (p(x,y))
p_v = p_v_z.marginalise([z_s,z_x,z_y])
display(Latex(utils.matLatex(p_v)))
Explanation: Obtain joint distribution p(x,y)
End of explanation |
13,942 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table class="ee-notebook-buttons" align="left"><td>
<a target="_blank" href="http
Step1: Authenticate to Earth Engine
This should be the same account you used to login to Cloud previously.
Step2: Test the TensorFlow installation
Step3: Test the Folium installation
Step4: Define variables
Step5: Generate training data
This is a multi-step process. First, export the image that contains the prediction bands. When that export completes (several hours in this example), it can be reloaded and sampled to generate training and testing datasets. The second step is to export the traning and testing tables to TFRecord files in Cloud Storage (also several hours).
Step6: First, export the image stack that contains the predictors.
Step7: Wait until the image export is completed, then sample the exported image.
Step8: Export the training and testing tables. This also takes a few hours.
Step11: Parse the exported datasets
Now we need to make a parsing function for the data in the TFRecord files. The data comes in flattened 2D arrays per record and we want to use the first part of the array for input to the model and the last element of the array as the class label. The parsing function reads data from a serialized Example proto (i.e. example.proto) into a dictionary in which the keys are the feature names and the values are the tensors storing the value of the features for that example. (Learn more about parsing Example protocol buffer messages).
Step12: Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands for keys and the numeric value of the bands for values. The second element of the tuple is the class label, which in this case is an indicator variable that is one if deforestation happened, zero otherwise.
Create the Keras model
This model is intended to represent traditional logistic regression, the parameters of which are estimated through maximum likelihood. Specifically, the probability of an event is represented as the sigmoid of a linear function of the predictors. Training or fitting the model consists of finding the parameters of the linear function that maximize the likelihood function. This is implemented in Keras by defining a model with a single trainable layer, a sigmoid activation on the output, and a crossentropy loss function. Note that the only trainable layer is convolutional, with a 1x1 kernel, so that Earth Engine can apply the model in each pixel. To fit the model, a Stochastic Gradient Descent (SGD) optimizer is used. This differs somewhat from traditional fitting of logistic regression models in that stocahsticity is introduced by using mini-batches to estimate the gradient.
Step13: Save the trained model
Save the trained model to tf.saved_model format in your cloud storage bucket.
Step14: EEification
The first part of the code is just to get (and SET) input and output names. Keep the input name of 'array', which is how you'll pass data into the model (as an array image).
Step15: Run the EEifier
Use the command line to set your Cloud project and then run the eeifier.
Step16: Deploy and host the EEified model on AI Platform
If you change anything about the model, you'll need to re-EEify it and create a new version!
Step17: Connect to the hosted model from Earth Engine
Now that the model is hosted on AI Platform, point Earth Engine to it and make predictions. These predictions can be thresholded for a rudimentary deforestation detector. Visualize the after imagery, the reference data and the predictions. | Python Code:
from google.colab import auth
auth.authenticate_user()
Explanation: <table class="ee-notebook-buttons" align="left"><td>
<a target="_blank" href="http://colab.research.google.com/github/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_logistic_regression.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/google/earthengine-api/blob/master/python/examples/ipynb/Earth_Engine_TensorFlow_logistic_regression.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td></table>
Introduction
Logistic regression
Logistic regression is a classical machine learning method to estimate the probability of an event occurring (sometimes called the "risk"). Specifically, the probability is modeled as a sigmoid function of a linear combination of inputs. This can be implemented as a very simple neural network with a single trainable layer.
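In symbols (just restating the previous sentence, with $x$ the inputs and $w$, $b$ the trainable parameters): $p = \sigma(w^\top x + b) = 1 / (1 + e^{-(w^\top x + b)})$.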
Here, the event being modeled is deforestation in 2016. If a pixel is labeled as deforestation in 2016 according to the Hansen Global Forest Change dataset, the event occurred with probability 1. The probability is zero otherwise. The input variables (i.e. the predictors of this event) are the pixel values of two Landsat 8 surface reflectance median composites, from 2015 and 2017, assumed to represent before and after conditions.
The model will be hosted on Google AI Platform and used in Earth Engine for interactive prediction from an ee.Model.fromAIPlatformPredictor. See this example notebook for background on hosted models.
Running this demo may incur charges to your Google Cloud Account!
Setup software libraries
Import software libraries and/or authenticate as necessary.
Authenticate to Colab and Cloud
This should be the same account you use to login to Earth Engine.
End of explanation
import ee
ee.Authenticate()
ee.Initialize()
Explanation: Authenticate to Earth Engine
This should be the same account you used to login to Cloud previously.
End of explanation
import tensorflow as tf
print(tf.__version__)
Explanation: Test the TensorFlow installation
End of explanation
import folium
print(folium.__version__)
Explanation: Test the Folium installation
End of explanation
# REPLACE WITH YOUR CLOUD PROJECT!
PROJECT = 'your-project'
# Output bucket for trained models. You must be able to write into this bucket.
OUTPUT_BUCKET = 'your-bucket'
# Cloud Storage bucket with training and testing datasets.
DATA_BUCKET = 'ee-docs-demos'
# This is a good region for hosting AI models.
REGION = 'us-central1'
# Training and testing dataset file names in the Cloud Storage bucket.
TRAIN_FILE_PREFIX = 'logistic_demo_training'
TEST_FILE_PREFIX = 'logistic_demo_testing'
file_extension = '.tfrecord.gz'
TRAIN_FILE_PATH = 'gs://' + DATA_BUCKET + '/' + TRAIN_FILE_PREFIX + file_extension
TEST_FILE_PATH = 'gs://' + DATA_BUCKET + '/' + TEST_FILE_PREFIX + file_extension
# The labels, consecutive integer indices starting from zero, are stored in
# this property, set on each point.
LABEL = 'loss16'
# Number of label values, i.e. number of classes in the classification.
N_CLASSES = 3
# Use Landsat 8 surface reflectance data for predictors.
L8SR = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
# Use these bands for prediction.
OPTICAL_BANDS = ['B1', 'B2', 'B3', 'B4', 'B5', 'B6', 'B7']
THERMAL_BANDS = ['B10', 'B11']
BEFORE_BANDS = OPTICAL_BANDS + THERMAL_BANDS
AFTER_BANDS = [str(s) + '_1' for s in BEFORE_BANDS]
BANDS = BEFORE_BANDS + AFTER_BANDS
# Forest loss in 2016 is what we want to predict.
IMAGE = ee.Image('UMD/hansen/global_forest_change_2018_v1_6')
LOSS16 = IMAGE.select('lossyear').eq(16).rename(LABEL)
# Study area. Mostly Brazil.
GEOMETRY = ee.Geometry.Polygon(
[[[-71.96531166607349, 0.24565390557980268],
[-71.96531166607349, -17.07400853625319],
[-40.32468666607349, -17.07400853625319],
[-40.32468666607349, 0.24565390557980268]]], None, False)
# These names are used to specify properties in the export of training/testing
# data and to define the mapping between names and data when reading from
# the TFRecord file into a tf.data.Dataset.
FEATURE_NAMES = list(BANDS)
FEATURE_NAMES.append(LABEL)
# List of fixed-length features, all of which are float32.
columns = [
tf.io.FixedLenFeature(shape=[1], dtype=tf.float32) for k in FEATURE_NAMES
]
# Dictionary with feature names as keys, fixed-length features as values.
FEATURES_DICT = dict(zip(FEATURE_NAMES, columns))
# Where to save the trained model.
MODEL_DIR = 'gs://' + OUTPUT_BUCKET + '/logistic_demo_model'
# Where to save the EEified model.
EEIFIED_DIR = 'gs://' + OUTPUT_BUCKET + '/logistic_demo_eeified'
# Name of the AI Platform model to be hosted.
MODEL_NAME = 'logistic_demo_model'
# Version of the AI Platform model to be hosted.
VERSION_NAME = 'v0'
Explanation: Define variables
End of explanation
# Cloud masking function.
def maskL8sr(image):
cloudShadowBitMask = ee.Number(2).pow(3).int()
cloudsBitMask = ee.Number(2).pow(5).int()
qa = image.select('pixel_qa')
mask1 = qa.bitwiseAnd(cloudShadowBitMask).eq(0).And(
qa.bitwiseAnd(cloudsBitMask).eq(0))
mask2 = image.mask().reduce('min')
mask3 = image.select(OPTICAL_BANDS).gt(0).And(
image.select(OPTICAL_BANDS).lt(10000)).reduce('min')
mask = mask1.And(mask2).And(mask3)
return image.select(OPTICAL_BANDS).divide(10000).addBands(
image.select(THERMAL_BANDS).divide(10).clamp(273.15, 373.15)
.subtract(273.15).divide(100)).updateMask(mask)
# Make "before" and "after" composites.
composite1 = L8SR.filterDate(
'2015-01-01', '2016-01-01').map(maskL8sr).median()
composite2 = L8SR.filterDate(
'2016-12-31', '2017-12-31').map(maskL8sr).median()
stack = composite1.addBands(composite2).float()
export_image = 'projects/google/logistic_demo_image'
image_task = ee.batch.Export.image.toAsset(
image = stack,
description = 'logistic_demo_image',
assetId = export_image,
region = GEOMETRY,
scale = 30,
maxPixels = 1e10
)
Explanation: Generate training data
This is a multi-step process. First, export the image that contains the prediction bands. When that export completes (several hours in this example), it can be reloaded and sampled to generate training and testing datasets. The second step is to export the training and testing tables to TFRecord files in Cloud Storage (also several hours).
End of explanation
image_task.start()
Explanation: First, export the image stack that contains the predictors.
End of explanation
sample = ee.Image(export_image).addBands(LOSS16).stratifiedSample(
numPoints = 10000,
classBand = LABEL,
region = GEOMETRY,
scale = 30,
tileScale = 8
)
randomized = sample.randomColumn()
training = randomized.filter(ee.Filter.lt('random', 0.7))
testing = randomized.filter(ee.Filter.gte('random', 0.7))
train_task = ee.batch.Export.table.toCloudStorage(
collection = training,
description = TRAIN_FILE_PREFIX,
bucket = OUTPUT_BUCKET,
fileFormat = 'TFRecord'
)
test_task = ee.batch.Export.table.toCloudStorage(
collection = testing,
description = TEST_FILE_PREFIX,
bucket = OUTPUT_BUCKET,
fileFormat = 'TFRecord'
)
Explanation: Wait until the image export is completed, then sample the exported image.
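If you want to poll for completion from the notebook, something like the following works (an optional sketch using the task's status methods):
```
import time
while image_task.active():
  print('Image export still running... ({})'.format(image_task.status()['state']))
  time.sleep(30)
print('Image export finished.')
```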
End of explanation
train_task.start()
test_task.start()
Explanation: Export the training and testing tables. This also takes a few hours.
End of explanation
def parse_tfrecord(example_proto):
The parsing function.
Read a serialized example into the structure defined by FEATURES_DICT.
Args:
example_proto: a serialized Example.
Returns:
A tuple of the predictors dictionary and the label, cast to an `int32`.
parsed_features = tf.io.parse_single_example(example_proto, FEATURES_DICT)
labels = parsed_features.pop(LABEL)
return parsed_features, tf.cast(labels, tf.int32)
def to_tuple(inputs, label):
Convert inputs to a tuple.
Note that the inputs must be a tuple of tensors in the right shape.
Args:
dict: a dictionary of tensors keyed by input name.
label: a tensor storing the response variable.
Returns:
A tuple of tensors: (predictors, label).
# Values in the tensor are ordered by the list of predictors.
predictors = [inputs.get(k) for k in BANDS]
return (tf.expand_dims(tf.transpose(predictors), 1),
tf.expand_dims(tf.expand_dims(label, 1), 1))
# Load datasets from the files.
train_dataset = tf.data.TFRecordDataset(TRAIN_FILE_PATH, compression_type='GZIP')
test_dataset = tf.data.TFRecordDataset(TEST_FILE_PATH, compression_type='GZIP')
# Compute the size of the shuffle buffer. We can get away with this
# because it's a small dataset, but watch out with larger datasets.
train_size = 0
for _ in iter(train_dataset):
train_size+=1
batch_size = 8
# Map the functions over the datasets to parse and convert to tuples.
train_dataset = train_dataset.map(parse_tfrecord, num_parallel_calls=4)
train_dataset = train_dataset.map(to_tuple, num_parallel_calls=4)
train_dataset = train_dataset.shuffle(train_size).batch(batch_size)
test_dataset = test_dataset.map(parse_tfrecord, num_parallel_calls=4)
test_dataset = test_dataset.map(to_tuple, num_parallel_calls=4)
test_dataset = test_dataset.batch(batch_size)
# Print the first parsed record to check.
from pprint import pprint
pprint(iter(train_dataset).next())
Explanation: Parse the exported datasets
Now we need to make a parsing function for the data in the TFRecord files. The data comes in flattened 2D arrays per record and we want to use the first part of the array for input to the model and the last element of the array as the class label. The parsing function reads data from a serialized Example proto (i.e. example.proto) into a dictionary in which the keys are the feature names and the values are the tensors storing the value of the features for that example. (Learn more about parsing Example protocol buffer messages).
End of explanation
from tensorflow import keras
# Define the layers in the model.
model = tf.keras.models.Sequential([
tf.keras.layers.Input((1, 1, len(BANDS))),
tf.keras.layers.Conv2D(1, (1,1), activation='sigmoid')
])
# Compile the model with the specified loss function.
model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.9),
loss='binary_crossentropy',
metrics=['accuracy'])
# Fit the model to the training data.
model.fit(x=train_dataset,
epochs=20,
validation_data=test_dataset)
Explanation: Note that each record of the parsed dataset contains a tuple. The first element of the tuple is a dictionary with bands for keys and the numeric value of the bands for values. The second element of the tuple is the class label, which in this case is an indicator variable that is one if deforestation happened, zero otherwise.
Create the Keras model
This model is intended to represent traditional logistic regression, the parameters of which are estimated through maximum likelihood. Specifically, the probability of an event is represented as the sigmoid of a linear function of the predictors. Training or fitting the model consists of finding the parameters of the linear function that maximize the likelihood function. This is implemented in Keras by defining a model with a single trainable layer, a sigmoid activation on the output, and a crossentropy loss function. Note that the only trainable layer is convolutional, with a 1x1 kernel, so that Earth Engine can apply the model in each pixel. To fit the model, a Stochastic Gradient Descent (SGD) optimizer is used. This differs somewhat from traditional fitting of logistic regression models in that stochasticity is introduced by using mini-batches to estimate the gradient.
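As a quick optional sanity check of that correspondence, the single 1x1 convolution should report len(BANDS) weights plus one bias, exactly the parameters of a logistic regression on these bands:
```
model.summary()
```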
End of explanation
model.save(MODEL_DIR, save_format='tf')
Explanation: Save the trained model
Save the trained model to tf.saved_model format in your cloud storage bucket.
End of explanation
from tensorflow.python.tools import saved_model_utils
meta_graph_def = saved_model_utils.get_meta_graph_def(MODEL_DIR, 'serve')
inputs = meta_graph_def.signature_def['serving_default'].inputs
outputs = meta_graph_def.signature_def['serving_default'].outputs
# Just get the first thing(s) from the serving signature def. i.e. this
# model only has a single input and a single output.
input_name = None
for k,v in inputs.items():
input_name = v.name
break
output_name = None
for k,v in outputs.items():
output_name = v.name
break
# Make a dictionary that maps Earth Engine outputs and inputs to
# AI Platform inputs and outputs, respectively.
import json
input_dict = "'" + json.dumps({input_name: "array"}) + "'"
output_dict = "'" + json.dumps({output_name: "output"}) + "'"
print(input_dict)
print(output_dict)
Explanation: EEification
The first part of the code is just to get (and SET) input and output names. Keep the input name of 'array', which is how you'll pass data into the model (as an array image).
End of explanation
!earthengine set_project {PROJECT}
!earthengine model prepare --source_dir {MODEL_DIR} --dest_dir {EEIFIED_DIR} --input {input_dict} --output {output_dict}
Explanation: Run the EEifier
Use the command line to set your Cloud project and then run the eeifier.
End of explanation
!gcloud ai-platform models create {MODEL_NAME} \
--project {PROJECT} \
--region {REGION}
!gcloud ai-platform versions create {VERSION_NAME} \
--project {PROJECT} \
--region {REGION} \
--model {MODEL_NAME} \
--origin {EEIFIED_DIR} \
--framework "TENSORFLOW" \
--runtime-version=2.3 \
--python-version=3.7
Explanation: Deploy and host the EEified model on AI Platform
If you change anything about the model, you'll need to re-EEify it and create a new version!
End of explanation
# Turn into an array image for input to the model.
array_image = stack.select(BANDS).float().toArray()
# Point to the model hosted on AI Platform. If you specified a region other
# than the default (us-central1) at model creation, specify it here.
model = ee.Model.fromAiPlatformPredictor(
projectName=PROJECT,
modelName=MODEL_NAME,
version=VERSION_NAME,
# Can be anything, but don't make it too big.
inputTileSize=[8, 8],
# Keep this the same as your training data.
proj=ee.Projection('EPSG:4326').atScale(30),
fixInputProj=True,
# Note the names here need to match what you specified in the
# output dictionary you passed to the EEifier.
outputBands={'output': {
'type': ee.PixelType.float(),
'dimensions': 1
}
},
)
# Output probability.
predictions = model.predictImage(array_image).arrayGet([0])
# Back-of-the-envelope decision rule.
predicted = predictions.gt(0.7).selfMask()
# Training data for comparison.
reference = LOSS16.selfMask()
# Get map IDs for display in folium.
probability_vis = {'min': 0, 'max': 1}
probability_mapid = predictions.getMapId(probability_vis)
predicted_vis = {'palette': 'red'}
predicted_mapid = predicted.getMapId(predicted_vis)
reference_vis = {'palette': 'orange'}
reference_mapid = reference.getMapId(reference_vis)
image_vis = {'bands': ['B4', 'B3', 'B2'], 'min': 0, 'max': 0.3}
image_mapid = composite2.getMapId(image_vis)
# Visualize the input imagery and the predictions.
map = folium.Map(location=[-9.1, -62.3], zoom_start=11)
folium.TileLayer(
tiles=image_mapid['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='image',
).add_to(map)
folium.TileLayer(
tiles=probability_mapid['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='probability',
).add_to(map)
folium.TileLayer(
tiles=predicted_mapid['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='predicted',
).add_to(map)
folium.TileLayer(
tiles=reference_mapid['tile_fetcher'].url_format,
attr='Map Data © <a href="https://earthengine.google.com/">Google Earth Engine</a>',
overlay=True,
name='reference',
).add_to(map)
map.add_child(folium.LayerControl())
map
Explanation: Connect to the hosted model from Earth Engine
Now that the model is hosted on AI Platform, point Earth Engine to it and make predictions. These predictions can be thresholded for a rudimentary deforestation detector. Visualize the after imagery, the reference data and the predictions.
End of explanation |
13,943 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Average Reward over time
Step1: Visualizing what the agent is seeing
Starting with the ray pointing all the way right, we have one row per ray in clockwise order.
The numbers for each ray are the following | Python Code:
g.plot_reward(smoothing=100)
Explanation: Average Reward over time
End of explanation
g.__class__ = KarpathyGame
np.set_printoptions(formatter={'float': (lambda x: '%.2f' % (x,))})
x = g.observe()
new_shape = (x[:-4].shape[0]//g.eye_observation_size, g.eye_observation_size)
print(x[:-4].reshape(new_shape))
print(x[-4:])
g.to_html()
Explanation: Visualizing what the agent is seeing
Starting with the ray pointing all the way right, we have one row per ray in clockwise order.
The numbers for each ray are the following:
- the first three numbers are normalized distances to the closest visible (intersecting with the ray) object. If no object is visible then all of them are $1$. If there are many objects in sight, then only the closest one is visible. The numbers represent distance to friend, enemy and wall, in that order.
- the last two numbers represent the speed of the moving object (x and y components). The speed of a wall is ... zero.
Finally the last two numbers in the representation correspond to speed of the hero.
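Following that layout, the printed observation can be unpacked like this (an illustrative sketch; it assumes each ray contributes exactly the five numbers described above):
```
rays = x[:-4].reshape(new_shape)
first_ray = rays[0]                    # the ray pointing all the way right
dist_friend, dist_enemy, dist_wall = first_ray[:3]
closest_object_speed = first_ray[3:5]  # x, y speed of the closest visible object
hero_speed = x[-2:]                    # the last two numbers: the hero's own speed
```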
End of explanation |
13,944 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Introduction to Testing
Testing is an easy thing to understand but there is also an art to it as well; writing good tests often requires you to try to figure out what input(s) are most likely to break your program.
In addition to this, tests can serve different purposes as well
Step2: Okay so, this is where we need to put our ‘thinking hat’ on for a moment. The documentation for this function specifically states A and B are supposed to be numbers, so instead of wasting time breaking the code with obviously bad inputs (e.g strings) lets try to break with valid inputs. In other words
Step3: Now, we know that X/0 is a ZeroDivisionError, the question is what do we want the result to be? Do we want Python to raise the error? or would we prefer Python to do something else such as return a number or perhaps a string.
Remember that errors are not bad, if Python to throws an error when it gets zero as input that’s totally fine, and in this case I think I’m happy with the error. This means I have to write a test case that expects an error to be raised. We can do that like so…
Step4: Okay, next up we need to test for large numbers. When it came to small numbers we can easily work out the correct answer by hand, but for large sums that’s not so easy.
Your first instinct here might be to say "use a calculator" and while that’s true, that solution only works in this very specific case. What we actually want is a more general solution that can solve all sorts of problems.
It turns out that sometimes building code that can generate test cases is a lot easier that building the solver. In this particular example we can do just that...
Let's take a step back and ask ourselves what division actually is. The answer is basically the opposite of multiplication. And so, we can actually write test cases for our function by "reverse engineering" the problem. We know from math that the following is always true
Step6: Most of the time however, the code you want to test will not be so easily reversed engineered. So most of the time your tests are going to be hand-written. And because writing tests can be a bit tedious and time consuming you are going to want tests that are
Step7: This is actually a hard problem to solve efficiently but I don't care about that. Right now, I only care about testing. And this is function that is easy to test.
Test-Driven Development
Sometimes, software developers write tests before they actually write the solution to thier problem. This is called "test-driven development". The advantage of writing tests first is that it forces you to think about the problem in a different way. Instead of thinking about how to solve the problem we instead start out by thinking about the sorts of inputs that are difficult. Sometimes, that means we spot problems faster than we would have otherwise.
Okay, lets write some tests!
Step8: So we have some tests, if "False" gets printed that means the test failed. This is a good start. Notice how these tests are easy to write and understand. We can also quickly add tests by copy & paste plus some tweaks. On the downside, the output is not very informative; Why did a test fail? Here our only option is figure out what test failed and then re-run it, this time makeing a not of the value.
Step9: Okay, so if we want better test output, we should probably write some sort of 'test framework'. For example
Step11: Okay so know that we have spent a little bit of time working on a test framework we can (a) quickly write new tests and (b) we can also clearly see why a test failed.
In test-driven development once you have a small selection of tests you then try to write code that passes the tests. Let's do that now...
Step12: Okay, now that we have a solution, lets run the tests
Step13: One test has failed, can you spot what the problem is?
When you have a failing test, and you are not sure exactly what the problem is a good thing to do is to try to make the test as simple as possible. Let's try that now.
Step14: Okay we have simplified the test and it still fails. Lets make it even simpler!
Step15: And now its not really possible to make the test any simpler. But by now the problem should be easy to understand; it seems as if the first number we are looking for is 2, so if 1 is missing we fail.
Lets test that hypothesis by writing more tests!
Step16: If you like, you can try to fix the function for homework.
Anyway, we built our own test framework for this example. It turns out that we do not need to do that, Python has a lot of frameworks that have been built by other developers far smarter than myself. And so, instead of re-inventing the wheel we should probably just learn on of these frameworks.
There are several options, but in this lecture I shall cover 'doc testing'
Doctesting
You may remember docstrings, the text we put at the very start of a function. Well, write doctests all we have to do is add tests to our docstrings. Its honestly as simple as that. Here is the syntax for a doctest
Step19: By default if all your tests pass nothing will be printed, but should a doctest fail Python will give you all the juicy detail. Lets try it now
Step21: We ran doctests, but since the test past nothing happened. Alright, lets show you want happens on failure
Step23: As you can see, Python ran four tests and two of them failed. It turns out 20 + 2 does not equal 23 and bad_list (surprise surprise) it up to no good.
Overall, I'd recommend beginners use doctesting. Its fairly easy to use and it allows you to quickly type out basic tests for your functions.
As our final exerise for today lets convert our 'print_test_result' tests into doctests... | Python Code:
def divide(a, b):
"a, b are ints or floats. Returns a/b
return a / b
Explanation: Introduction to Testing
Testing is an easy thing to understand but there is also an art to it as well; writing good tests often requires you to try to figure out what input(s) are most likely to break your program.
In addition to this, tests can serve different purposes as well:
Testing for correctness
Testing for speed (benchmarking)
Testing for "Let's check I didn't fuck something up" (a.k.a 'regression testing')
...etc...
All of the above tests have their uses, but as a general rule of thumb a good test suite will include a range of inputs and multiple tests for each.
I would add a small caveat that if there is documentation for a function that says something like "does not work for strings" then although it is possible to write test code for strings what would be the point? The documentation makes it clear that these tests will fail. Instead of writing test code for situations the code was not designed to solve focus on 'realistic' test cases.
Alright, lets write a super simple function that divides A by B:
End of explanation
# Function here...
print (divide(10, 2) == 5.0)
[divide(10.0, 2.0) == 5.0, divide(10,2) == 5.0, divide(0, 1) == 0.0 ]
Explanation: Okay so, this is where we need to put our ‘thinking hat’ on for a moment. The documentation for this function specifically states A and B are supposed to be numbers, so instead of wasting time breaking the code with obviously bad inputs (e.g strings) lets try to break with valid inputs. In other words:
what are the possible integers/floats we can pass in where this function may break?
When dealing with numbers there are, I think, three basic tests that are almost always worth running:
Negative Numbers
Zero
Positive Numbers
And in addition to those tests we should also run tests for:
Small inputs (e.g 10/5)
Very large inputs (e.g 999342493249234234234234 / 234234244353452424 )
You may remember for example in lecture 21 as we tried to optimise our is_prime function we introduced some defects when working with small numbers.
Anyway, the point is these five basic cases will cover a lot of situations you may have with numbers. Obviously you should run several tests for each of these basic test cases. And in addition to the basic tests you should run more function specific tests too; for example, if I have a function that returns the factors of n then it would be wise to run a bunch of tests with prime numbers to check what happens there. You should also test highly composite numbers too (e.g 720, 1260). In regard to our division function a good additional test would be when the numerator is larger than the denominator and vice versa (e.g. try both 10/2 and 2/10). Zero is also a special case for division, but we have already listed it in the basic tests.
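For instance, the numerator/denominator swap just mentioned can be covered by another pair of quick hand-checkable tests (an illustrative pair, in addition to the tests written next):
```
print(divide(10, 2) == 5.0)
print(divide(2, 10) == 0.2)
```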
Okay, so lets write our first tests:
End of explanation
try:
divide(1, 0)
print(False) # note that if the above line of code yields a zeroDiv error, this line of code is not executed.
except ZeroDivisionError:
print(True) # Test pass, dividing by zero yields an error.
Explanation: Now, we know that X/0 is a ZeroDivisionError, the question is what do we want the result to be? Do we want Python to raise the error? or would we prefer Python to do something else such as return a number or perhaps a string.
Remember that errors are not bad, if Python to throws an error when it gets zero as input that’s totally fine, and in this case I think I’m happy with the error. This means I have to write a test case that expects an error to be raised. We can do that like so…
End of explanation
x = 30202020202020202022424354265674567456
y = 95334534534543543543545435543543545345
divide(y * y, y) == float(y)
divide(x * y, y) == float(x)
Explanation: Okay, next up we need to test for large numbers. When it came to small numbers we can easily work out the correct answer by hand, but for large sums that’s not so easy.
Your first instinct here might be to say "use a calculator" and while that’s true, that solution only works in this very specific case. What we actually want is a more general solution that can solve all sorts of problems.
It turns out that sometimes building code that can generate test cases is a lot easier that building the solver. In this particular example we can do just that...
Let's take a step back and ask ourselves what division actually is. The answer is basically the opposite of multiplication. And so, we can actually write test cases for our function by "reverse engineering" the problem. We know from math that the following is always true:
(y * y) / y = y
(x * y) / y = x
And so, so long as we have a function that multiplies correctly, we can be confident that our function is getting the right answer to complex division problems even though we do not know what the right answer is ourselves. In code:
End of explanation
def firstMissingPositive(nums):
Given an unsorted integer array (nums) finds the smallest missing positive integer.
for example:
[0,1,2,3] => returns 4, since 4 is the smallest missing number
[10,11,12] => returns 1
return 3
Explanation: Most of the time however, the code you want to test will not be so easily reversed engineered. So most of the time your tests are going to be hand-written. And because writing tests can be a bit tedious and time consuming you are going to want tests that are:
Fast to write and execute
Easy to understand, they should give clear feedback on what went wrong
Test most/all of the likely scenarios.
For these reasons, its often a good idea to write tests that follow a common format. Great tests are often tests that you can copy and paste, and change into a new test by changing a small handful of values.
To illustrate that, lets suppose I have the following code:
End of explanation
print(firstMissingPositive([1,2,3]) == 4)
print(firstMissingPositive([0,0,1]) == 2)
print(firstMissingPositive([1,2]) == 3)
Explanation: This is actually a hard problem to solve efficiently, but I don't care about that. Right now, I only care about testing. And this is a function that is easy to test.
Test-Driven Development
Sometimes, software developers write tests before they actually write the solution to their problem. This is called "test-driven development". The advantage of writing tests first is that it forces you to think about the problem in a different way. Instead of thinking about how to solve the problem, we instead start out by thinking about the sorts of inputs that are difficult. Sometimes, that means we spot problems faster than we would have otherwise.
Okay, lets write some tests!
End of explanation
print("Got:", firstMissingPositive([1,2,3]))
print("should be 4")
Explanation: So we have some tests; if "False" gets printed, that means the test failed. This is a good start. Notice how these tests are easy to write and understand. We can also quickly add tests by copy & paste plus some tweaks. On the downside, the output is not very informative; why did a test fail? Here our only option is to figure out which test failed and then re-run it, this time making a note of the value.
End of explanation
## Test
def print_test_result(func, input_val, expected_val):
result = func(input_val)
if result == expected_val:
print(f"TEST PASSED")
else:
print(f"TEST FAILED (Got: {result} Expected: {expected_val}, Input was: {input_val})")
##### TESTS GO HERE #####
print_test_result(firstMissingPositive, [1,2,3], 4)
print_test_result(firstMissingPositive, [0,0,1], 2)
print_test_result(firstMissingPositive, [1,2], 3)
print_test_result(firstMissingPositive, [7,6,5,4,3,2], 1)
print_test_result(firstMissingPositive, [1,2,4], 3)
Explanation: Okay, so if we want better test output, we should probably write some sort of 'test framework'. For example:
End of explanation
def firstMissingPositive(nums):
Given an unsorted integer array (nums) finds the smallest missing positive integer.
for example:
[0,1,2,3] => returns 4, since 4 is the smallest missing number
[10,11,12] => returns 1
i = 1
while True:
i += 1
if i not in nums:
return i
Explanation: Okay, so now that we have spent a little bit of time working on a test framework, we can (a) quickly write new tests and (b) clearly see why a test failed.
In test-driven development once you have a small selection of tests you then try to write code that passes the tests. Let's do that now...
End of explanation
print_test_result(firstMissingPositive, [1,2,3], 4)
print_test_result(firstMissingPositive, [0,0,1], 2)
print_test_result(firstMissingPositive, [1,2], 3)
print_test_result(firstMissingPositive, [7,6,5,4,3,2], 1)
print_test_result(firstMissingPositive, [1,2,4], 3)
Explanation: Okay, now that we have a solution, lets run the tests:
End of explanation
print_test_result(firstMissingPositive, [5,4,3,2], 1)
Explanation: One test has failed, can you spot what the problem is?
When you have a failing test, and you are not sure exactly what the problem is a good thing to do is to try to make the test as simple as possible. Let's try that now.
End of explanation
print_test_result(firstMissingPositive, [2], 1)
print_test_result(firstMissingPositive, [], 1)
Explanation: Okay we have simplified the test and it still fails. Lets make it even simpler!
End of explanation
print_test_result(firstMissingPositive, [1,2,3], 4)
print_test_result(firstMissingPositive, [1, 3], 2)
print_test_result(firstMissingPositive, [1], 2)
Explanation: And now its not really possible to make the test any simpler. But by now the problem should be easy to understand; it seems as if the first number we are looking for is 2, so if 1 is missing we fail.
Lets test that hypothesis by writing more tests!
End of explanation
def run_doctests():
import doctest
doctest.testmod()
Explanation: If you like, you can try to fix the function for homework.
Anyway, we built our own test framework for this example. It turns out that we do not need to do that: Python has a lot of test frameworks that have been built by other developers far smarter than myself. And so, instead of re-inventing the wheel, we should probably just learn one of these frameworks.
There are several options, but in this lecture I shall cover 'doc testing'
Doctesting
You may remember docstrings, the text we put at the very start of a function. Well, to write doctests all we have to do is add tests to our docstrings. It's honestly as simple as that. Here is the syntax for a doctest:
>>> {function name} ( {function argument, if any} )
{expected result}
And then once you have done that, you'll need to copy & paste the code below to run the test:
End of explanation
def add(a, b):
>>> add(10, 10)
20
return a + b
run_doctests()
Explanation: By default if all your tests pass nothing will be printed, but should a doctest fail Python will give you all the juicy detail. Lets try it now:
End of explanation
def run_all_the_tests():
>>> 1 + 1
2
>>> print(True)
True
>>> 20 + 2
23
print("testing complete")
run_doctests()
Explanation: We ran doctests, but since the test past nothing happened. Alright, lets show you want happens on failure:
End of explanation
def firstMissingPositive_TESTS():
>>> firstMissingPositive([1,2,3])
4
>>> firstMissingPositive([0,0,1])
2
>>> firstMissingPositive([1,2])
3
>>> firstMissingPositive([1,2,4])
3
>>> firstMissingPositive([2])
1
pass
# Now we run the tests...
import doctest
doctest.run_docstring_examples(firstMissingPositive_TESTS, globals(), verbose=True)
Explanation: As you can see, Python ran four tests and one of them failed: it turns out 20 + 2 does not equal 23.
Overall, I'd recommend beginners use doctesting. It's fairly easy to use and it allows you to quickly type out basic tests for your functions.
As our final exercise for today, let's convert our 'print_test_result' tests into doctests...
End of explanation |
13,945 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Augmented Reality Markers (ar_markers)
Kevin J. Walchko, created 11 July 2017
We are not going to do augmented reality, but we are going to learn how the markers work and use them for robotics. This will also give you insight into how QR codes work. For our applications, we want to be able to mark real-world objects and have our robot know what they are (think road signs or mile markers on the highway).
Objectives
detect AR markers in an image
understand how QR codes work
understand how Hamming codes work
References
Source article for the overview of how the marker works
Hamming (7,4)
Setup
Step1: How Do Markers Work?
There are lots of different types of markers out there in the world. Some are free to use and some are protected by intellectual property rights. Markers that machines can read range from simple bar codes on food products that can be scanned to much more complex 2D and 3D markers. We are going to look at a simple but useful type of 2D marker shown below.
The approach implemented here uses a type of Hamming code with the possibility to correct errors. This error correction is particularly useful when the marker is small or blurred in the image. Also, the idea is to be able to decipher the code provided by the marker without having to rotate it because there is a known pattern. Once that’s done, it becomes easy to use the black and white squares to read the signature and, if necessary, correct the code if an error is found.
Hamming Code [7,4]
First let's take a little side step and understand how a 7 bit hamming code works. In coding theory, Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits. It is a member of a larger family of Hamming codes, but the term Hamming code often refers to this specific code that Richard W. Hamming introduced in 1950. At the time, Hamming worked at Bell Telephone Laboratories and was frustrated with the error-prone punched card reader, which is why he started working on error-correcting codes.
The marker shown above is a 5x5 grid marker with a Hamming code [7,4] to detect and help correct errors. This form of hamming code uses 7 bits with 4 bits of data and 3 bits of parity. This code is capable of correcting 1 error or bit flip.
A nice graphical interpretation is shown above. The idea is the data bits $d_1, d_2, d_3, d_4$ are covered by multiple parity bits giving redundancy. For example $d_1$ is covered by $p_1$ and $p_2$ while $d_2$ is covered by $p_1$ and $p_3$. This redundancy allows the code to correct for 1 error. Errors can come from many sources
Step2: 1 Error
We are going to corrupt each bit one at a time and detect it ... we are not fixing it. Notice if you read the parity code from right to left [$bit_0$, $bit_1$, $bit_2$], you can determine which bit needs to get flipped so the message is correct again.
Step3: 2 Errors
We are going to corrupt two bits at a time and detect the damage ... we are still not fixing anything. Notice now, we can't identify the incorrect bits, but we know something bad is happening. Notice that corrupting bits 1 & 2 and corrupting bits 5 & 6 give the same parity check
Step4: Let's Try Markers Now | Python Code:
%matplotlib inline
from __future__ import print_function
from __future__ import division
import numpy as np
from matplotlib import pyplot as plt
import cv2
import time
# make sure you have installed the library with:
# pip install -U ar_markers
from ar_markers import detect_markers
Explanation: Augmented Reality Markers (ar_markers)
Kevin J. Walchko, created 11 July 2017
We are not going to do augmented reality, but we are going to learn how the markers work and use them for robotics. This will also give you insight into how QR codes work. For our applications, we want to be able to mark real world objects and have our robot know what they are (think road signs or mile markers on the highway).
Objectives
detect AR markers in an image
understand how QR codes work
understand how Hamming codes work
References
Source article for the overview of how the marker works
Hamming (7,4)
Setup
End of explanation
def fix_binary(msg):
    # now, use the modulus operator to ensure it is a binary number
ans = []
for val in msg:
ans.append(val%2)
return np.array(ans)
# encode a message
G = np.array([
[1,1,0,1],
[1,0,1,1],
[1,0,0,0],
[0,1,1,1],
[0,1,0,0],
[0,0,1,0],
[0,0,0,1]
])
# decode and encoded message
R = np.array([
[0, 0, 1, 0, 0, 0, 0],
[0, 0, 0, 0, 1, 0, 0],
[0, 0, 0, 0, 0, 1, 0],
[0, 0, 0, 0, 0, 0, 1],
])
# check parity
H = np.array([
[1, 0, 1, 0, 1, 0, 1],
[0, 1, 1, 0, 0, 1, 1],
[0, 0, 0, 1, 1, 1, 1],
])
# a 4 bit message we want to send
msg = np.array([1,0,1,1])
e = fix_binary(G.dot(msg))
print('encoded msg:', e)
parity_check = fix_binary(H.dot(e))
print('parity check:', parity_check)
decoded_msg = R.dot(e)
print('decoded message:', decoded_msg)
print('Does msg == decoded msg?', msg == decoded_msg)
Explanation: How Do Markers Work?
There are lots of different types of markers out there in the world. Some are free to use and some are protected by intellectual property rights. Markers that machines can read range from simple bar codes on food products that can be scanned to much more complex 2D and 3D markers. We are going to look at a simple but useful type of 2D marker shown below.
The approach implemented here uses a type of Hamming code with the possibility to correct errors. This error correction is particularly useful when the marker is small or blurred in the image. Also, the idea is to be able to decipher the code provided by the marker without having to rotate it because there is a known pattern. Once that’s done, it becomes easy to use the black and white squares to read the signature and, if necessary, correct the code if an error is found.
Hamming Code [7,4]
First let's take a little side step and understand how a 7 bit hamming code works. In coding theory, Hamming(7,4) is a linear error-correcting code that encodes four bits of data into seven bits by adding three parity bits. It is a member of a larger family of Hamming codes, but the term Hamming code often refers to this specific code that Richard W. Hamming introduced in 1950. At the time, Hamming worked at Bell Telephone Laboratories and was frustrated with the error-prone punched card reader, which is why he started working on error-correcting codes.
The marker shown above is a 5x5 grid marker with a Hamming code [7,4] to detect and help correct errors. This form of hamming code uses 7 bits with 4 bits of data and 3 bits of parity. This code is capable of correcting 1 error or bit flip.
A nice graphical interpretation is shown above. The idea is the data bits $d_1, d_2, d_3, d_4$ are covered by multiple parity bits giving redundancy. For example $d_1$ is covered by $p_1$ and $p_2$ while $d_2$ is covered by $p_1$ and $p_3$. This redundancy allows the code to correct for 1 error. Errors can come from many sources:
the marker is damaged
a read error occurred due to:
lighting
camera noise
atmosphere interference
data bit falls between 2 pixels and is read incorrectly
etc
Given a 4 bit message (m) we can encode it using a code generator matrix ($G_{7 \times 4}$). We can also check the parity of an encoded message (e) using a parity check matrix ($H_{3 \times 7}$). To decode an encoded message we will use a regeneration matrix ($R_{4 \times 7}$). Where:
$$
G = \begin{bmatrix}
1 & 1 & 0 & 1 \\
1 & 0 & 1 & 1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix} \\
H = \begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 & 1 \\
0 & 1 & 1 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{bmatrix} \\
R = \begin{bmatrix}
0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1
\end{bmatrix} \\
e = G \times m = \begin{bmatrix} p_1 & p_2 & d_1 & p_3 & d_2 & d_3 & d_4 \end{bmatrix} \\
\text{parity check} = H \times e \\
m = R \times e
$$
A good message has a parity check result of $\begin{bmatrix} 0 & 0 & 0 \end{bmatrix}$. If an error is present, the parity bits form a binary number which tells you which of the 7 bits is flipped. Again, this scheme can detect and correct only 1 error.
Step 1: Find the Pattern
Once the marker's borders are found in the image, we are looking at four specific squares placed at the corners of our 5×5 pattern (see the picture). These registration marks tell us where the data and parity bits are in the 5x5 array.
Step 2: Read the Signature
Once the orientation is decided we can construct the signature. In the 5×5 case it is straightforward to read 3 signatures that contain 7 bits each. Then for each signature:
compute the binary parity vector (composed of 3 bits) and check if any error,
if any error, correct it using the binary parity vector corresponding value,
then extract the 4 bits of data and group them using the 3 signatures.
Step 3: Calculate the Code
Finally, using the bits of data contained in the 3 signatures, compute the code that corresponds to this binary vector.
Once errors are checked and corrected, the 3 signatures (green, red and blue areas) are used to generate the binary code to decipher (12 bits aligned at the bottom). So our marker has 5 x 5 bits (black or white squares) which give us:
4 are used to understand orientation, the outer corners
9 are used to control errors and correct (if possible)
12 are used for our id
Thus we have a marker that can encode a 12-bit id ($2^{12}$ values), i.e. a value between 0 and 4095.
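To make the last step concrete, here is a tiny sketch of how the 12 corrected data bits could be packed into the marker id (the exact bit ordering used by the ar_markers library is an assumption on my part, so treat this purely as an illustration):
def signatures_to_id(signatures):
    # signatures: three 4-bit arrays of corrected data bits, one per signature
    marker_id = 0
    for nibble in signatures:
        for bit in nibble:
            marker_id = (marker_id << 1) | int(bit)
    return marker_id  # an integer between 0 and 4095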
Let's Try It Out
Here is what we are going to do:
setup our environment
play with Hamming Codes to understand better
read in the image
run it through the ar_markers detection function
End of explanation
for i in range(7):
e = fix_binary(G.dot(msg))
print('Corrupt bit', i+1, '--------------------------------------')
e[i] = 0 if e[i] == 1 else 1
parity_check = fix_binary(H.dot(e))
print(' parity check:', parity_check)
decoded_msg = R.dot(e)
Explanation: 1 Error
We are going to corrupt each bit one at a time and detect it ... we are not fixing it. Notice if you read the parity code from right to left [$bit_0$, $bit_1$, $bit_2$], you can determine which bit needs to get flipped so the message is correct again.
End of explanation
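# A small sketch of actually fixing a single-bit error (my addition, reusing the G, H, R
# and fix_binary defined above; the cells in this notebook only detect the error).
# Read the parity check with its first entry as the least significant bit: the resulting
# value is the 1-based position of the flipped bit, and 0 means no error was detected.
def correct_single_error(encoded):
    syndrome = fix_binary(H.dot(encoded))
    pos = syndrome[0]*1 + syndrome[1]*2 + syndrome[2]*4
    fixed = encoded.copy()
    if pos != 0:
        fixed[pos - 1] = 1 - fixed[pos - 1]  # flip the offending bit back
    return fixed

e = fix_binary(G.dot(msg))
e[4] = 1 - e[4]  # corrupt bit 5
print('recovered message:', R.dot(correct_single_error(e)))  # matches msg again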
for i in range(7):
e = fix_binary(G.dot(msg))
e[i] = 0 if e[i] == 1 else 1
j = (i+1)%7
e[j] = 0 if e[j] == 1 else 1
print('Corrupt bit', i+1, 'and', j+1, '--------------------------------------')
parity_check = fix_binary(H.dot(e))
print(' parity check:', parity_check)
decoded_msg = R.dot(e)
Explanation: 2 Errors
We are going to corrupt two bits at a time and detect the damage ... we are still not fixing anything. Notice now, we can't identify the incorrect bits, but we know something bad is happening. Notice that corrupting bits 1 & 2 and corrupting bits 5 & 6 both give the same parity check, [1, 1, 0], which is also the syndrome a single flipped bit 3 would produce, so the code knows something went wrong but can no longer correct it. If you need protection from more than 1 error, then you need to select a different algorithm.
End of explanation
img = cv2.imread('ar_marker_pics/flyer-hamming.png')
plt.imshow(img);
print('image dimensions [width, height, color depth]:', img.shape)
markers = detect_markers(img)
# let's print the id of the marker
print(markers[0].id)
Explanation: Let's Try Markers Now
End of explanation |
13,946 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started With Python
Installation
There are various ways to install Python. Assuming the reader is not (yet) well versed in programming, I suggest to download the Anaconda distribution, which works for all major operating systems (Mac OS X, Windows, Linux) and provides a fully fledged Python installation including necessary libraries and Jupyter notebooks.
Go to https
Step1: Data Types in Python
Building Blocks
To analyze data in Python, data has to be stored as some kind of data type. These data types form the structure of the data and make data easily accessible. Python's basic building blocks are
Step2: Running Code in Jupyter
How did I execute this code section? If I only want to run a code cell, I select the cell and hit ctrl + enter. If you wish to run the entire notebook, go to Kernel (dropdown menu) and select "Restart & Run All". There are lots of shortcuts that let you handle a Jupyter notebook just from the keyboard. Press H on your keyboard (or go to Help/Keyboard Shortcuts) to see all of the shortcuts.
Simple Arithmetics
Simple arithmetic operations are straight forward
Step3: We can even use arithmetic operators to concatenate strings
Step4: Lists
Now let's look at lists. Lists are capable of combining multiple data types.
Step5: Note that the third element in list e is set in quotation marks and thus Python interprets this as string.
NumPy Arrays
NumPy Arrays from Lists
For as useful as lists appear, their flexibility comes at a high cost. Because each element contains not only the value itself but also information about the data type, storing data in lists consumes a lot of memory. For this reason the Python community introduced NumPy (short for Numerical Python). Among other things, this package provides fixed-type arrays which are more efficient to store and operate on dense data than simple lists.
Fixed-type arrays are dense arrays of uniform type. Uniform here means that all entries of the array have the same data type, e.g. all are floating point numbers. We start by importing the NumPy package (following general convention we import this package under the alias name np) and create some simple NumPy arrays from Python lists.
Step6: Similarly, we can explicitly set the data type
Step7: We've seen above that we can define the data type for NumPy arrays. A list of available data types can be found in the NumPy documentation.
NumPy Arrays from Scratch
Sometimes it is helpful to create arrays from scratch. Here are some examples
Step8: Note that I do not need to use np.ones(shape=(2, 3), dtype='float32') to define the shape. It's ok to go with the short version as long as the order is correct, Python will understand. However, going with the explicit version helps to make your code more readable and I encourage students to follow this advice.
Each predefined function is documented in a help page. One can search for it by calling ?np.ones, np.ones? or help(np.ones). If it is not clear how the function is precisely called, it is best to use * (wildcard character) - as in np.*ne*?. This will list all functions in the NumPy library which contain 'ne' in their name. In our example np.ones? shows the following details
Step9: The function description also shows examples of how the function can be used. Often this is very helpful, but for the sake of brevity, these are omitted here.
In the function description we see that the order of arguments is shape, dtype, order. If our inputs are in this order, one does not need to specify the argument. Furthermore, we see that dtype and order is optional, meaning that if left undefined, Python will simply use the default argument. This makes for a very short code. However, going with the longer explicit version helps to make your code more readable and I encourage students to follow this advice.
Step10: Arrays with random variables are easily created. Below three examples. See the numpy.random documentation page for details on how to generate other random variable-arrays.
Step11: NumPy Array Attributes
Each NumPy array has certain attributes.
Here are some attributes we can call
Step12: A brief note regarding the setting of the seed
Step13: And here's how we call for these properties
Step14: Index
Step15: To access the end of an array, you can also use negative indices
Step16: Arrays are also possible as inputs
Step17: Knowing the index we can also replace elements of an array
Step18: IMPORTANT NOTE
Step19: Array Slicing
We can also use square brackets to access a subset of the data. The syntax is
Step20: Array slicing works the same for multidimensional arrays.
Step21: IMPORTANT NOTE
Step22: Concatenating, Stacking and Splitting
Often it is useful to combine multiple arrays into one or to split a single array into multiple arrays. To accomplish this, we can use NumPy's concatenate and vstack/hstack function.
Step23: The opposite of concatenating is splitting. Numpy has np.split, np.hsplit and np.vsplit functions. Each of these takes a list of indices, giving the split points, as input.
Step24: Conditions
Boolean Operators
Boolean operators check an input and return either True (equals 1 as value) or False (equals 0). This is often very helpful if one wants to check for conditions or sort out part of a data set which meet a certain condition. Here are the common comparison operators
Step25: If ... else statements
These statements check a given condition and depending on the result (True, False) execute a subsequent code. As usual, an example will do. Notice that indentation is necessary for Python to correctly compile the code.
Step26: It is also possible to have more than one condition as the next example shows.
Step27: Combining these two statements would make for a nested if ... else statement.
Step28: Loops
"For" Loops
"For" loops iterate over a given sequence. They are very easy to implement as the following example shows. We start with an example and give some explanations afterwards.
For our example, let's assume you ought to sum up the integer values of a sequence from 10 to 1 with a loop. There are obviously more efficient ways of doing this but this serves well as an introductory example. From primary school we know the result is easily calculated as
$$
\begin{equation}
\sum_{i=1}^n x_i = \dfrac{n (n+1)}{2} \qquad -> \qquad \dfrac{10 \cdot 11}{2} = 55
\end{equation}
$$
Step29: A few important notes
Step30: "While" Loops
"While" loops execute as long as a certain boolean condition is met. Picking up the above example we can formulate the following loop
Step32: Functions
Functions come into play when either a task needs to be performed more than once or when it helps to reduce the complexity of a code.
Following up on our play examples from above, let us assume we are tasked to write a function which sums up all even and all odd integers of a vector.
Step33: Commenting
Above code snippet not only shows how functions are set up but also displays the importance of comments. Comments are preceded by a hash sign (#), such that the interpreter will not parse what follows the hash. When programming, you should always comment your code to notate your work. This details your steps/thoughts/ideas not only for other developers but also for you when you pick up your code some time after writing it. Good programmers make heavy use of commenting and I strongly encourage the reader to follow this standard.
Slowness of Loops
It is at this point important to note that loops should only be used as a last resort. Below we show why. The first code runs our previously defined function. The second code uses NumPy's built-in function.
Step34: Above timing results show what was hinted before | Python Code:
from IPython.display import YouTubeVideo
from datetime import timedelta
YouTubeVideo('jZ952vChhuI')
Explanation: Getting Started With Python
Installation
There are various ways to install Python. Assuming the reader is not (yet) well versed in programming, I suggest to download the Anaconda distribution, which works for all major operating systems (Mac OS X, Windows, Linux) and provides a fully fledged Python installation including necessary libraries and Jupyter notebooks.
Go to https://www.anaconda.com/distribution/#download-section
Download latest version corresponding to your OS
Run .exe/.pkg file.
Make sure to set flag "Add Anaconda to my PATH environment variable" and
"Register Anaconda as my default Python 3.x"
Be sure to download the latest Python 3.x version (not 2.x; this older Python version should by now no longer be a topic and backward compatibility is anyway no longer given!).
The installation should not cause any problems. If you wish a step-by-step guide (incl. some further insights) see Cyrille Rossant's excellent notebook or simply the Jupyter Documentation on the topic.
IPython / Jupyter Notebooks
What you see here is a so called Jupyter notebook. It makes it possible to interactively combine code with output (results, graphics), markdown text and LaTeX (for mathematical expressions). All codes discussed in this course will be provided through such notebooks and you soon will understand and appreciate the functionality they provide.
The basic markdown commands are well summarized here.
LaTeX is a typesetting language with extensive capabilities to typeset math. For a basic introduction to math in LaTeX see sections 3.3 - 3.4 (p. 22 - 33) of More Math into LaTeX by Grätzer (2007), available as pdf here. Also very insightful are the video tutorials by Dr. Trefor Bazett on YouTube, see e.g. here.
If you are keen on learning more about IPython/Jupyter, consider this notebook - a well written introduction by Cyrille Rossant. A more comprehensive intro can be found here and a short video-intro is provided below.
End of explanation
a = 42 # In VBA you would first have to define data type, only then the value: Dim a as integer; a = 42
b = 10.3 # VBA: Dim b as Double; b = 10.3
c = 'hello' # VBA: Dim c as String; c = "hello"
d = True # VBA: Dim d as Boolean; d = True
print('a: ', type(a))
print('b: ', type(b))
print('c: ', type(c))
print('d: ', type(d))
Explanation: Data Types in Python
Building Blocks
To analyze data in Python, data has to be stored as some kind of data type. These data types form the structure of the data and make data easily accessible. Python's basic building blocks are:
* Numbers (integer, floating point, and complex)
* Booleans (true/false)
* Strings
* Lists
* Dictionaries
* Tuples
We will discuss the first four data types above as these are relevant for us.
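Purely as a teaser for the two building blocks we will not cover further (they are listed above for completeness), here is what they look like; the example values are made up:
f = {'price': 10.3, 'ticker': 'ABC'}  # dictionary: a collection of key-value pairs
g = (1, 2.5, 'hello')                 # tuple: like a list, but immutable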
Python is a dynamically typed language, meaning that - unlike in static languages such as VBA, C, Java etc. - you do not explicitly need to assign a data type to a variable. Python will do that for you. A few examples will explain this best:
End of explanation
a = 2 + 4 - 8 # Addition & Subtraction
b = 6 * 7 / 3 - 2 # Multiplication & Division
c = 2**(1/2) # Exponents & Square root
d = 10 % 3 # Modulus
e = 10 // 3 # Floor division
print(' a =', a, '\n',
'b =', b, '\n',
'c =', c, '\n',
'd =', d, '\n',
'e =', e)
Explanation: Running Code in Jupyter
How did I execute this code section? If I only want to run a code cell, I select the cell and hit ctrl + enter. If you wish to run the entire notebook, go to Kernel (dropdown menu) and select "Restart & Run All". There are lots of shortcuts that let you handle a Jupyter notebook just from the keyboard. Press H on your keyboard (or go to Help/Keyboard Shortcuts) to see all of the shortcuts.
Simple Arithmetics
Simple arithmetic operations are straight forward:
End of explanation
a = 'Hello'
b = 'World!'
print(a + ' ' + b)
print(a * 3)
Explanation: We can even use arithmetic operators to concatenate strings:
End of explanation
e = ['Calynn', 'Dillon', '10.3', c, d]
print('e: ', type(e))
print([type(item) for item in e])
Explanation: Lists
Now let's look at lists. Lists are capable of combining multiple data types.
End of explanation
import numpy as np
# Integer array
np.array([3, 18, 12])
# Floating point array
np.array([3., 18, 12])
Explanation: Note that the third element in list e is set in quotation marks and thus Python interprets this as a string.
NumPy Arrays
NumPy Arrays from Lists
For as useful as lists appear, their flexibility comes at a high cost. Because each element contains not only the value itself but also information about the data type, storing data in lists consumes a lot of memory. For this reason the Python community introduced NumPy (short for Numerical Python). Among other things, this package provides fixed-type arrays which are more efficient to store and operate on dense data than simple lists.
Fixed-type arrays are dense arrays of uniform type. Uniform here means that all entries of the array have the same data type, e.g. all are floating point numbers. We start by importing the NumPy package (following general convention we import this package under the alias name np) and create some simple NumPy arrays from Python lists.
End of explanation
np.array([3, 18, 12], dtype='float32')
# Multidimensional arrays
np.array([range(i, i + 3) for i in [1, 2, 3]])
Explanation: Similarly, we can explicitly set the data type:
End of explanation
# Integer array with 8 zeros
np.zeros(shape=8, dtype='int')
# 2x3 floating-point array filled with 1s
np.ones((2, 3), 'float32')
Explanation: We've seen above that we can define the data type for NumPy arrays. A list of available data types can be found in the NumPy documentation.
NumPy Arrays from Scratch
Sometimes it is helpful to create arrays from scratch. Here are some examples:
End of explanation
#np.one?
Explanation: Note that I do not need to use np.ones(shape=(2, 3), dtype='float32') to define the shape. It's ok to go with the short version as long as the order is correct, Python will understand. However, going with the explicit version helps to make your code more readable and I encourage students to follow this advice.
Each predefined function is documented in a help page. One can search for it by calling ?np.ones, np.ones? or help(np.ones). If it is not clear how the function is precisely called, it is best to use * (wildcard character) - as in np.*ne*?. This will list all functions in the NumPy library which contain 'ne' in their name. In our example np.ones? shows the following details:
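In case you are not running the cell above interactively, the core of what np.ones? reports is the call signature (quoted from memory here, so the exact wording may differ slightly between NumPy versions):
np.ones(shape, dtype=None, order='C')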
End of explanation
# 3x2 array filled with 2.71
np.full(shape=(3, 2), fill_value=2.71)
# 2x2 boolean array filled with 'True'
np.full((2, 2), 1, bool)
# Array filled with linear sequence
np.arange(start = 0, stop = 1, step = 0.1) # or simply np.arange(0, 1, 0.1)
# Array of evenly spaced values
np.linspace(start = 0, stop = 1, num = 4)
Explanation: The function description also shows examples of how the function can be used. Often this is very helpful, but for the sake of brevity, these are omitted here.
In the function description we see that the order of arguments is shape, dtype, order. If our inputs are in this order, one does not need to specify the argument names. Furthermore, we see that dtype and order are optional, meaning that if left undefined, Python will simply use the default argument. This makes for very short code. However, going with the longer explicit version helps to make your code more readable and I encourage students to follow this advice.
End of explanation
# 4x4 array of uniformly distributed random variables
np.random.random_sample(size = (4, 4))
# 3x3 array of normally distributed random variables (with mean = 4, sd = 6)
np.random.normal(loc = 4, scale = 6, size = (3, 3))
# 3x3 array of random integers in interval [0, 15)
np.random.randint(low = 0, high = 15, size = (3, 3))
# 4x4 identity matrix
np.eye(4)
Explanation: Arrays with random variables are easily created. Below are three examples. See the numpy.random documentation page for details on how to generate other random variable-arrays.
End of explanation
np.random.seed(1234) # Set seed for reproducibility
x = np.random.randint(10, size = 6) # 1-dimensional array (vector)
y = np.random.randint(10, size = (3, 4)) # 2-dimensional array (matrix)
z = np.random.randint(10, size = (3, 4, 5)) # 3-dimensional array
Explanation: NumPy Array Attributes
Each NumPy array has certain attributes.
Here are some attributes we can call:
| Attribute | Description |
|-----------|------------------------|
| ndim | No. of dimensions |
| shape | Size of each dimension |
| size | Total size of array |
| dtype | Data type of array |
| itemsize | Size (in bytes) |
| nbytes | Total size (in bytes) |
To show how one can access them we'll define three arrays.
End of explanation
rng = np.random.default_rng(seed=42)
rng.random(size=(3, 3))
Explanation: A brief note regarding the setting of the seed: NumPy suggests a new way of setting the seed. See the details here. This is what is suggested moving forward:
End of explanation
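# The legacy np.random.* calls used earlier have direct counterparts on the new Generator
# object (shown here as an aside; consult the numpy.random docs for your installed version):
rng = np.random.default_rng(seed=1234)
rng.integers(low=0, high=15, size=(3, 3))  # replaces np.random.randint
rng.normal(loc=4, scale=6, size=(3, 3))    # replaces np.random.normal
rng.random(size=(4, 4))                    # replaces np.random.random_sample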
print(' ', x, '\n\n', y, '\n\n', z)
print('ndim: ', z.ndim)
print('shape: ', z.shape)
print('size: ', z.size)
print('data type: ', z.dtype)
print('itemsize: ', z.itemsize)
print('nbytes: ', z.nbytes)
Explanation: And here's how we call for these properties:
End of explanation
print(e, '\n') # List from above
print(x, '\n') # One dimensional np array from above
print(y, '\n') # Two dimensional np array from above
e[2]
e[2] * 2
x[5]
y[2, 0] # Note again that [m, n] starts counting for both rows (m) as well as columns (n) from 0
Explanation: Index: How to Access Elements
What might be a bit counterintuitive at the beginning is that Python's indexing starts at 0. Other than that, accessing the $i$'th element (starting at 0) of a list or an array is straightforward.
End of explanation
e[-1]
y[-2, 2]
Explanation: To access the end of an array, you can also use negative indices:
End of explanation
ind = [3, 5, -4]
x[ind]
x = np.arange(12).reshape((3, 4))
print(x)
row = np.array([1, 2])
col = np.array([0, 3])
x[row, col]
Explanation: Arrays are also possible as inputs:
End of explanation
x[0] = 99
x
Explanation: Knowing the index we can also replace elements of an array:
End of explanation
x[0] = 3.14159; x
Explanation: IMPORTANT NOTE:
NumPy arrays have a fixed type. This means that e.g. if you insert a floating-point value to an integer array, the value will be truncated!
End of explanation
x = np.arange(10)
x
x[:3] # First three elements
x[7:] # Elements AFTER 7th element
x[4:8] # Element 5, 6, 7 and 8
x[::2] # Even elements
x[1::2] # Odd elements
x[::-1] # All elements reversed
x[::-2] # Odd elements reversed
Explanation: Array Slicing
We can also use square brackets to access a subset of the data. The syntax is:
x[start:stop:step]
The default values are: start=0, stop='size of dimension', step=1
End of explanation
y # from above
y[:2, :3] # Rows 0 and 1, columns 0, 1, 2
y[:, 2] # Third column
y[0, :] # First row
Explanation: Array slicing works the same for multidimensional arrays.
End of explanation
ySub = y[:2, :2]
print(ySub)
ySub[0, 0] = 99
print(ySub, '\n')
print(y)
ySubCopy = y[:2, :2].copy()
ySubCopy[0, 0] = 33
print(ySubCopy, '\n')
print(y)
Explanation: IMPORTANT NOTE:
When slicing and assigning part of an existing array to a new variable, the new variable will only hold a "view" but not a copy. This means that if you change a value in the new array, the original array will also be changed. The idea behind this is to save memory. But fear not: with the ".copy()" method, you still can get a true copy.
Here are a few corresponding examples for better understanding:
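If you ever want to check programmatically whether two arrays share memory, NumPy provides a helper for exactly that (a side tip, not used elsewhere in this chapter):
print(np.shares_memory(y, y[:2, :2]))         # True  -> a view into y
print(np.shares_memory(y, y[:2, :2].copy()))  # False -> an independent copy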
End of explanation
x = np.array([1, 2, 3])
y = np.array([11, 12, 13])
z = np.array([21, 22, 23])
np.concatenate([x, y, z])
# Stack two vectors horizontally
np.hstack([x, y])
# Stack two vectors vertically
np.vstack([x, y])
# Stack matrix with column vector
m = np.arange(0, 9, 1).reshape((3, 3))
np.vstack([m, z])
# Stack matrix with row vector
np.hstack([m, z.reshape(3, 1)])
Explanation: Concatenating, Stacking and Splitting
Often it is useful to combine multiple arrays into one or to split a single array into multiple arrays. To accomplish this, we can use NumPy's concatenate and vstack/hstack functions.
End of explanation
x = np.arange(8.0)
a, b, c = np.split(x, [3, 5])
print(a, b, c)
x = np.arange(16).reshape(4, 4)
upper, lower = np.vsplit(x, [3])
print(upper, '\n\n', lower)
left, right = np.hsplit(x, [2])
print(left, '\n\n', right)
Explanation: The opposite of concatenating is splitting. Numpy has np.split, np.hsplit and np.vsplit functions. Each of these takes a list of indices, giving the split points, as input.
End of explanation
x = np.arange(start=0, stop=8, step=1)
print(x)
print(x == 2)
print(x != 3)
print((x < 2) | (x > 6))
# Notice the difference
print(x[x <= 4])
print(x <= 4)
Explanation: Conditions
Boolean Operators
Boolean operators check an input and return either True (equals 1 as value) or False (equals 0). This is often very helpful if one wants to check for conditions or sort out the part of a data set which meets a certain condition. Here are the common comparison operators:
| Operator | Description |
|:------------:|----------------------------|
| == | equal ($=$) |
| != | not equal ($\neq$) |
| < | less than ($<$) |
| <= | less or equal ($\leq$) |
| > | greater ($>$) |
| >= | greater or equal ($\geq$) |
| & | Mathematical AND ($\land$) |
| | | Mathematical OR ($\lor$) |
| in | element of ($\in$) |
The following sections give a glimpse of how these operators can be used.
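The one operator from the table that the snippets below do not use is in; a quick illustration (added here for completeness, reusing the array x from above):
print(3 in [1, 3, 5])  # True: membership test for a list
print(3 in x)          # True: also works for a NumPy array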
End of explanation
x = 4
if x%2 == 0:
print(x, 'is an even number')
else:
print(x, 'is an odd number')
Explanation: If ... else statements
These statements check a given condition and depending on the result (True, False) execute the subsequent code. As usual, an example will do. Notice that indentation is necessary for Python to correctly interpret the code.
End of explanation
x = 20
if x > 0:
print(x, 'is positive')
elif x < 0:
print(x, 'is negative')
else:
print(x, 'is neither strictly positive nor strictly negative')
Explanation: It is also possible to have more than one condition as the next example shows.
End of explanation
x = -3
if x > 0:
if (x%2) == 0:
print(x, 'is positive and even')
else:
print(x, 'is positive and odd')
elif x < 0:
if (x%2) == 0:
print(x, 'is negative and even')
else:
print(x, 'is negative and odd')
else:
print(x, 'is 0')
Explanation: Combining these two statements would make for a nested if ... else statement.
End of explanation
seq = np.arange(start=10, stop=0, step=-1)
seq
seqSum = 0
for value in seq:
seqSum = seqSum + value
seqSum
Explanation: Loops
"For" Loops
"For" loops iterate over a given sequence. They are very easy to implement as the following example shows. We start with an example and give some explanations afterwards.
For our example, let's assume you ought to sum up the integer values of a sequence from 10 to 1 with a loop. There are obviously more efficient ways of doing this but this serves well as an introductory example. From primary school we know the result is easily calculated as
$$
\begin{equation}
\sum_{i=1}^n x_i = \dfrac{n (n+1)}{2} \qquad -> \qquad \dfrac{10 \cdot 11}{2} = 55
\end{equation}
$$
End of explanation
seq = seq.reshape(2, 5)
seq
seqSum = 0
row = seq.shape[0]
col = seq.shape[1]
for rowIndex in range(0, row):
for colIndex in range(0, col):
seqSum = seqSum + seq[rowIndex, colIndex]
seqSum
Explanation: A few important notes:
* Indentation is not just here for better readability of the code but it is actually necessary for Python to correctly interpret the code.
* Though it is not strictly necessary, we initialize seqSum = 0 here. Otherwise, if we run the code repeatedly we add to the previous total!
* value takes on every value in array seq. In the first loop value=10, second loop value=9, etc.
Loops can be nested, too. Here's an example.
End of explanation
seqSum = 0
i = 10
while i >= 1:
seqSum = seqSum + i
i = i - 1 # Also: i -= 1
print(seqSum)
Explanation: "While" Loops
"While" loops execute as long as a certain boolean condition is met. Picking up the above example we can formulate the following loop:
End of explanation
def sumOddEven(vector):
    """
    Calculates sum of odd and even numbers in array.
    Args:
        vector: NumPy array of length n
    Returns:
        odd: Sum of odd numbers
        even: Sum of even numbers
    """
# Initiate values
odd = 0
even = 0
# Loop through values of array; check for each
# value whether it is odd or even and add to
# previous total.
for value in vector:
if (value % 2) == 0:
even = even + value
else:
odd = odd + value
return odd, even
# Initiate array [1, 2, ..., 99, 100]
seq = np.arange(1, 101, 1)
# Apply function and print results
odd, even = sumOddEven(seq)
print('Odd: ', odd, ', ', 'Even: ', even)
Explanation: Functions
Functions come into play when either a task needs to be performed more than once or when it helps to reduce the complexity of the code.
Following up on our play examples from above, let us assume we are tasked to write a function which sums up all even and all odd integers of a vector.
End of explanation
%%timeit
seq = np.arange(1,10001, 1)
sumOddEven(seq)
%%timeit
seq[(seq % 2) == 0].sum()
seq[(seq % 2) == 1].sum()
Explanation: Commenting
Above code snippet not only shows how functions are set up but also displays the importance of comments. Comments are preceded by a hash sign (#), such that the interpreter will not parse what follows the hash. When programming, you should always comment your code to notate your work. This details your steps/thoughts/ideas not only for other developers but also for you when you pick up your code some time after writing it. Good programmers make heavy use of commenting and I strongly encourage the reader to follow this standard.
Slowness of Loops
It is at this point important to note that loops should only be used as a last resort. Below we show why. The first code runs our previously defined function. The second code uses NumPy's built-in function.
End of explanation
M = np.ones(shape=(3, 3))
v = np.array([1, 2, 3])
M + v
# Notice the difference
vecAdd = v + v
broadAdd = v.reshape((3, 1)) + v
print(vecAdd, '\n')
print(broadAdd)
Explanation: Above timing results show what was hinted before: In 9'999 out of 10'000 cases it is significantly faster to use already built-in functions compared to loops. The simple reason is that modules such as NumPy or Pandas use (at their core) optimized compiled code to calculate the results and this is most certainly faster than a loop.
So in summary: Above examples helped introduce if statements, loops and functions. In real life, however, you should check if Python does not already offer a built-in function for your task. If yes, make sure to use it.
Broadcasting
Computations on Arrays
In closing this chapter we briefly introduce NumPy's broadcasting functionality. Rules for matrix arithmetic apply to NumPy arrays as one would expect and it is left to the reader to explore it. Broadcasting, however, goes one step further in that it allows for element-by-element operations on arrays (and matrices) of different dimensions - which under normal rules would not be compatible. An example shows this best.
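Broadcasting has limits, though: if the trailing dimensions can neither be matched nor stretched from 1, NumPy raises an error. A small illustration of such an incompatible case (my added example, not part of the original chapter):
try:
    np.ones(shape=(3, 2)) + np.arange(3)  # shapes (3, 2) and (3,) do not line up
except ValueError as err:
    print('Broadcasting failed:', err)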
End of explanation |
13,947 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
TensorFlow Tutorial #01
Simple Linear Model
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the so-called MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow. The results are then plotted and discussed.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. It also helps if you have a basic understanding of Machine Learning and classification.
Imports
Step1: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version
Step2: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
Step3: The MNIST data-set has now been loaded and consists of 70.000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
Step4: One-Hot Encoding
The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are
Step5: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
Step6: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
Step7: Data dimensions
The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
Step8: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
Step9: Plot a few images to see if data is correct
Step10: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below
Step11: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. The shape of this placeholder variable is [None, num_classes] which means it may hold an arbitrary number of labels and each label is a vector of length num_classes which is 10 in this case.
Step12: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
Step13: Variables to be optimized
Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
The first variable that must be optimized is called weights and is defined here as a TensorFlow variable that must be initialized with zeros and whose shape is [img_size_flat, num_classes], so it is a 2-dimensional tensor (or matrix) with img_size_flat rows and num_classes columns.
Step14: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
Step15: Model
This simple mathematical model multiplies the images in the placeholder variable x with the weights and then adds the biases.
The result is a matrix of shape [num_images, num_classes] because x has shape [num_images, img_size_flat] and weights has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the biases vector is added to each row of that matrix.
Note that the name logits is typical TensorFlow terminology, but other people may call the variable something else.
Step16: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the logits matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in y_pred.
Step17: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
Step18: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for weights and biases. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the weights and biases of the model.
TensorFlow has a built-in function for calculating the cross-entropy. Note that it uses the values of the logits because it also calculates the softmax internally.
Step19: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
Step20: Optimization method
Now that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the basic form of Gradient Descent where the step-size is set to 0.5.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
Step21: Performance measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
Step22: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
Step23: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
Step24: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
Step25: Helper-function to perform optimization iterations
There are 50.000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
Step26: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
Step27: Helper-functions to show performance
Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
Step28: Function for printing the classification accuracy on the test-set.
Step29: Function for printing and plotting the confusion matrix using scikit-learn.
Step30: Function for plotting examples of images from the test-set that have been mis-classified.
Step31: Helper-function to plot the model weights
Function for plotting the weights of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
Step32: Performance before any optimization
The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happens to be zero digits.
Step33: Performance after 1 optimization iteration
Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
Step34: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reaction (blue) to images with content in the centre of the circle.
Similarly, the weights used to determine if an image shows a one-digit react positively (red) to a vertical line in the centre of the image, and react negatively (blue) to images with content surrounding that line.
Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed so the weights are only trained on 100 images. After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
Step35: Performance after 10 optimization iterations
Step36: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed.
Step37: The model has now been trained for 1000 optimization iterations, with each iteration using 100 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
Step38: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
Step39: We are now done using TensorFlow, so we close the session to release its resources. | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
Explanation: TensorFlow Tutorial #01
Simple Linear Model
by Magnus Erik Hvass Pedersen
/ GitHub / Videos on YouTube
Introduction
This tutorial demonstrates the basic workflow of using TensorFlow with a simple linear model. After loading the so-called MNIST data-set with images of hand-written digits, we define and optimize a simple mathematical model in TensorFlow. The results are then plotted and discussed.
You should be familiar with basic linear algebra, Python and the Jupyter Notebook editor. It also helps if you have a basic understanding of Machine Learning and classification.
Imports
End of explanation
tf.__version__
Explanation: This was developed using Python 3.5.2 (Anaconda) and TensorFlow version:
End of explanation
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets("data/MNIST/", one_hot=False)
Explanation: Load Data
The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
End of explanation
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
Explanation: The MNIST data-set has now been loaded and consists of 70.000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
End of explanation
data.test.labels[0:5]
data.train.labels[0:5]
Explanation: One-Hot Encoding
The data-set has been loaded as so-called One-Hot encoding. This means the labels have been converted from a single number to a vector whose length equals the number of possible classes. All elements of the vector are zero except for the $i$'th element which is one and means the class is $i$. For example, the One-Hot encoded labels for the first 5 images in the test-set are:
End of explanation
data.test.cls = data.test.labels #np.array([label.argmax() for label in data.test.labels])
data.train.cls = data.train.labels #np.array([label.argmax() for label in data.train.labels])
Explanation: We also need the classes as single numbers for various comparisons and performance measures, so we convert the One-Hot encoded vectors to a single number by taking the index of the highest element. Note that the word 'class' is a keyword used in Python so we need to use the name 'cls' instead.
End of explanation
data.test.cls[0:5]
data.train.cls[0:5]
Explanation: We can now see the class for the first five images in the test-set. Compare these to the One-Hot encoded vectors above. For example, the class for the first image is 7, which corresponds to a One-Hot encoded vector where all elements are zero except for the element with index 7.
End of explanation
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of classes, one class for each of 10 digits.
num_classes = 10
Explanation: Data dimensions
The data dimensions are used in several places in the source-code below. In computer programming it is generally best to use variables and constants rather than having to hard-code specific numbers every time that number is used. This means the numbers only have to be changed in one single place. Ideally these would be inferred from the data that has been read, but here we just write the numbers.
End of explanation
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
Explanation: Helper-function for plotting images
Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
End of explanation
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
Explanation: Plot a few images to see if data is correct
End of explanation
x = tf.placeholder(tf.float32, [None, img_size_flat])
Explanation: TensorFlow Graph
The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.
TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.
TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.
A TensorFlow graph consists of the following parts which will be detailed below:
Placeholder variables used to change the input to the graph.
Model variables that are going to be optimized so as to make the model perform better.
The model which is essentially just a mathematical function that calculates some output given the input in the placeholder variables and the model variables.
A cost measure that can be used to guide the optimization of the variables.
An optimization method which updates the variables of the model.
In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial.
Placeholder variables
Placeholder variables serve as the input to the graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.
First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to [None, img_size_flat], where None means that the tensor may hold an arbitrary number of images with each image being a vector of length img_size_flat.
End of explanation
y_true = tf.placeholder(tf.int64, [None])
Explanation: Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable x. Because we use the sparse version of the cross-entropy further below, the labels are given directly as integer class numbers rather than One-Hot vectors, so the data-type is int64 and the shape is [None], meaning the placeholder holds an arbitrary number of integer labels.
End of explanation
y_true_cls = tf.placeholder(tf.int64, [None])
Explanation: Finally we have the placeholder variable for the true class of each image in the placeholder variable x. These are integers and the dimensionality of this placeholder variable is set to [None] which means the placeholder variable is a one-dimensional vector of arbitrary length.
End of explanation
weights = tf.Variable(tf.zeros([img_size_flat, num_classes]))
Explanation: Variables to be optimized
Apart from the placeholder variables that were defined above and which serve as feeding input data into the model, there are also some model variables that must be changed by TensorFlow so as to make the model perform better on the training data.
The first variable that must be optimized is called weights and is defined here as a TensorFlow variable that must be initialized with zeros and whose shape is [img_size_flat, num_classes], so it is a 2-dimensional tensor (or matrix) with img_size_flat rows and num_classes columns.
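With the numbers used in this tutorial that is a 784 x 10 matrix, i.e. 7,840 weights in total: one weight for every pixel/class combination, in addition to the 10 biases defined next.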
End of explanation
biases = tf.Variable(tf.zeros([num_classes]))
Explanation: The second variable that must be optimized is called biases and is defined as a 1-dimensional tensor (or vector) of length num_classes.
End of explanation
logits = tf.matmul(x, weights) + biases
Explanation: Model
This simple mathematical model multiplies the images in the placeholder variable x with the weights and then adds the biases.
The result is a matrix of shape [num_images, num_classes] because x has shape [num_images, img_size_flat] and weights has shape [img_size_flat, num_classes], so the multiplication of those two matrices is a matrix with shape [num_images, num_classes] and then the biases vector is added to each row of that matrix.
Note that the name logits is typical TensorFlow terminology, but other people may call the variable something else.
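Concretely, for a hypothetical batch of 100 images the shapes are [100, 784] times [784, 10], giving logits of shape [100, 10], and the length-10 biases vector is broadcast (added) to each of the 100 rows.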
End of explanation
y_pred = tf.nn.softmax(logits)
Explanation: Now logits is a matrix with num_images rows and num_classes columns, where the element of the $i$'th row and $j$'th column is an estimate of how likely the $i$'th input image is to be of the $j$'th class.
However, these estimates are a bit rough and difficult to interpret because the numbers may be very small or large, so we want to normalize them so that each row of the logits matrix sums to one, and each element is limited between zero and one. This is calculated using the so-called softmax function and the result is stored in y_pred.
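For reference, the softmax of a row of logits $x$ is defined element-wise as
$$\text{softmax}(x)_j = \exp(x_j) \, / \, \sum_k \exp(x_k)$$
so every output is between zero and one and each row sums to one.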
End of explanation
y_pred_cls = tf.argmax(y_pred, dimension=1)
Explanation: The predicted class can be calculated from the y_pred matrix by taking the index of the largest element in each row.
End of explanation
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits,
labels=y_true)
Explanation: Cost-function to be optimized
To make the model better at classifying the input images, we must somehow change the variables for weights and biases. To do this we first need to know how well the model currently performs by comparing the predicted output of the model y_pred to the desired output y_true.
The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the weights and biases of the model.
TensorFlow has a built-in function for calculating the cross-entropy. Note that it uses the values of the logits because it also calculates the softmax internally.
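In equation form, for a single image whose true class is $c$, this cross-entropy is
$$-\log\left(\text{softmax}(\text{logits})_c\right)$$
which is zero when the model assigns probability one to the correct class and grows without bound as that probability approaches zero.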
End of explanation
cost = tf.reduce_mean(cross_entropy)
Explanation: We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
End of explanation
optimizer = tf.train.AdamOptimizer(learning_rate=0.001).minimize(cost)
Explanation: Optimization method
Now that we have a cost measure that must be minimized, we can create an optimizer. In this case it is the Adam optimizer with a learning-rate of 0.001.
Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
End of explanation
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
Explanation: Performance measures
We need a few more performance measures to display the progress to the user.
This is a vector of booleans whether the predicted class equals the true class of each image.
End of explanation
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
Explanation: This calculates the classification accuracy by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then calculating the average of these numbers.
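For example, a purely illustrative NumPy version of the same calculation (the hypothetical values here are not taken from the model):
import numpy as np
correct = np.array([True, False, True, True])
print(correct.astype(np.float32).mean())   # 0.75, i.e. 75% accuracy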
End of explanation
session = tf.Session()
Explanation: TensorFlow Run
Create TensorFlow session
Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
End of explanation
session.run(tf.global_variables_initializer())
Explanation: Initialize variables
The variables for weights and biases must be initialized before we start optimizing them.
End of explanation
batch_size = 1000
Explanation: Helper-function to perform optimization iterations
There are 50.000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore use Stochastic Gradient Descent which only uses a small batch of images in each iteration of the optimizer.
End of explanation
def optimize(num_iterations):
for i in range(num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
# Note that the placeholder for y_true_cls is not set
# because it is not used during training.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
Explanation: Function for performing a number of optimization iterations so as to gradually improve the weights and biases of the model. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples.
End of explanation
feed_dict_test = {x: data.test.images,
y_true: data.test.labels,
y_true_cls: data.test.cls}
Explanation: Helper-functions to show performance
Dict with the test-set data to be used as input to the TensorFlow graph. Note that we must use the correct names for the placeholder variables in the TensorFlow graph.
End of explanation
def print_accuracy():
# Use TensorFlow to compute the accuracy.
acc = session.run(accuracy, feed_dict=feed_dict_test)
# Print the accuracy.
print("Accuracy on test-set: {0:.1%}".format(acc))
Explanation: Function for printing the classification accuracy on the test-set.
End of explanation
def print_confusion_matrix():
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the predicted classifications for the test-set.
cls_pred = session.run(y_pred_cls, feed_dict=feed_dict_test)
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
# Make various adjustments to the plot.
plt.tight_layout()
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
Explanation: Function for printing and plotting the confusion matrix using scikit-learn.
End of explanation
def plot_example_errors():
# Use TensorFlow to get a list of boolean values
# whether each test-image has been correctly classified,
# and a list for the predicted class of each image.
correct, cls_pred, logits_view, y_pred_view = session.run([correct_prediction, y_pred_cls, logits, y_pred],
feed_dict=feed_dict_test)
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
logits_view = logits_view[incorrect]
y_pred_view = y_pred_view[incorrect]
np.set_printoptions(suppress=True)
np.set_printoptions(precision=3)
# Print logits and softmax (y_pred) of logits, in order
for i in range(9):
print( "Logits: %s" % (np.array( logits_view[i]) ) )
print( "Softmx: %s" % (np.array( y_pred_view[i]) ) )
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
Explanation: Function for plotting examples of images from the test-set that have been mis-classified.
End of explanation
def plot_weights():
# Get the values for the weights from the TensorFlow variable.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Create figure with 3x4 sub-plots,
# where the last 2 sub-plots are unused.
fig, axes = plt.subplots(3, 4)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Only use the weights for the first 10 sub-plots.
if i<10:
# Get the weights for the i'th digit and reshape it.
# Note that w.shape == (img_size_flat, 10)
image = w[:, i].reshape(img_shape)
# Set the label for the sub-plot.
ax.set_xlabel("Weights: {0}".format(i))
# Plot the image.
ax.imshow(image, vmin=w_min, vmax=w_max, cmap='seismic')
# Remove ticks from each sub-plot.
ax.set_xticks([])
ax.set_yticks([])
Explanation: Helper-function to plot the model weights
Function for plotting the weights of the model. 10 images are plotted, one for each digit that the model is trained to recognize.
End of explanation
print_accuracy()
plot_example_errors()
Explanation: Performance before any optimization
The accuracy on the test-set is 9.8%. This is because the model has only been initialized and not optimized at all, so it always predicts that the image shows a zero digit, as demonstrated in the plot below, and it turns out that 9.8% of the images in the test-set happens to be zero digits.
End of explanation
optimize(num_iterations=1)
print_accuracy()
plot_example_errors()
Explanation: Performance after 1 optimization iteration
Already after a single optimization iteration, the model has increased its accuracy on the test-set to 40.7% up from 9.8%. This means that it mis-classifies the images about 6 out of 10 times, as demonstrated on a few examples below.
End of explanation
plot_weights()
Explanation: The weights can also be plotted as shown below. Positive weights are red and negative weights are blue. These weights can be intuitively understood as image-filters.
For example, the weights used to determine if an image shows a zero-digit have a positive reaction (red) to an image of a circle, and have a negative reaction (blue) to images with content in the centre of the circle.
Similarly, the weights used to determine if an image shows a one-digit react positively (red) to a vertical line in the centre of the image, and react negatively (blue) to images with content surrounding that line.
Note that the weights mostly look like the digits they're supposed to recognize. This is because only one optimization iteration has been performed, so the weights are only trained on 1,000 images (a single batch of the size set above). After training on several thousand images, the weights become more difficult to interpret because they have to recognize many variations of how digits can be written.
End of explanation
# We have already performed 1 iteration.
optimize(num_iterations=9)
print_accuracy()
plot_example_errors()
plot_weights()
Explanation: Performance after 10 optimization iterations
End of explanation
# We have already performed 10 iterations.
optimize(num_iterations=990)
print_accuracy()
plot_example_errors()
Explanation: Performance after 1000 optimization iterations
After 1000 optimization iterations, the model only mis-classifies about one in ten images. As demonstrated below, some of the mis-classifications are justified because the images are very hard to determine with certainty even for humans, while others are quite obvious and should have been classified correctly by a good model. But this simple model cannot reach much better performance and more complex models are therefore needed.
End of explanation
plot_weights()
Explanation: The model has now been trained for 1000 optimization iterations, with each iteration using 1,000 images from the training-set. Because of the great variety of the images, the weights have now become difficult to interpret and we may doubt whether the model truly understands how digits are composed from lines, or whether the model has just memorized many different variations of pixels.
End of explanation
print_confusion_matrix()
Explanation: We can also print and plot the so-called confusion matrix which lets us see more details about the mis-classifications. For example, it shows that images actually depicting a 5 have sometimes been mis-classified as all other possible digits, but mostly either 3, 6 or 8.
End of explanation
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
Explanation: We are now done using TensorFlow, so we close the session to release its resources.
End of explanation |
13,948 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sample for KFServing SDK with a custom image
This is a sample for KFServing SDK using a custom image.
The notebook shows how to use KFServing SDK to create, get and delete InferenceService with a custom image.
Setup
Your ~/.kube/config should point to a cluster with KFServing installed.
Your cluster's Istio Ingress gateway must be network accessible.
Build the docker image we will be using.
The goal of custom image support is to allow users to bring their own wrapped model inside a container and serve it with KFServing. Please note that you will need to ensure that your container is also running a web server e.g. Flask to expose your model endpoints. This example extends kfserving.KFModel which uses the tornado web server.
To build and push with Docker Hub set the DOCKER_HUB_USERNAME variable below with your Docker Hub username
Step1: KFServing Client SDK
We will use the KFServing client SDK to create the InferenceService and deploy our custom image.
Step2: Define InferenceService
First define the default endpoint spec, and then define the InferenceService using that endpoint spec.
To use a custom image we need to use V1alpha2CustomSpec, which takes a V1Container from the kubernetes library.
Step3: Create the InferenceService
Call KFServingClient to create InferenceService.
Step4: Check the InferenceService
Step5: Run a prediction
Step6: Delete the InferenceService | Python Code:
# Set this to be your dockerhub username
# It will be used when building your image and when creating the InferenceService for your image
DOCKER_HUB_USERNAME = "your_docker_username"
%%bash -s "$DOCKER_HUB_USERNAME"
docker build -t $1/kfserving-custom-model ./model-server
%%bash -s "$DOCKER_HUB_USERNAME"
docker push $1/kfserving-custom-model
Explanation: Sample for KFServing SDK with a custom image
This is a sample for KFServing SDK using a custom image.
The notebook shows how to use KFServing SDK to create, get and delete InferenceService with a custom image.
Setup
Your ~/.kube/config should point to a cluster with KFServing installed.
Your cluster's Istio Ingress gateway must be network accessible.
Build the docker image we will be using.
The goal of custom image support is to allow users to bring their own wrapped model inside a container and serve it with KFServing. Please note that you will need to ensure that your container is also running a web server e.g. Flask to expose your model endpoints. This example extends kfserving.KFModel which uses the tornado web server.
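The contents of the ./model-server directory are not shown in this notebook. Purely as a hypothetical sketch of what such a wrapped model could look like (the class name, file layout and the echo-style predict logic below are illustrative assumptions, not the actual model-server code), assuming the kfserving SDK's KFModel and KFServer interfaces:
from typing import Dict
import kfserving

class KFServingCustomModel(kfserving.KFModel):
    def __init__(self, name: str):
        super().__init__(name)
        self.name = name
        self.ready = False

    def load(self):
        # load model weights/artifacts here, then mark the model as ready
        self.ready = True

    def predict(self, request: Dict) -> Dict:
        # "instances" follows the KFServing v1 prediction protocol
        instances = request["instances"]
        return {"predictions": instances}  # placeholder logic: echo the inputs back

if __name__ == "__main__":
    model = KFServingCustomModel("kfserving-custom-model")
    model.load()
    kfserving.KFServer(workers=1).start([model])
Under that assumption, the Dockerfile in the same directory only needs to install kfserving and run this script.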
To build and push with Docker Hub set the DOCKER_HUB_USERNAME variable below with your Docker Hub username
End of explanation
from kubernetes import client
from kubernetes.client import V1Container
from kfserving import KFServingClient
from kfserving import constants
from kfserving import utils
from kfserving import V1alpha2EndpointSpec
from kfserving import V1alpha2PredictorSpec
from kfserving import V1alpha2InferenceServiceSpec
from kfserving import V1alpha2InferenceService
from kfserving import V1alpha2CustomSpec
namespace = utils.get_default_target_namespace()
print(namespace)
Explanation: KFServing Client SDK
We will use the KFServing client SDK to create the InferenceService and deploy our custom image.
End of explanation
api_version = constants.KFSERVING_GROUP + '/' + constants.KFSERVING_VERSION
default_endpoint_spec = V1alpha2EndpointSpec(
predictor=V1alpha2PredictorSpec(
custom=V1alpha2CustomSpec(
container=V1Container(
name="kfserving-custom-model",
image=f"{DOCKER_HUB_USERNAME}/kfserving-custom-model"))))
isvc = V1alpha2InferenceService(api_version=api_version,
kind=constants.KFSERVING_KIND,
metadata=client.V1ObjectMeta(
name='kfserving-custom-model', namespace=namespace),
spec=V1alpha2InferenceServiceSpec(default=default_endpoint_spec))
Explanation: Define InferenceService
First define the default endpoint spec, and then define the InferenceService using that endpoint spec.
To use a custom image we need to use V1alpha2CustomSpec, which takes a V1Container from the kubernetes library.
End of explanation
KFServing = KFServingClient()
KFServing.create(isvc)
Explanation: Create the InferenceService
Call KFServingClient to create InferenceService.
End of explanation
KFServing.get('kfserving-custom-model', namespace=namespace, watch=True, timeout_seconds=120)
Explanation: Check the InferenceService
End of explanation
MODEL_NAME = "kfserving-custom-model"
%%bash --out CLUSTER_IP
INGRESS_GATEWAY="istio-ingressgateway"
echo "$(kubectl -n istio-system get service $INGRESS_GATEWAY -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
%%bash -s "$MODEL_NAME" --out SERVICE_HOSTNAME
echo "$(kubectl get inferenceservice $1 -o jsonpath='{.status.url}' | cut -d "/" -f 3)"
import requests
import json
with open('input.json') as json_file:
data = json.load(json_file)
url = f"http://{CLUSTER_IP.strip()}/v1/models/{MODEL_NAME}:predict"
headers = {"Host": SERVICE_HOSTNAME.strip()}
result = requests.post(url, data=json.dumps(data), headers=headers)
print(result.content)
Explanation: Run a prediction
End of explanation
KFServing.delete(MODEL_NAME, namespace=namespace)
Explanation: Delete the InferenceService
End of explanation |
13,949 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation points make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step50: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step52: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import random
#data_dir = './data/simpsons/moes_tavern_lines.txt'
#data_dir = './data/all/simpsons_all.csv'
data_dir = './data/all/simpsons_norm_names_all.csv'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
from collections import Counter
import problem_unittests as tests
from sklearn.feature_extraction.text import CountVectorizer
def create_lookup_tables(text, min_count=1):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
words = text
#cv = CountVectorizer()
#vectorized = cv.fit_transform(text)
#print(vectorized)
word_counts = Counter(words)
#word_counts_2 = Counter(word_counts)
for k in list(word_counts):
if word_counts[k] < min_count:
del word_counts[k]
sorted_vocab = sorted(word_counts, key=word_counts.get, reverse=True)
int_to_vocab = {ii: word for ii, word in enumerate(sorted_vocab)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
print(len(sorted_vocab))
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
return {
'.': '__period__',
',': '__comma__',
'"': '__double_quote__',
';': '__semi-colon__',
'!': '__exclamation__',
'?': '__question__',
'(': '__open_paren__',
')': '__close_paren__',
'--': '__dash__',
'\n': '__endline__'
}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation points make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
input_test = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='targets')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return input_test, targets, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
def get_init_cell(batch_size, rnn_size, lstm_layers=1, keep_prob=1.0):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:param lstm_layers: Number of layers to apply to LSTM
:param keep_prob: Dropout keep probability for cell
:return: Tuple (cell, initialize state)
# A basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
# Add dropout to the cell
dropout_wrapper = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([dropout_wrapper] * lstm_layers)
# Getting an initial state of all zeros
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), 'initial_state')
return cell, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
#embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
#embed = tf.nn.embedding_lookup(embedding, input_data)
#return embed
# consider using:
return tf.contrib.layers.embed_sequence(
input_data, vocab_size=vocab_size, embed_dim=embed_dim)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# note: third argument is placeholder for initial_state
outputs, final_state = tf.nn.dynamic_rnn(
cell=cell, inputs=inputs, dtype=tf.float32)
final_state = tf.identity(final_state, 'final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of RNNs
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
embedding = get_embed(input_data, vocab_size, embed_dim)
lstm_outputs, final_state = build_rnn(cell, embedding)
logits = tf.contrib.layers.fully_connected(
lstm_outputs,
vocab_size,
weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
biases_initializer=tf.zeros_initializer(),
activation_fn=None)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
n_batches = (len(int_text)-1)//(batch_size * seq_length)
int_text = int_text[:n_batches * batch_size * seq_length + 1]
int_text_input_seq = [int_text[i*seq_length:i*seq_length+seq_length] for i in range(0, n_batches * batch_size)]
int_text = int_text[1:]
int_text_output = [int_text[i*seq_length:i*seq_length+seq_length] for i in range(0, n_batches * batch_size)]
all_data = []
for row in range(n_batches):
input_cols = []
target_cols = []
for col in range(batch_size):
input_cols.append(int_text_input_seq[col * n_batches + row])
target_cols.append(int_text_output[col * n_batches + row])
all_data.append([input_cols, target_cols])
return np.array(all_data)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
# reminder: tune hyper params according to advice at
# check out https://nd101.slack.com/messages/C3PJV4741/convo/C3PJV4741-1490412688.590254/
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Number of embedding dimensions
embed_dim = 300
# Sequence Length
seq_length = 16
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 25
keep_prob = 1.0
lstm_layers = 2
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the text word embeddings.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size, lstm_layers=lstm_layers, keep_prob=keep_prob)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
input_tensor = loaded_graph.get_tensor_by_name('input:0')
init_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return input_tensor, init_state_tensor, final_state_tensor, probs_tensor
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def weighted_choice(choices):
Cribbed from http://stackoverflow.com/questions/3679694/a-weighted-version-of-random-choice
total = sum(w for c, w in choices)
r = random.uniform(0, total)
upto = 0
for c, w in choices:
if upto + w >= r:
return c
upto += w
assert False, "Shouldn't get here"
def pick_word(probabilities, int_to_vocab, top_n=5):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
#print('Num probs: {}'.format(len(probabilities)))
top_n_choices = []
for i in range(min(len(probabilities), top_n)):
max_idx = np.argmax(probabilities)
top_n_choices.append((max_idx, probabilities[max_idx]))
probabilities.itemset(max_idx, 0)
#print('Top {} highest indexes: {}'.format(top_n, top_n_choices))
word_idx = weighted_choice(top_n_choices)
word = int_to_vocab[word_idx]
#print('Chosen word: {} (idx: {})'.format(word_idx, word))
return word
#highest_prob_idx = np.squeeze(np.argwhere(probabilities == np.max(probabilities)))
#word_idx = np.random.choice(highest_prob_idx)
#word = int_to_vocab[word_idx]
#return word
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
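As a point of comparison, a more compact way to sample a word in proportion to the full probability vector (shown only as an illustrative alternative to the top-n approach implemented above) is:
return int_to_vocab[np.random.choice(len(probabilities), p=probabilities)]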
End of explanation
gen_length = 400
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'bart_simpson'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation |
13,950 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Poisson Processes
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License
Step1: This chapter introduces the Poisson process, which is a model used to describe events that occur at random intervals.
As an example of a Poisson process, we'll model goal-scoring in soccer, which is American English for the game everyone else calls "football".
We'll use goals scored in a game to estimate the parameter of a Poisson process; then we'll use the posterior distribution to make predictions.
And we'll solve The World Cup Problem.
The World Cup Problem
In the 2018 FIFA World Cup final, France defeated Croatia 4 goals to 2. Based on this outcome
Step2: The result is an object that represents a "frozen" random variable and provides pmf, which evaluates the probability mass function of the Poisson distribution.
Step4: This result implies that if the average goal-scoring rate is 1.4 goals per game, the probability of scoring 4 goals in a game is about 4%.
We'll use the following function to make a Pmf that represents a Poisson distribution.
Step5: make_poisson_pmf takes as parameters the goal-scoring rate, lam, and an array of quantities, qs, where it should evaluate the Poisson PMF. It returns a Pmf object.
For example, here's the distribution of goals scored for lam=1.4, computed for values of k from 0 to 9.
Step6: And here's what it looks like.
Step7: The most likely outcomes are 0, 1, and 2; higher values are possible but increasingly unlikely.
Values above 7 are negligible.
This distribution shows that if we know the goal scoring rate, we can predict the number of goals.
Now let's turn it around
Step8: The parameter, alpha, is the mean of the distribution.
The qs are possible values of lam between 0 and 10.
The ps are probability densities, which we can think of as unnormalized probabilities.
To normalize them, we can put them in a Pmf and call normalize
Step9: The result is a discrete approximation of a gamma distribution.
Here's what it looks like.
Step10: This distribution represents our prior knowledge about goal scoring
Step11: As usual, reasonable people could disagree about the details of the prior, but this is good enough to get started. Let's do an update.
The Update
Suppose you are given the goal-scoring rate, $\lambda$, and asked to compute the probability of scoring a number of goals, $k$. That is precisely the question we answered by computing the Poisson PMF.
For example, if $\lambda$ is 1.4, the probability of scoring 4 goals in a game is
Step12: Now suppose we have an array of possible values for $\lambda$; we can compute the likelihood of the data for each hypothetical value of lam, like this
Step14: And that's all we need to do the update.
To get the posterior distribution, we multiply the prior by the likelihoods we just computed and normalize the result.
The following function encapsulates these steps.
Step15: The first parameter is the prior; the second is the number of goals.
In the example, France scored 4 goals, so I'll make a copy of the prior and update it with the data.
Step16: Here's what the posterior distribution looks like, along with the prior.
Step17: The data, k=4, makes us think higher values of lam are more likely and lower values are less likely. So the posterior distribution is shifted to the right.
Let's do the same for Croatia
Step18: And here are the results.
Step19: Here are the posterior means for these distributions.
Step21: The mean of the prior distribution is about 1.4.
After Croatia scores 2 goals, their posterior mean is 1.7, which is near the midpoint of the prior and the data.
Likewise after France scores 4 goals, their posterior mean is 2.7.
These results are typical of a Bayesian update
Step22: This is similar to the method we use in <<_Addends>> to compute the distribution of a sum.
Here's how we use it
Step23: Pmf provides a function that does the same thing.
Step24: The results are slightly different because Pmf.prob_gt uses array operators rather than for loops.
Either way, the result is close to 75%. So, on the basis of one game, we have moderate confidence that France is actually the better team.
Of course, we should remember that this result is based on the assumption that the goal-scoring rate is constant.
In reality, if a team is down by one goal, they might play more aggressively toward the end of the game, making them more likely to score, but also more likely to give up an additional goal.
As always, the results are only as good as the model.
Predicting the Rematch
Now we can take on the second question
Step25: The following figure shows what these distributions look like for a few values of lam.
Step26: The predictive distribution is a mixture of these Pmf objects, weighted with the posterior probabilities.
We can use make_mixture from <<_GeneralMixtures>> to compute this mixture.
Step27: Here's the predictive distribution for the number of goals France would score in a rematch.
Step28: This distribution represents two sources of uncertainty
Step29: We can use these distributions to compute the probability that France wins, loses, or ties the rematch.
Step30: Assuming that France wins half of the ties, their chance of winning the rematch is about 65%.
Step32: This is a bit lower than their probability of superiority, which is 75%. And that makes sense, because we are less certain about the outcome of a single game than we are about the goal-scoring rates.
Even if France is the better team, they might lose the game.
The Exponential Distribution
As an exercise at the end of this notebook, you'll have a chance to work on the following variation on the World Cup Problem
Step33: To see what the exponential distribution looks like, let's assume again that lam is 1.4; we can compute the distribution of $t$ like this
Step34: And here's what it looks like
Step35: It is counterintuitive, but true, that the most likely time to score a goal is immediately. After that, the probability of each successive interval is a little lower.
With a goal-scoring rate of 1.4, it is possible that a team will take more than one game to score a goal, but it is unlikely that they will take more than two games.
Summary
This chapter introduces three new distributions, so it can be hard to keep them straight.
Let's review
Step37: Exercise
Step38: Exercise | Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print('Downloaded ' + local)
download('https://github.com/AllenDowney/ThinkBayes2/raw/master/soln/utils.py')
from utils import set_pyplot_params
set_pyplot_params()
Explanation: Poisson Processes
Think Bayes, Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
from scipy.stats import poisson
lam = 1.4
dist = poisson(lam)
type(dist)
Explanation: This chapter introduces the Poisson process, which is a model used to describe events that occur at random intervals.
As an example of a Poisson process, we'll model goal-scoring in soccer, which is American English for the game everyone else calls "football".
We'll use goals scored in a game to estimate the parameter of a Poisson process; then we'll use the posterior distribution to make predictions.
And we'll solve The World Cup Problem.
The World Cup Problem
In the 2018 FIFA World Cup final, France defeated Croatia 4 goals to 2. Based on this outcome:
How confident should we be that France is the better team?
If the same teams played again, what is the chance France would win again?
To answer these questions, we have to make some modeling decisions.
First, I'll assume that for any team against another team there is some unknown goal-scoring rate, measured in goals per game, which I'll denote with the Python variable lam or the Greek letter $\lambda$, pronounced "lambda".
Second, I'll assume that a goal is equally likely during any minute of a game. So, in a 90 minute game, the probability of scoring during any minute is $\lambda/90$.
Third, I'll assume that a team never scores twice during the same minute.
Of course, none of these assumptions is completely true in the real world, but I think they are reasonable simplifications.
As George Box said, "All models are wrong; some are useful."
(https://en.wikipedia.org/wiki/All_models_are_wrong).
In this case, the model is useful because if these assumptions are
true, at least roughly, the number of goals scored in a game follows a Poisson distribution, at least roughly.
The Poisson Distribution
If the number of goals scored in a game follows a Poisson distribution with a goal-scoring rate, $\lambda$, the probability of scoring $k$ goals is
$$\lambda^k \exp(-\lambda) ~/~ k!$$
for any non-negative value of $k$.
SciPy provides a poisson object that represents a Poisson distribution.
We can create one with $\lambda=1.4$ like this:
End of explanation
k = 4
dist.pmf(k)
Explanation: The result is an object that represents a "frozen" random variable and provides pmf, which evaluates the probability mass function of the Poisson distribution.
End of explanation
from empiricaldist import Pmf
def make_poisson_pmf(lam, qs):
Make a Pmf of a Poisson distribution.
ps = poisson(lam).pmf(qs)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
Explanation: This result implies that if the average goal-scoring rate is 1.4 goals per game, the probability of scoring 4 goals in a game is about 4%.
We'll use the following function to make a Pmf that represents a Poisson distribution.
End of explanation
import numpy as np
lam = 1.4
goals = np.arange(10)
pmf_goals = make_poisson_pmf(lam, goals)
Explanation: make_poisson_pmf takes as parameters the goal-scoring rate, lam, and an array of quantities, qs, where it should evaluate the Poisson PMF. It returns a Pmf object.
For example, here's the distribution of goals scored for lam=1.4, computed for values of k from 0 to 9.
End of explanation
from utils import decorate
def decorate_goals(title=''):
decorate(xlabel='Number of goals',
ylabel='PMF',
title=title)
pmf_goals.bar(label=r'Poisson distribution with $\lambda=1.4$')
decorate_goals('Distribution of goals scored')
Explanation: And here's what it looks like.
End of explanation
from scipy.stats import gamma
alpha = 1.4
qs = np.linspace(0, 10, 101)
ps = gamma(alpha).pdf(qs)
Explanation: The most likely outcomes are 0, 1, and 2; higher values are possible but increasingly unlikely.
Values above 7 are negligible.
This distribution shows that if we know the goal scoring rate, we can predict the number of goals.
Now let's turn it around: given a number of goals, what can we say about the goal-scoring rate?
To answer that, we need to think about the prior distribution of lam, which represents the range of possible values and their probabilities before we see the score.
The Gamma Distribution
If you have ever seen a soccer game, you have some information about lam. In most games, teams score a few goals each. In rare cases, a team might score more than 5 goals, but they almost never score more than 10.
Using data from previous World Cups, I estimate that each team scores about 1.4 goals per game, on average. So I'll set the mean of lam to be 1.4.
For a good team against a bad one, we expect lam to be higher; for a bad team against a good one, we expect it to be lower.
To model the distribution of goal-scoring rates, I'll use a gamma distribution, which I chose because:
The goal scoring rate is continuous and non-negative, and the gamma distribution is appropriate for this kind of quantity.
The gamma distribution has only one parameter, alpha, which is the mean. So it's easy to construct a gamma distribution with the mean we want.
As we'll see, the shape of the gamma distribution is a reasonable choice, given what we know about soccer.
And there's one more reason, which I will reveal in <<_ConjugatePriors>>.
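For reference (this equation is an addition, following SciPy's default parameterization with scale 1), the gamma distribution with shape parameter $\alpha$ has PDF
$$\mathrm{pdf}(\lambda) = \frac{\lambda^{\alpha-1} \exp(-\lambda)}{\Gamma(\alpha)}$$
and its mean is $\alpha$, which is why setting the shape to 1.4 gives the prior mean we want.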
SciPy provides gamma, which creates an object that represents a gamma distribution.
And the gamma object provides pdf, which evaluates the probability density function (PDF) of the gamma distribution.
Here's how we use it.
End of explanation
from empiricaldist import Pmf
prior = Pmf(ps, qs)
prior.normalize()
Explanation: The parameter, alpha, is the mean of the distribution.
The qs are possible values of lam between 0 and 10.
The ps are probability densities, which we can think of as unnormalized probabilities.
To normalize them, we can put them in a Pmf and call normalize:
End of explanation
def decorate_rate(title=''):
decorate(xlabel='Goal scoring rate (lam)',
ylabel='PMF',
title=title)
prior.plot(style='--', label='prior', color='C5')
decorate_rate(r'Prior distribution of $\lambda$')
Explanation: The result is a discrete approximation of a gamma distribution.
Here's what it looks like.
End of explanation
prior.mean()
Explanation: This distribution represents our prior knowledge about goal scoring: lam is usually less than 2, occasionally as high as 6, and seldom higher than that.
And we can confirm that the mean is about 1.4.
End of explanation
lam = 1.4
k = 4
poisson(lam).pmf(4)
Explanation: As usual, reasonable people could disagree about the details of the prior, but this is good enough to get started. Let's do an update.
The Update
Suppose you are given the goal-scoring rate, $\lambda$, and asked to compute the probability of scoring a number of goals, $k$. That is precisely the question we answered by computing the Poisson PMF.
For example, if $\lambda$ is 1.4, the probability of scoring 4 goals in a game is:
End of explanation
lams = prior.qs
k = 4
likelihood = poisson(lams).pmf(k)
Explanation: Now suppose we have an array of possible values for $\lambda$; we can compute the likelihood of the data for each hypothetical value of lam, like this:
End of explanation
def update_poisson(pmf, data):
"""Update Pmf with a Poisson likelihood."""
k = data
lams = pmf.qs
likelihood = poisson(lams).pmf(k)
pmf *= likelihood
pmf.normalize()
Explanation: And that's all we need to do the update.
To get the posterior distribution, we multiply the prior by the likelihoods we just computed and normalize the result.
The following function encapsulates these steps.
End of explanation
france = prior.copy()
update_poisson(france, 4)
Explanation: The first parameter is the prior; the second is the number of goals.
In the example, France scored 4 goals, so I'll make a copy of the prior and update it with the data.
End of explanation
prior.plot(style='--', label='prior', color='C5')
france.plot(label='France posterior', color='C3')
decorate_rate('Posterior distribution for France')
Explanation: Here's what the posterior distribution looks like, along with the prior.
End of explanation
croatia = prior.copy()
update_poisson(croatia, 2)
Explanation: The data, k=4, makes us think higher values of lam are more likely and lower values are less likely. So the posterior distribution is shifted to the right.
Let's do the same for Croatia:
End of explanation
prior.plot(style='--', label='prior', color='C5')
croatia.plot(label='Croatia posterior', color='C0')
decorate_rate('Posterior distribution for Croatia')
Explanation: And here are the results.
End of explanation
print(croatia.mean(), france.mean())
Explanation: Here are the posterior means for these distributions.
End of explanation
def prob_gt(pmf1, pmf2):
"""Compute the probability of superiority."""
total = 0
for q1, p1 in pmf1.items():
for q2, p2 in pmf2.items():
if q1 > q2:
total += p1 * p2
return total
Explanation: The mean of the prior distribution is about 1.4.
After Croatia scores 2 goals, their posterior mean is 1.7, which is near the midpoint of the prior and the data.
Likewise after France scores 4 goals, their posterior mean is 2.7.
These results are typical of a Bayesian update: the location of the posterior distribution is a compromise between the prior and the data.
Probability of Superiority
Now that we have a posterior distribution for each team, we can answer the first question: How confident should we be that France is the better team?
In the model, "better" means having a higher goal-scoring rate against the opponent. We can use the posterior distributions to compute the probability that a random value drawn from France's distribution exceeds a value drawn from Croatia's.
One way to do that is to enumerate all pairs of values from the two distributions, adding up the total probability that one value exceeds the other.
End of explanation
prob_gt(france, croatia)
Explanation: This is similar to the method we use in <<_Addends>> to compute the distribution of a sum.
Here's how we use it:
End of explanation
Pmf.prob_gt(france, croatia)
Explanation: Pmf provides a function that does the same thing.
End of explanation
pmf_seq = [make_poisson_pmf(lam, goals)
for lam in prior.qs]
Explanation: The results are slightly different because Pmf.prob_gt uses array operators rather than for loops.
Either way, the result is close to 75%. So, on the basis of one game, we have moderate confidence that France is actually the better team.
Of course, we should remember that this result is based on the assumption that the goal-scoring rate is constant.
In reality, if a team is down by one goal, they might play more aggressively toward the end of the game, making them more likely to score, but also more likely to give up an additional goal.
As always, the results are only as good as the model.
Predicting the Rematch
Now we can take on the second question: If the same teams played again, what is the chance Croatia would win?
To answer this question, we'll generate the "posterior predictive distribution", which is the number of goals we expect a team to score.
If we knew the goal scoring rate, lam, the distribution of goals would be a Poisson distribution with parameter lam.
Since we don't know lam, the distribution of goals is a mixture of Poisson distributions with different values of lam.
First I'll generate a sequence of Pmf objects, one for each value of lam.
End of explanation
import matplotlib.pyplot as plt
for i, index in enumerate([10, 20, 30, 40]):
plt.subplot(2, 2, i+1)
lam = prior.qs[index]
pmf = pmf_seq[index]
pmf.bar(label=f'$\lambda$ = {lam}', color='C3')
decorate_goals()
Explanation: The following figure shows what these distributions look like for a few values of lam.
End of explanation
from utils import make_mixture
pred_france = make_mixture(france, pmf_seq)
Explanation: The predictive distribution is a mixture of these Pmf objects, weighted with the posterior probabilities.
We can use make_mixture from <<_GeneralMixtures>> to compute this mixture.
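Conceptually (a sketch, assuming all the Pmfs in pmf_seq share the same quantities, as they do here), the mixture is just the probability-weighted sum of the conditional distributions:
# weights are the posterior probabilities of lam
mix = sum(p * pmf for p, pmf in zip(france.ps, pmf_seq))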
End of explanation
pred_france.bar(color='C3', label='France')
decorate_goals('Posterior predictive distribution')
Explanation: Here's the predictive distribution for the number of goals France would score in a rematch.
End of explanation
pred_croatia = make_mixture(croatia, pmf_seq)
pred_croatia.bar(color='C0', label='Croatia')
decorate_goals('Posterior predictive distribution')
Explanation: This distribution represents two sources of uncertainty: we don't know the actual value of lam, and even if we did, we would not know the number of goals in the next game.
Here's the predictive distribution for Croatia.
End of explanation
win = Pmf.prob_gt(pred_france, pred_croatia)
win
lose = Pmf.prob_lt(pred_france, pred_croatia)
lose
tie = Pmf.prob_eq(pred_france, pred_croatia)
tie
Explanation: We can use these distributions to compute the probability that France wins, loses, or ties the rematch.
End of explanation
win + tie/2
Explanation: Assuming that France wins half of the ties, their chance of winning the rematch is about 65%.
End of explanation
def expo_pdf(t, lam):
"""Compute the PDF of the exponential distribution."""
return lam * np.exp(-lam * t)
Explanation: This is a bit lower than their probability of superiority, which is 75%. And that makes sense, because we are less certain about the outcome of a single game than we are about the goal-scoring rates.
Even if France is the better team, they might lose the game.
The Exponential Distribution
As an exercise at the end of this notebook, you'll have a chance to work on the following variation on the World Cup Problem:
In the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?
In this version, notice that the data is not the number of goals in a fixed period of time, but the time between goals.
To compute the likelihood of data like this, we can take advantage of the theory of Poisson processes again. If each team has a constant goal-scoring rate, we expect the time between goals to follow an exponential distribution.
If the goal-scoring rate is $\lambda$, the probability of seeing an interval between goals of $t$ is proportional to the PDF of the exponential distribution:
$$\lambda \exp(-\lambda t)$$
Because $t$ is a continuous quantity, the value of this expression is not a probability; it is a probability density. However, it is proportional to the probability of the data, so we can use it as a likelihood in a Bayesian update.
SciPy provides expon, which creates an object that represents an exponential distribution.
However, it does not take lam as a parameter in the way you might expect, which makes it awkward to work with.
Since the PDF of the exponential distribution is so easy to evaluate, I'll use my own function.
End of explanation
lam = 1.4
qs = np.linspace(0, 4, 101)
ps = expo_pdf(qs, lam)
pmf_time = Pmf(ps, qs)
pmf_time.normalize()
Explanation: To see what the exponential distribution looks like, let's assume again that lam is 1.4; we can compute the distribution of $t$ like this:
End of explanation
def decorate_time(title=''):
decorate(xlabel='Time between goals (games)',
ylabel='PMF',
title=title)
pmf_time.plot(label='exponential with $\lambda$ = 1.4')
decorate_time('Distribution of time between goals')
Explanation: And here's what it looks like:
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: It is counterintuitive, but true, that the most likely time to score a goal is immediately. After that, the probability of each successive interval is a little lower.
With a goal-scoring rate of 1.4, it is possible that a team will take more than one game to score a goal, but it is unlikely that they will take more than two games.
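To quantify that claim (an added check), the exponential survival function exp(-lam * t) gives the probability of waiting more than t games:
import numpy as np
np.exp(-1.4 * 1), np.exp(-1.4 * 2)   # about 0.25 and 0.06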
Summary
This chapter introduces three new distributions, so it can be hard to keep them straight.
Let's review:
If a system satisfies the assumptions of a Poisson model, the number of events in a period of time follows a Poisson distribution, which is a discrete distribution with integer quantities from 0 to infinity. In practice, we can usually ignore low-probability quantities above a finite limit.
Also under the Poisson model, the interval between events follows an exponential distribution, which is a continuous distribution with quantities from 0 to infinity. Because it is continuous, it is described by a probability density function (PDF) rather than a probability mass function (PMF). But when we use an exponential distribution to compute the likelihood of the data, we can treat densities as unnormalized probabilities.
The Poisson and exponential distributions are parameterized by an event rate, denoted $\lambda$ or lam.
For the prior distribution of $\lambda$, I used a gamma distribution, which is a continuous distribution with quantities from 0 to infinity, but I approximated it with a discrete, bounded PMF. The gamma distribution has one parameter, denoted $\alpha$ or alpha, which is also its mean.
I chose the gamma distribution because the shape is consistent with our background knowledge about goal-scoring rates.
There are other distributions we could have used; however, we will see in <<_ConjugatePriors>> that the gamma distribution can be a particularly good choice.
But we have a few things to do before we get there, starting with these exercises.
Exercises
Exercise: Let's finish the exercise we started:
In the 2014 FIFA World Cup, Germany played Brazil in a semifinal match. Germany scored after 11 minutes and again at the 23 minute mark. At that point in the match, how many goals would you expect Germany to score after 90 minutes? What was the probability that they would score 5 more goals (as, in fact, they did)?
Here are the steps I recommend:
Starting with the same gamma prior we used in the previous problem, compute the likelihood of scoring a goal after 11 minutes for each possible value of lam. Don't forget to convert all times into games rather than minutes.
Compute the posterior distribution of lam for Germany after the first goal.
Compute the likelihood of scoring another goal after 12 more minutes and do another update. Plot the prior, posterior after one goal, and posterior after two goals.
Compute the posterior predictive distribution of goals Germany might score during the remaining time in the game, 90-23 minutes. Note: You will have to think about how to generate predicted goals for a fraction of a game.
Compute the probability of scoring 5 or more goals during the remaining time.
End of explanation
def make_expo_pmf(lam, high):
"""Make a PMF of an exponential distribution.
lam: event rate
high: upper bound on the interval `t`
returns: Pmf of the interval between events"""
qs = np.linspace(0, high, 101)
ps = expo_pdf(qs, lam)
pmf = Pmf(ps, qs)
pmf.normalize()
return pmf
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: Returning to the first version of the World Cup Problem. Suppose France and Croatia play a rematch. What is the probability that France scores first?
Hint: Compute the posterior predictive distribution for the time until the first goal by making a mixture of exponential distributions. You can use the following function to make a PMF that approximates an exponential distribution.
End of explanation
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
Explanation: Exercise: In the 2010-11 National Hockey League (NHL) Finals, my beloved Boston
Bruins played a best-of-seven championship series against the despised
Vancouver Canucks. Boston lost the first two games 0-1 and 2-3, then
won the next two games 8-1 and 4-0. At this point in the series, what
is the probability that Boston will win the next game, and what is
their probability of winning the championship?
To choose a prior distribution, I got some statistics from
http://www.nhl.com, specifically the average goals per game
for each team in the 2010-11 season. The distribution is well modeled by a gamma distribution with mean 2.8.
In what ways do you think the outcome of these games might violate the assumptions of the Poisson model? How would these violations affect your predictions?
End of explanation |
13,951 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1A.soft - Notions de SQL - correction
Correction des exercices du premier notebooks relié au SQL.
Step1: Recupérer les données
Step2: Exercice 1
Step3: Exercice 2
Step4: Exercice 3
Step5: Exercice 4
Step6: Zones de travail et zones de résidence
Step7: JOIN avec la table stations et les stations "travail"
On trouve les arrondissements où les stations de vélib sont les plus remplies en journée au centre de Paris. | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
Explanation: 1A.soft - SQL basics - solutions
Solutions to the exercises from the first SQL notebook.
End of explanation
import os
if not os.path.exists("td8_velib.db3"):
from pyensae.datasource import download_data
download_data("td8_velib.zip", website = 'xd')
from pyensae.sql import import_flatfile_into_database
dbf = "td8_velib.db3"
import_flatfile_into_database(dbf, "td8_velib.txt") # 2 secondes
import_flatfile_into_database(dbf, "stations.txt", table="stations") # 2 minutes
%load_ext pyensae
%SQL_connect td8_velib.db3
Explanation: Retrieve the data
End of explanation
%%SQL
SELECT COUNT(*) FROM (
SELECT DISTINCT last_update FROM td8_velib
) ;
%%SQL
SELECT MIN(last_update), MAX(last_update) FROM td8_velib ;
Explanation: Exercise 1
End of explanation
%%SQL
SELECT number, COUNT(*) AS nb
FROM td8_velib
WHERE available_bikes==0 AND last_update >= '2013-09-10 11:30:19'
GROUP BY number
ORDER BY nb DESC
Explanation: Exercise 2
End of explanation
%%SQL
SELECT nb, COUNT(*) AS nb_station
FROM (
-- query from the previous exercise
SELECT number, COUNT(*) AS nb
FROM td8_velib
WHERE available_bikes==0 AND last_update >= '2013-09-10 11:30:19'
GROUP BY number
)
GROUP BY nb
Explanation: Exercise 3: five-minute time slots with no bikes available
End of explanation
%%SQL
SELECT A.number, A.heure, A.minute, 1.0 * A.nb_velo / B.nb_velo_tot AS distribution_temporelle
FROM (
SELECT number, heure, minute, SUM(available_bikes) AS nb_velo
FROM td8_velib
WHERE last_update >= '2013-09-10 11:30:19'
GROUP BY heure, minute, number
) AS A
JOIN (
SELECT number, heure, minute, SUM(available_bikes) AS nb_velo_tot
FROM td8_velib
WHERE last_update >= '2013-09-10 11:30:19'
GROUP BY number
) AS B
ON A.number == B.number
--WHERE A.number in (8001, 8003, 15024, 15031) -- to display only a few stations
ORDER BY A.number, A.heure, A.minute
Explanation: Exercise 4: hourly distribution per station in 5-minute intervals
End of explanation
%%SQL --df=df
SELECT number, SUM(distribution_temporelle) AS velo_jour
FROM (
-- query from exercise 4
SELECT A.number, A.heure, A.minute, 1.0 * A.nb_velo / B.nb_velo_tot AS distribution_temporelle
FROM (
SELECT number, heure, minute, SUM(available_bikes) AS nb_velo
FROM td8_velib
WHERE last_update >= '2013-09-10 11:30:19'
GROUP BY heure, minute, number
) AS A
JOIN (
SELECT number, heure, minute, SUM(available_bikes) AS nb_velo_tot
FROM td8_velib
WHERE last_update >= '2013-09-10 11:30:19'
GROUP BY number
) AS B
ON A.number == B.number
)
WHERE heure >= 10 AND heure <= 16
GROUP BY number
df = df.sort_values("velo_jour").reset_index()
df["index"] = range(0, df.shape[0])
df.head()
df.plot(x="index", y="velo_jour")
Explanation: Work areas and residential areas
End of explanation
%%SQL
SELECT C.number, name, lat, lng, velo_jour FROM
(
-- query from the previous section
SELECT number, SUM(distribution_temporelle) AS velo_jour
FROM (
-- query from exercise 4
SELECT A.number, A.heure, A.minute, 1.0 * A.nb_velo / B.nb_velo_tot AS distribution_temporelle
FROM (
SELECT number, heure, minute, SUM(available_bikes) AS nb_velo
FROM td8_velib
WHERE last_update >= '2013-09-10 11:30:19'
GROUP BY heure, minute, number
) AS A
JOIN (
SELECT number, heure, minute, SUM(available_bikes) AS nb_velo_tot
FROM td8_velib
WHERE last_update >= '2013-09-10 11:30:19'
GROUP BY number
) AS B
ON A.number == B.number
)
WHERE heure >= 10 AND heure <= 16
GROUP BY number
) AS C
INNER JOIN stations
ON C.number == stations.number
Explanation: JOIN with the stations table and the "work" stations
We find the arrondissements where the Vélib stations are fullest during the day in central Paris.
End of explanation |
13,952 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MIDAS Examples
If you're reading this you probably already know that MIDAS stands for Mixed Data Sampling, and it is a technique for creating time-series forecast models that allows you to mix series of different frequencies (ie, you can use monthly data as predictors for a quarterly series, or daily data as predictors for a monthly series, etc.). The general approach has been described in a series of papers by Ghysels, Santa-Clara, Valkanov and others.
This notebook attempts to recreate some of the examples from the paper Forecasting with Mixed Frequencies by Michelle T. Armesto, Kristie M. Engemann, and Michael T. Owyang.
Step1: MIDAS ADL
This package currently implements the MIDAS ADL (autoregressive distributed lag) method. We'll start with an example using quarterly GDP and monthly payroll data. We'll then show the basic steps in setting up and fitting this type of model, although in practice you'll probably used the top-level midas_adl function to do forecasts.
TODO
Step2: Figure 1
This is a variation of Figure 1 from the paper comparing year-over-year growth of GDP and employment.
Step3: Mixing Frequencies
The first step is to do the actual frequency mixing. In this case we're mixing monthly data (employment) with quarterly data (GDP). This may sometimes be useful to do directly, but again you'll probably used midas_adl to do forecasting.
Step4: The arguments here are as follows
Step5: You can also call forecast directly. This will use the optimization results returned from eatimate to produce a forecast for every date in the index of the forecast inputs (here xf and ylf)
Step6: Comparison against univariate ARIMA model
Step7: The midas_adl function
The midas_adl function wraps up frequency-mixing, fitting, and forecasting into one process. The default mode of forecasting is fixed, which means that the data between start_date and end_date will be used to fit the model, and then any data in the input beyond end_date will be used for forecasting. For example, here we're fitting from the beginning of 1985 to the end of 2008, but the gdp data extends to Q1 of 2011 so we get nine forecast points. Three monthly lags of the high-frequency data are specified along with one quarterly lag of GDP.
Step8: You can also change the polynomial used to weight the MIDAS coefficients. The default is 'beta', but you can also specify exponential Almom weighting ('expalmon') or beta with non-zero last term ('betann')
Step9: Rolling and Recursive Forecasting
As mentioned above the default forecasting method is fixed where the model is fit once and then all data after end_date is used for forecasting. Two other methods are supported rolling window and recursive. The rolling window method is just what it sounds like. The start_date and end_date are used for the initial window, and then each new forecast moves that window forward by one period so that you're always doing one step ahead forecasts. Of course, to do anything useful this also assumes that the date range of the dependent data extends beyond end_date accounting for the lags implied by horizon. Generally, you'll get lower RMSE values here since the forecasts are always one step ahead.
Step10: The recursive method is similar except that the start date does not change, so the range over which the fitting happens increases for each new forecast.
Step11: Nowcasting
Per the manual for the MatLab Matlab Toolbox Version 1.0, you can do nowcasting (or MIDAS with leads) basically by adjusting the horizon parameter. For example, below we change the horizon paremter to 1, we're now forecasting with a one month horizon rather than a one quarter horizon
Step12: Not surprisingly the RMSE drops considerably.
CPI vs. Federal Funds Rate
UNDER CONSTRUCTION | Python Code:
%matplotlib inline
import datetime
import numpy as np
import pandas as pd
from midas.mix import mix_freq
from midas.adl import estimate, forecast, midas_adl, rmse
Explanation: MIDAS Examples
If you're reading this you probably already know that MIDAS stands for Mixed Data Sampling, and it is a technique for creating time-series forecast models that allows you to mix series of different frequencies (ie, you can use monthly data as predictors for a quarterly series, or daily data as predictors for a monthly series, etc.). The general approach has been described in a series of papers by Ghysels, Santa-Clara, Valkanov and others.
This notebook attempts to recreate some of the examples from the paper Forecasting with Mixed Frequencies by Michelle T. Armesto, Kristie M. Engemann, and Michael T. Owyang.
End of explanation
gdp = pd.read_csv('../tests/data/gdp.csv', parse_dates=['DATE'], index_col='DATE')
pay = pd.read_csv('../tests/data/pay.csv', parse_dates=['DATE'], index_col='DATE')
gdp.tail()
pay.tail()
Explanation: MIDAS ADL
This package currently implements the MIDAS ADL (autoregressive distributed lag) method. We'll start with an example using quarterly GDP and monthly payroll data. We'll then show the basic steps in setting up and fitting this type of model, although in practice you'll probably use the top-level midas_adl function to do forecasts.
TODO: MIDAS equation and discussion
Example 1: GDP vs Non-Farm Payroll
End of explanation
gdp_yoy = ((1. + (np.log(gdp.GDP) - np.log(gdp.GDP.shift(3)))) ** 4) - 1.
emp_yoy = ((1. + (np.log(pay.PAY) - np.log(pay.PAY.shift(1)))) ** 12) - 1.
df = pd.concat([gdp_yoy, emp_yoy], axis=1)
df.columns = ['gdp_yoy', 'emp_yoy']
df[['gdp_yoy','emp_yoy']].loc['1980-1-1':].plot(figsize=(15,4), style=['o','-'])
Explanation: Figure 1
This is a variation of Figure 1 from the paper comparing year-over-year growth of GDP and employment.
End of explanation
gdp['gdp_growth'] = (np.log(gdp.GDP) - np.log(gdp.GDP.shift(1))) * 100.
pay['emp_growth'] = (np.log(pay.PAY) - np.log(pay.PAY.shift(1))) * 100.
y, yl, x, yf, ylf, xf = mix_freq(gdp.gdp_growth, pay.emp_growth, "3m", 1, 3,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1))
x.head()
Explanation: Mixing Frequencies
The first step is to do the actual frequency mixing. In this case we're mixing monthly data (employment) with quarterly data (GDP). This may sometimes be useful to do directly, but again you'll probably use midas_adl to do forecasting.
End of explanation
res = estimate(y, yl, x, poly='beta')
res.x
Explanation: The arguments here are as follows:
- First, the dependent (low frequency) and independent (high-frequency) data are given as Pandas series, and they are assumed to be indexed by date.
- xlag The number of lags for the high-frequency variable
- ylag The number of lags for the low-frequency variable (the autoregressive part)
- horizon: How much the high-frequency data is lagged before frequency mixing
- start_date, end_date: The start and end date over which the model is fitted. If these are outside the range of the low-frequency data, they will be adjusted
The horizon argument is a little tricky (the argument name was retained from the MatLab version). This is used both the align the data and to do nowcasting (more on that later). For example, if it's September 2017 then the latest GDP data from FRED will be for Q2 and this will be dated 2017-04-01. The latest monthly data from non-farm payroll will be for August, which will be dated 2017-08-01. If we aligned just on dates, the payroll data for April (04-01), March (03-01), and February(02-01) would be aligned with Q2 (since xlag = "3m"), but what we want is June, May, and April, so here the horizon argument is 3 indicating that the high-frequency data should be lagged three months before being mixed with the quarterly data.
Fitting the Model
Because of the form of the MIDAS model, fitting the model requires using non-linear least squares. For now, if you call the estimate function directly, you'll get back a results of type scipy.optimize.optimize.OptimizeResult
End of explanation
fc = forecast(xf, ylf, res, poly='beta')
forecast_df = fc.join(yf)
forecast_df['gap'] = forecast_df.yfh - forecast_df.gdp_growth
forecast_df
gdp.join(fc)[['gdp_growth','yfh']].loc['2005-01-01':].plot(style=['-o','-+'], figsize=(12, 4))
Explanation: You can also call forecast directly. This will use the optimization results returned from eatimate to produce a forecast for every date in the index of the forecast inputs (here xf and ylf):
End of explanation
import statsmodels.tsa.api as sm
m = sm.AR(gdp['1975-01-01':'2011-01-01'].gdp_growth,)
r = m.fit(maxlag=1)
r.params
fc_ar = r.predict(start='2005-01-01')
fc_ar.name = 'xx'
df_p = gdp.join(fc)[['gdp_growth','yfh']]
df_p.join(fc_ar)[['gdp_growth','yfh','xx']].loc['2005-01-01':].plot(style=['-o','-+'], figsize=(12, 4))
Explanation: Comparison against univariate ARIMA model
End of explanation
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3)
rmse_fc
Explanation: The midas_adl function
The midas_adl function wraps up frequency-mixing, fitting, and forecasting into one process. The default mode of forecasting is fixed, which means that the data between start_date and end_date will be used to fit the model, and then any data in the input beyond end_date will be used for forecasting. For example, here we're fitting from the beginning of 1985 to the end of 2008, but the gdp data extends to Q1 of 2011 so we get nine forecast points. Three monthly lags of the high-frequency data are specified along with one quarterly lag of GDP.
End of explanation
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
poly='expalmon')
rmse_fc
Explanation: You can also change the polynomial used to weight the MIDAS coefficients. The default is 'beta', but you can also specify exponential Almon weighting ('expalmon') or beta with a non-zero last term ('betann')
End of explanation
results = {h: midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,10,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
forecast_horizon=h,
poly='beta',
method='rolling') for h in (1, 2, 5)}
results[1][0]
Explanation: Rolling and Recursive Forecasting
As mentioned above, the default forecasting method is fixed, where the model is fit once and then all data after end_date is used for forecasting. Two other methods are supported: rolling window and recursive. The rolling window method is just what it sounds like: the start_date and end_date are used for the initial window, and then each new forecast moves that window forward by one period so that you're always doing one-step-ahead forecasts. Of course, to do anything useful this also assumes that the date range of the dependent data extends beyond end_date, accounting for the lags implied by horizon. Generally, you'll get lower RMSE values here since the forecasts are always one step ahead.
End of explanation
results = {h: midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,10,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
forecast_horizon=h,
poly='beta',
method='recursive') for h in (1, 2, 5)}
results[1][0]
Explanation: The recursive method is similar except that the start date does not change, so the range over which the fitting happens increases for each new forecast.
End of explanation
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=1)
rmse_fc
Explanation: Nowcasting
Per the manual for the MATLAB Toolbox Version 1.0, you can do nowcasting (or MIDAS with leads) basically by adjusting the horizon parameter. For example, below we change the horizon parameter to 1, so we're now forecasting with a one-month horizon rather than a one-quarter horizon:
End of explanation
cpi = pd.read_csv('CPIAUCSL.csv', parse_dates=['DATE'], index_col='DATE')
ffr = pd.read_csv('DFF_2_Vintages_Starting_2009_09_28.txt', sep='\t', parse_dates=['observation_date'],
index_col='observation_date')
cpi.head()
ffr.head(10)
cpi_yoy = ((1. + (np.log(cpi.CPIAUCSL) - np.log(cpi.CPIAUCSL.shift(1)))) ** 12) - 1.
cpi_yoy.head()
df = pd.concat([cpi_yoy, ffr.DFF_20090928 / 100.], axis=1)
df.columns = ['cpi_growth', 'dff']
df.loc['1980-1-1':'2010-1-1'].plot(figsize=(15,4), style=['-+','-.'])
cpi_growth = (np.log(cpi.CPIAUCSL) - np.log(cpi.CPIAUCSL.shift(1))) * 100.
y, yl, x, yf, ylf, xf = mix_freq(cpi_growth, ffr.DFF_20090928, "1m", 1, 1,
start_date=datetime.datetime(1975,10,1),
end_date=datetime.datetime(1991,1,1))
x.head()
res = estimate(y, yl, x)
fc = forecast(xf, ylf, res)
fc.join(yf).head()
pd.concat([cpi_growth, fc],axis=1).loc['2008-01-01':'2010-01-01'].plot(style=['-o','-+'], figsize=(12, 4))
results = {h: midas_adl(cpi_growth, ffr.DFF_20090928,
start_date=datetime.datetime(1975,7,1),
end_date=datetime.datetime(1990,11,1),
xlag="1m",
ylag=1,
horizon=1,
forecast_horizon=h,
method='rolling') for h in (1, 2, 5)}
(results[1][0], results[2][0], results[5][0])
results[1][1].plot(figsize=(12,4))
results = {h: midas_adl(cpi_growth, ffr.DFF_20090928,
start_date=datetime.datetime(1975,10,1),
end_date=datetime.datetime(1991,1,1),
xlag="1m",
ylag=1,
horizon=1,
forecast_horizon=h,
method='recursive') for h in (1, 2, 5)}
results[1][0]
results[1][1].plot()
Explanation: Not surprisingly, the RMSE drops considerably.
CPI vs. Federal Funds Rate
UNDER CONSTRUCTION: Note that these models take considerably longer to fit
End of explanation |
13,953 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination) | Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line)
import statsmodels.formula.api as smf
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
df = pd.read_csv("hanford.csv")
Explanation: 2. Read in the hanford.csv file
End of explanation
df.mean()
df.median()
iqr = df.quantile(q=0.75)- df.quantile(q=0.25)
iqr
UAL= (iqr*1.5) + df.quantile(q=0.75)
UAL
LAL= df.quantile(q=0.25) - (iqr*1.5)
LAL
Explanation: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
End of explanation
df.corr()
df
#fig, ax = plt.subplots()
ax= df.plot(kind='scatter', y='Exposure', x='Mortality', color='green', figsize= (7,5))
ax
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
End of explanation
lm = smf.ols(formula="Mortality ~ Exposure",data=df).fit()
lm.params
def mortality_rate_calculator(exposure):
return (114.715631 + (9.231456 * float(exposure)))
df['predicted_mortality_rate'] = df['Exposure'].apply(mortality_rate_calculator)
df
Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
intercept, slope = lm.params
# Put Exposure on the x-axis so the regression line (Mortality = intercept + slope * Exposure) overlays correctly
df.plot(kind='scatter', x='Exposure', y='Mortality', color='green', figsize=(7,5))
plt.plot(df["Exposure"], slope * df["Exposure"] + intercept, "-", color="red")
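Step 6 also asks for r^2, which is not computed above; with the statsmodels fit it is available directly (added line):
lm.rsquared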
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
End of explanation |
13,954 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step3: Вспомогательные функции
Step4: Тест для метода прогонки
source
Step5: source
Step6: Тесты для создания массивов
Step7: Создание класса модели
Ссылки
Step8: Тесты для 1 задачи | Python Code:
# Плотность источников тепла
def func(s, t):
#return 0.
return s + t * 4.
# Температура внешней среды
def p(t):
return math.cos(2 * t * math.pi)
#return t
def array(f, numval, numdh):
"""Create an N-dimensional array.
param: f - function that takes N arguments.
param: numval - value ranges of the function parameters. List
param: numdh - step sizes for the parameters. List"""
def rec_for(f, numdim, numdh, current_l, l_i, arr):
"""Recursive loop.
param: f - function that takes N arguments.
param: numdim - dimensions of the output matrix. List
param: numdh - step sizes for the parameters. List
param: current_l - current recursion depth.
param: l_i - intermediate list of indices. List
param: arr - the matrix being filled. np.array"""
for i in range(numdim[current_l]):
l_i.append(i)
if current_l < len(numdim) - 1:
rec_for(f, numdim, numdh, current_l + 1, l_i, arr)
else:
args = (np.array(l_i) * np.array(numdh))
arr[tuple(l_i)] = f(*args)
l_i.pop()
return arr
numdim = [int(numval[i] / numdh[i]) + 1 for i in range(len(numdh))]
arr = np.zeros(numdim)
arr = rec_for(f, numdim, numdh, 0, [], arr)
# Надо отобразить так x - j, y - i (для графиков), поэтому используем transpose
arr = np.transpose(arr)
return arr
def TDMA(a, b, c, f):
"""Tridiagonal matrix (Thomas) algorithm.
param: a - lower (left) subdiagonal.
param: b - upper (right) subdiagonal.
param: c - main diagonal.
param: f - right-hand side."""
#a, b, c, f = map(lambda k_list: map(float, k_list), (a, b, c, f))
alpha = [0]
beta = [0]
n = len(f)
x = [0] * n
for i in range(n - 1):
alpha.append(-b[i] / (a[i] * alpha[i] + c[i]))
beta.append((f[i] - a[i] * beta[i]) / (a[i] * alpha[i] + c[i]))
x[n - 1] = (f[n - 1] - a[n - 1] * beta[n - 1]) / (c[n - 1] + a[n - 1] * alpha[n - 1])
for i in reversed(range(n - 1)):
x[i] = alpha[i + 1] * x[i + 1] + beta[i + 1]
return x
Explanation: Helper functions
End of explanation
a = [0, 1, 1, 1]
c = [2, 10, -5, 4]
b = [1, -5, 2, 0]
f = [-5, -18, -40, -27]
x = TDMA(a, b, c, f)
x
Explanation: Test for the tridiagonal matrix (Thomas) algorithm
source: http://old.exponenta.ru/educat/class/courses/vvm/theme_5/example.asp
Answer: (-3, 1, 5, -8)
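A small automated check (added; assumes numpy is already imported as np, as in the cells above):
np.allclose(x, [-3, 1, 5, -8])   # should be True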
End of explanation
a = [0, -3, -5, -6, -5]
c = [2, 8, 12, 18, 10]
b = [-1, -1, 2, -4, 0]
f = [-25, 72, -69, -156, 20]
x = TDMA(a, b, c, f)
x
Explanation: source: http://kontromat.ru/?page_id=4980 (the answer given there is wrong; the minus sign on the 5 was dropped)
Answer: (-10, 5, -2, -10)
End of explanation
X_ = np.arange(0., 1.01, .1)
Y_ = np.arange(0., 2.01, .01)
#print(np.shape(X_))
X_, Y_ = np.meshgrid(X_, Y_)
print(np.shape(X_), np.shape(Y_))
X_
Y_
arr = array(func, [1., 2.], [.1, .01])
print(np.shape(arr))
arr
Z = arr
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X_, Y_, Z, color='r')
plt.xlabel('s')
plt.ylabel('t')
plt.show()
arr = array(p, [1.], [.001])
arr
Explanation: Tests for array creation
End of explanation
# Model class for Lab 1
class Lab1OptCtrlModel():
def __init__(self, p_d):
self.a, self.l, self.v, self.T = p_d['a'], p_d['l'], p_d['v'], p_d['T']
self.p, self.f = p_d['p(t)'], p_d['f(s, t)']
self.p_min, self.p_max, self.R = p_d['p_min'], p_d['p_max'], p_d['R']
self.fi, self.y = p_d['fi(s)'], p_d['y(s)']
self.dh, self.dt = p_d['dh'], p_d['dt']
self.N, self.M = p_d['N'], p_d['M']
self.p_arr = []
self.p_arr.append(array(self.p, [p_d['T']], [p_d['dt']]))
self.f_arr = array(f, [p_d['l'], p_d['T']], [p_d['dh'], p_d['dt']])
self.x_arr = []
self.x_arr.append(array(self.f, [p_d['l'], p_d['T']], [p_d['dh'], p_d['dt']]))
self.x_arr[0][0,:] = array(self.fi, [p_d['l']], [p_d['dh']])
def Solve(self, eps=10**-5):
# Number of equations
eq_l = self.N - 1
# Initialize the Thomas-algorithm coefficients that stay constant
a, b, c = [0. for i in range(eq_l)], [0. for i in range(eq_l)], [0. for i in range(eq_l)]
f = [0. for i in range(eq_l)]
a2_dt_dh2 = self.a ** 2 * self.dt / self.dh ** 2
buf = 1. / (3. + 2. * self.dh * self.v)
# a
a[1:-1] = [a2_dt_dh2 for i in range(1, eq_l - 1)]
# This part depends on the approximation used, so it would be worth factoring it into a function
a[-1] = a2_dt_dh2 * (1. - buf)
# b
# This part depends on the approximation used, so it would be worth factoring it into a function
b[0] = 2. / 3. * a2_dt_dh2
b[1:-1] = [a2_dt_dh2 for i in range(1, eq_l - 1)]
# c
# This part depends on the approximation used, so it would be worth factoring it into a function
c[0] = -2. / 3. * a2_dt_dh2 - 1.
c[1:-1] = [-1. - 2. * a2_dt_dh2 for i in range(1, eq_l - 1)]
# This part depends on the approximation used, so it would be worth factoring it into a function
c[-1] = -1. + a2_dt_dh2 * (4. * buf - 2.)
ind = 0
# Solve problem 1
for j in range(0, self.M):
# f
f[0:-1] = [-self.x_arr[ind][j, i] - self.dt * self.f_arr[j, i] for i in range(1, eq_l)]
# This part depends on the approximation used, so it would be worth factoring it into a function
f[-1] = -self.x_arr[ind][j, -2] - self.dt * self.f_arr[j, -2]
f[-1] += -a2_dt_dh2 * 2. * self.dh * self.v * buf * self.p_arr[ind][j + 1]
# Solve the tridiagonal system
self.x_arr[ind][j + 1,1:1 + eq_l] = TDMA(a, b, c, f)
# Compute the first and last (boundary) elements
# This part depends on the approximation used, so it would be worth factoring it into a function
self.x_arr[ind][j + 1, 0] = 4. / 3. * self.x_arr[ind][j + 1, 1] - 1. / 3. * self.x_arr[ind][j + 1, 2]
self.x_arr[ind][j + 1, -1] = 4 * buf * self.x_arr[ind][j + 1, -2]
self.x_arr[ind][j + 1, -1] -= buf * self.x_arr[ind][j + 1, -3]
self.x_arr[ind][j + 1, -1] += 2. * self.dh * self.v * buf * self.p_arr[ind][j + 1]
return self.x_arr[ind]
Explanation: Creating the model class
References:
Three-point (one-sided) derivatives
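For reference (added), the three-point one-sided formula behind the boundary treatment in the code above is
$$f'(x_0) \approx \frac{-3 f(x_0) + 4 f(x_1) - f(x_2)}{2h}$$
which is where the 4/3 and 1/3 boundary coefficients come from.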
End of explanation
# Parameter dictionary
p_d = {}
# Given positive quantities
p_d['a'], p_d['l'], p_d['v'], p_d['T'] = 10., 3., 4., 20.
# Exact solution of the test example
def x(s, t):
return math.sin(t) + math.sin(s + math.pi / 2)
# Heat source density
def f(s, t):
return math.cos(t) + p_d['a'] ** 2 * math.sin(s + math.pi / 2)
# Ambient temperature
def p(t):
return 1. / p_d['v'] * math.cos(p_d['l'] + math.pi / 2) + math.sin(t) + math.sin(p_d['l'] + math.pi / 2)
# Temperature distribution at the initial time
def fi(s):
return math.sin(s + math.pi / 2)
p_d['p(t)'] = p
p_d['f(s, t)'] = f
# Given numbers
p_d['p_min'], p_d['p_max'], p_d['R'] = -10., 10., 100.
p_d['fi(s)'] = fi
# Desired temperature distribution
def y(s):
return s
p_d['y(s)'] = y
# Number of points on the spatial and temporal grids, respectively
p_d['N'], p_d['M'] = 10, 100
# Step size on the spatial and temporal grids, respectively
p_d['dh'], p_d['dt'] = p_d['l'] / p_d['N'], p_d['T'] / p_d['M']
p_d['l'], p_d['T'], p_d['dh'], p_d['dt']
X_ = np.arange(0., p_d['l'] + p_d['dh'], p_d['dh'])
Y_ = np.arange(0., p_d['T'] + p_d['dt'], p_d['dt'])
X_, Y_ = np.meshgrid(X_, Y_)
print(np.shape(X_), np.shape(Y_))
model = Lab1OptCtrlModel(p_d)
Z = model.x_arr[0]
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X_, Y_, Z)
plt.xlabel('s')
plt.ylabel('t')
plt.show()
x_arr = model.Solve()
x_arr_1 = array(x, [p_d['l'], p_d['T']], [p_d['dh'], p_d['dt']])
abs(x_arr - x_arr_1)
np.max(abs(x_arr - x_arr_1))
Z = x_arr_1
fig = plt.figure()
ax = fig.add_subplot(211, projection='3d')
ax.plot_surface(X_, Y_, Z, color='b')
Z = x_arr
ax.plot_surface(X_, Y_, Z, color='r')
plt.xlabel('s')
plt.ylabel('t')
plt.show()
Z = abs(x_arr - x_arr_1)
fig = plt.figure()
ax = fig.add_subplot(211, projection='3d')
ax.plot_surface(X_, Y_, Z, color='b')
plt.xlabel('s')
plt.ylabel('t')
plt.show()
Explanation: Tests for problem 1
End of explanation |
13,955 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This Notebook illustrates the usage of OpenMC's multi-group calculational mode with the Python API. This example notebook creates and executes the 2-D C5G7 benchmark model using the openmc.MGXSLibrary class to create the supporting data library on the fly.
Generate MGXS Library
Step1: We will now create the multi-group library using data directly from Appendix A of the C5G7 benchmark documentation. All of the data below will be created at 294K, consistent with the benchmark.
This notebook will first begin by setting the group structure and building the groupwise data for UO2. As you can see, the cross sections are input in the order of increasing groups (or decreasing energy).
Note
Step2: We will now add the scattering matrix data.
Note
Step3: Now that the UO2 data has been created, we can move on to the remaining materials using the same process.
However, we will actually skip repeating the above for now. Our simulation will instead use the c5g7.h5 file that has already been created using exactly the same logic as above, but for the remaining materials in the benchmark problem.
For now we will show how you would use the uo2_xsdata information to create an openmc.MGXSLibrary object and write to disk.
Step4: Generate 2-D C5G7 Problem Input Files
To build the actual 2-D model, we will first begin by creating the materials.xml file.
First we need to define materials that will be used in the problem. In other notebooks, either openmc.Nuclides or openmc.Elements were created at the equivalent stage. We can do that in multi-group mode as well. However, multi-group cross-sections are sometimes provided as macroscopic cross-sections; the C5G7 benchmark data are macroscopic. In this case, we can instead use openmc.Macroscopic objects to in-place of openmc.Nuclide or openmc.Element objects.
openmc.Macroscopic, unlike openmc.Nuclide and openmc.Element objects, do not need to be provided enough information to calculate number densities, as no number densities are needed.
When assigning openmc.Macroscopic objects to openmc.Material objects, the density can still be scaled by setting the density to a value that is not 1.0. This would be useful, for example, when slightly perturbing the density of water due to a small change in temperature (while of course ignoring any resultant spectral shift). The density of a macroscopic dataset is set to 1.0 in the openmc.Material object by default when an openmc.Macroscopic dataset is used; so we will show its use the first time and then afterwards it will not be required.
Aside from these differences, the following code is very similar to similar code in other OpenMC example Notebooks.
Step5: Now we can go ahead and produce a materials.xml file for use by OpenMC
Step6: Our next step will be to create the geometry information needed for our assembly and to write that to the geometry.xml file.
We will begin by defining the surfaces, cells, and universes needed for each of the individual fuel pins, guide tubes, and fission chambers.
Step7: The next step is to take our universes (representing the different pin types) and lay them out in a lattice to represent the assembly types
Step8: Let's now create the core layout in a 3x3 lattice where each lattice position is one of the assemblies we just defined.
After that we can create the final cell to contain the entire core.
Step9: Before we commit to the geometry, we should view it using the Python API's plotting capability
Step10: OK, it looks pretty good, let's go ahead and write the file
Step11: We can now create the tally file information. The tallies will be set up to give us the pin powers in this notebook. We will do this with a mesh filter, with one mesh cell per pin.
Step12: With the geometry and materials finished, we now just need to define simulation parameters for the settings.xml file. Note the use of the energy_mode attribute of our settings_file object. This is used to tell OpenMC that we intend to run in multi-group mode instead of the default continuous-energy mode. If we didn't specify this but our cross sections file was not a continuous-energy data set, then OpenMC would complain.
This will be a relatively coarse calculation with only 500,000 active histories. A benchmark-fidelity run would of course require many more!
Step13: Let's go ahead and execute the simulation! You'll notice that the output for multi-group mode is exactly the same as for continuous-energy. The differences are all under the hood.
Step14: Results Visualization
Now that we have run the simulation, let's look at the fission rate and flux tallies that we tallied. | Python Code:
import os
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np
import openmc
%matplotlib inline
Explanation: This Notebook illustrates the usage of OpenMC's multi-group calculational mode with the Python API. This example notebook creates and executes the 2-D C5G7 benchmark model using the openmc.MGXSLibrary class to create the supporting data library on the fly.
Generate MGXS Library
End of explanation
# Create a 7-group structure with arbitrary boundaries (the specific boundaries are unimportant)
groups = openmc.mgxs.EnergyGroups(np.logspace(-5, 7, 8))
uo2_xsdata = openmc.XSdata('uo2', groups)
uo2_xsdata.order = 0
# When setting the data let the object know you are setting the data for a temperature of 294K.
uo2_xsdata.set_total([1.77949E-1, 3.29805E-1, 4.80388E-1, 5.54367E-1,
3.11801E-1, 3.95168E-1, 5.64406E-1], temperature=294.)
uo2_xsdata.set_absorption([8.0248E-03, 3.7174E-3, 2.6769E-2, 9.6236E-2,
3.0020E-02, 1.1126E-1, 2.8278E-1], temperature=294.)
uo2_xsdata.set_fission([7.21206E-3, 8.19301E-4, 6.45320E-3, 1.85648E-2,
1.78084E-2, 8.30348E-2, 2.16004E-1], temperature=294.)
uo2_xsdata.set_nu_fission([2.005998E-2, 2.027303E-3, 1.570599E-2, 4.518301E-2,
4.334208E-2, 2.020901E-1, 5.257105E-1], temperature=294.)
uo2_xsdata.set_chi([5.87910E-1, 4.11760E-1, 3.39060E-4, 1.17610E-7,
0.00000E-0, 0.00000E-0, 0.00000E-0], temperature=294.)
Explanation: We will now create the multi-group library using data directly from Appendix A of the C5G7 benchmark documentation. All of the data below will be created at 294K, consistent with the benchmark.
This notebook will first begin by setting the group structure and building the groupwise data for UO2. As you can see, the cross sections are input in the order of increasing groups (or decreasing energy).
Note: The C5G7 benchmark uses transport-corrected cross sections. So the total cross section we input here will technically be the transport cross section.
End of explanation
# The scattering matrix is ordered with incoming groups as rows and outgoing groups as columns
# (i.e., below the diagonal is up-scattering).
scatter_matrix = \
[[[1.27537E-1, 4.23780E-2, 9.43740E-6, 5.51630E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],
[0.00000E-0, 3.24456E-1, 1.63140E-3, 3.14270E-9, 0.00000E-0, 0.00000E-0, 0.00000E-0],
[0.00000E-0, 0.00000E-0, 4.50940E-1, 2.67920E-3, 0.00000E-0, 0.00000E-0, 0.00000E-0],
[0.00000E-0, 0.00000E-0, 0.00000E-0, 4.52565E-1, 5.56640E-3, 0.00000E-0, 0.00000E-0],
[0.00000E-0, 0.00000E-0, 0.00000E-0, 1.25250E-4, 2.71401E-1, 1.02550E-2, 1.00210E-8],
[0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 1.29680E-3, 2.65802E-1, 1.68090E-2],
[0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 0.00000E-0, 8.54580E-3, 2.73080E-1]]]
scatter_matrix = np.array(scatter_matrix)
scatter_matrix = np.rollaxis(scatter_matrix, 0, 3)
uo2_xsdata.set_scatter_matrix(scatter_matrix, temperature=294.)
Explanation: We will now add the scattering matrix data.
Note: Most users familiar with deterministic transport libraries are already familiar with the idea of entering one scattering matrix for every order (i.e. scattering order as the outer dimension). However, the shape of OpenMC's scattering matrix entry is instead [Incoming groups, Outgoing Groups, Scattering Order] to best enable other scattering representations. We will follow the more familiar approach in this notebook, and then use numpy's numpy.rollaxis function to change the ordering to what we need (scattering order on the inner dimension).
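A quick shape check (added) makes the reordering concrete:
m = np.zeros((1, 7, 7))            # [order, incoming, outgoing], as entered above
np.rollaxis(m, 0, 3).shape         # (7, 7, 1): [incoming, outgoing, order]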
End of explanation
# Initialize the library
mg_cross_sections_file = openmc.MGXSLibrary(groups)
# Add the UO2 data to it
mg_cross_sections_file.add_xsdata(uo2_xsdata)
# And write to disk
mg_cross_sections_file.export_to_hdf5('mgxs.h5')
Explanation: Now that the UO2 data has been created, we can move on to the remaining materials using the same process.
However, we will actually skip repeating the above for now. Our simulation will instead use the c5g7.h5 file that has already been created using exactly the same logic as above, but for the remaining materials in the benchmark problem.
For now we will show how you would use the uo2_xsdata information to create an openmc.MGXSLibrary object and write to disk.
End of explanation
# For every cross section data set in the library, assign an openmc.Macroscopic object to a material
materials = {}
for xs in ['uo2', 'mox43', 'mox7', 'mox87', 'fiss_chamber', 'guide_tube', 'water']:
materials[xs] = openmc.Material(name=xs)
materials[xs].set_density('macro', 1.)
materials[xs].add_macroscopic(xs)
Explanation: Generate 2-D C5G7 Problem Input Files
To build the actual 2-D model, we will first begin by creating the materials.xml file.
First we need to define materials that will be used in the problem. In other notebooks, either openmc.Nuclides or openmc.Elements were created at the equivalent stage. We can do that in multi-group mode as well. However, multi-group cross-sections are sometimes provided as macroscopic cross-sections; the C5G7 benchmark data are macroscopic. In this case, we can instead use openmc.Macroscopic objects in place of openmc.Nuclide or openmc.Element objects.
openmc.Macroscopic objects, unlike openmc.Nuclide and openmc.Element objects, do not need to be provided enough information to calculate number densities, as no number densities are needed.
When assigning openmc.Macroscopic objects to openmc.Material objects, the density can still be scaled by setting the density to a value that is not 1.0. This would be useful, for example, when slightly perturbing the density of water due to a small change in temperature (while of course ignoring any resultant spectral shift). The density of a macroscopic dataset is set to 1.0 in the openmc.Material object by default when an openmc.Macroscopic dataset is used; so we will show its use the first time and then afterwards it will not be required.
Aside from these differences, the following code is very similar to similar code in other OpenMC example Notebooks.
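For instance, a hypothetical 1% reduction in water density could be modeled like this (illustrative snippet only, not part of the benchmark model):
perturbed_water = openmc.Material(name='water_perturbed')
perturbed_water.set_density('macro', 0.99)
perturbed_water.add_macroscopic('water')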
End of explanation
# Instantiate a Materials collection, register all Materials, and export to XML
materials_file = openmc.Materials(materials.values())
# Set the location of the cross sections file to our pre-written set
materials_file.cross_sections = 'c5g7.h5'
materials_file.export_to_xml()
Explanation: Now we can go ahead and produce a materials.xml file for use by OpenMC
End of explanation
# Create the surface used for each pin
pin_surf = openmc.ZCylinder(x0=0, y0=0, R=0.54, name='pin_surf')
# Create the cells which will be used to represent each pin type.
cells = {}
universes = {}
for material in materials.values():
# Create the cell for the material inside the cladding
cells[material.name] = openmc.Cell(name=material.name)
# Assign the half-spaces to the cell
cells[material.name].region = -pin_surf
# Register the material with this cell
cells[material.name].fill = material
# Repeat the above for the material outside the cladding (i.e., the moderator)
cell_name = material.name + '_moderator'
cells[cell_name] = openmc.Cell(name=cell_name)
cells[cell_name].region = +pin_surf
cells[cell_name].fill = materials['water']
# Finally add the two cells we just made to a Universe object
universes[material.name] = openmc.Universe(name=material.name)
universes[material.name].add_cells([cells[material.name], cells[cell_name]])
Explanation: Our next step will be to create the geometry information needed for our assembly and to write that to the geometry.xml file.
We will begin by defining the surfaces, cells, and universes needed for each of the individual fuel pins, guide tubes, and fission chambers.
End of explanation
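Before assembling the lattices, it can be reassuring to eyeball a single pin universe; this optional check simply reuses the same plotting helper that is applied to the full core later in the notebook:
# Optional sanity check: plot one fuel-pin universe over its 1.26 cm pitch
universes['uo2'].plot(center=(0., 0., 0.), width=(1.26, 1.26), pixels=(200, 200), color_by='material')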
lattices = {}
# Instantiate the UO2 Lattice
lattices['UO2 Assembly'] = openmc.RectLattice(name='UO2 Assembly')
lattices['UO2 Assembly'].dimension = [17, 17]
lattices['UO2 Assembly'].lower_left = [-10.71, -10.71]
lattices['UO2 Assembly'].pitch = [1.26, 1.26]
u = universes['uo2']
g = universes['guide_tube']
f = universes['fiss_chamber']
lattices['UO2 Assembly'].universes = \
[[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],
[u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, g, u, u, g, u, u, f, u, u, g, u, u, g, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, g, u, u, g, u, u, g, u, u, g, u, u, g, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, g, u, u, u, u, u, u, u, u, u, g, u, u, u],
[u, u, u, u, u, g, u, u, g, u, u, g, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u],
[u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u, u]]
# Create a containing cell and universe
cells['UO2 Assembly'] = openmc.Cell(name='UO2 Assembly')
cells['UO2 Assembly'].fill = lattices['UO2 Assembly']
universes['UO2 Assembly'] = openmc.Universe(name='UO2 Assembly')
universes['UO2 Assembly'].add_cell(cells['UO2 Assembly'])
# Instantiate the MOX Lattice
lattices['MOX Assembly'] = openmc.RectLattice(name='MOX Assembly')
lattices['MOX Assembly'].dimension = [17, 17]
lattices['MOX Assembly'].lower_left = [-10.71, -10.71]
lattices['MOX Assembly'].pitch = [1.26, 1.26]
m = universes['mox43']
n = universes['mox7']
o = universes['mox87']
g = universes['guide_tube']
f = universes['fiss_chamber']
lattices['MOX Assembly'].universes = \
[[m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m],
[m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],
[m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],
[m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],
[m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],
[m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],
[m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
[m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
[m, n, g, o, o, g, o, o, f, o, o, g, o, o, g, n, m],
[m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
[m, n, n, o, o, o, o, o, o, o, o, o, o, o, n, n, m],
[m, n, g, o, o, g, o, o, g, o, o, g, o, o, g, n, m],
[m, n, n, n, o, o, o, o, o, o, o, o, o, n, n, n, m],
[m, n, n, g, n, o, o, o, o, o, o, o, n, g, n, n, m],
[m, n, n, n, n, g, n, n, g, n, n, g, n, n, n, n, m],
[m, n, n, n, n, n, n, n, n, n, n, n, n, n, n, n, m],
[m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m, m]]
# Create a containing cell and universe
cells['MOX Assembly'] = openmc.Cell(name='MOX Assembly')
cells['MOX Assembly'].fill = lattices['MOX Assembly']
universes['MOX Assembly'] = openmc.Universe(name='MOX Assembly')
universes['MOX Assembly'].add_cell(cells['MOX Assembly'])
# Instantiate the reflector Lattice
lattices['Reflector Assembly'] = openmc.RectLattice(name='Reflector Assembly')
lattices['Reflector Assembly'].dimension = [1,1]
lattices['Reflector Assembly'].lower_left = [-10.71, -10.71]
lattices['Reflector Assembly'].pitch = [21.42, 21.42]
lattices['Reflector Assembly'].universes = [[universes['water']]]
# Create a containing cell and universe
cells['Reflector Assembly'] = openmc.Cell(name='Reflector Assembly')
cells['Reflector Assembly'].fill = lattices['Reflector Assembly']
universes['Reflector Assembly'] = openmc.Universe(name='Reflector Assembly')
universes['Reflector Assembly'].add_cell(cells['Reflector Assembly'])
Explanation: The next step is to take our universes (representing the different pin types) and lay them out in a lattice to represent the assembly types
End of explanation
lattices['Core'] = openmc.RectLattice(name='3x3 core lattice')
lattices['Core'].dimension= [3, 3]
lattices['Core'].lower_left = [-32.13, -32.13]
lattices['Core'].pitch = [21.42, 21.42]
r = universes['Reflector Assembly']
u = universes['UO2 Assembly']
m = universes['MOX Assembly']
lattices['Core'].universes = [[u, m, r],
[m, u, r],
[r, r, r]]
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-32.13, boundary_type='reflective')
max_x = openmc.XPlane(x0=+32.13, boundary_type='vacuum')
min_y = openmc.YPlane(y0=-32.13, boundary_type='vacuum')
max_y = openmc.YPlane(y0=+32.13, boundary_type='reflective')
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = lattices['Core']
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cell(root_cell)
Explanation: Let's now create the core layout in a 3x3 lattice where each lattice position is one of the assemblies we just defined.
After that we can create the final cell to contain the entire core.
End of explanation
root_universe.plot(center=(0., 0., 0.), width=(3 * 21.42, 3 * 21.42), pixels=(500, 500),
color_by='material')
Explanation: Before we commit to the geometry, we should view it using the Python API's plotting capability
End of explanation
# Create Geometry and set root Universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: OK, it looks pretty good, let's go ahead and write the file
End of explanation
tallies_file = openmc.Tallies()
# Instantiate a tally Mesh
mesh = openmc.Mesh()
mesh.type = 'regular'
mesh.dimension = [17 * 2, 17 * 2]
mesh.lower_left = [-32.13, -10.71]
mesh.upper_right = [+10.71, +32.13]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
Explanation: We can now create the tally file information. The tallies will be set up to give us the pin powers in this notebook. We will do this with a mesh filter, with one mesh cell per pin.
End of explanation
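An optional variation (not used in this notebook) would be to score the multi-group flux on the same mesh as the pin powers; that only requires changing the scores list before the tallies are exported:
# Hypothetical variation, to be set before tallies_file.export_to_xml():
# tally.scores = ['fission', 'flux']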
# OpenMC simulation parameters
batches = 150
inactive = 50
particles = 5000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
# Tell OpenMC this is a multi-group problem
settings_file.energy_mode = 'multi-group'
# Set the verbosity to 6 so we don't see output for every batch
settings_file.verbosity = 6
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-32.13, -10.71, -1e50, 10.71, 32.13, 1e50]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Tell OpenMC we want to run in eigenvalue mode
settings_file.run_mode = 'eigenvalue'
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters for the settings.xml file. Note the use of the energy_mode attribute of our settings_file object. This is used to tell OpenMC that we intend to run in multi-group mode instead of the default continuous-energy mode. If we didn't specify this but our cross sections file was not a continuous-energy data set, then OpenMC would complain.
This will be a relatively coarse calculation with only 500,000 active histories. A benchmark-fidelity run would of course require many more!
End of explanation
# Run OpenMC
openmc.run()
Explanation: Let's go ahead and execute the simulation! You'll notice that the output for multi-group mode is exactly the same as for continuous-energy. The differences are all under the hood.
End of explanation
# Load the last statepoint file and keff value
sp = openmc.StatePoint('statepoint.' + str(batches) + '.h5')
# Get the OpenMC pin power tally data
mesh_tally = sp.get_tally(name='mesh tally')
fission_rates = mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
fission_rates.shape = mesh.dimension
# Normalize to the average pin power
fission_rates /= np.mean(fission_rates)
# Force zeros to be NaNs so their values are not included when matplotlib calculates
# the color scale
fission_rates[fission_rates == 0.] = np.nan
# Plot the pin powers and the fluxes
plt.figure()
plt.imshow(fission_rates, interpolation='none', cmap='jet', origin='lower')
plt.colorbar()
plt.title('Pin Powers')
plt.show()
Explanation: Results Visualization
Now that we have run the simulation, let's look at the fission rate tally that we set up.
End of explanation |
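If you also want a sense of the statistical uncertainty on those pin powers, one optional follow-up (not part of the original notebook) is to pull the standard deviation of the same tally and plot it alongside the mean:
# Optional: unnormalized standard deviation of the pin fission rates
std_dev = mesh_tally.get_values(scores=['fission'], value='std_dev')
std_dev.shape = mesh.dimension
plt.figure()
plt.imshow(std_dev, interpolation='none', cmap='jet', origin='lower')
plt.colorbar()
plt.title('Pin Power Std. Dev. (unnormalized)')
plt.show()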
13,956 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step3: Moving average
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step4: Trend and Seasonality
Step5: Naive Forecast
Step6: Now let's compute the mean absolute error between the forecasts and the predictions in the validation period
Step9: That's our baseline, now let's try a moving average.
Moving Average
Step10: That's worse than naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time t – 365 from the value at time t.
Step11: Focusing on the validation period
Step12: Great, the trend and seasonality seem to be gone, so now we can use the moving average
Step13: Now let's bring back the trend and seasonality by adding the past values from t – 365
Step14: Better than naive forecast, good. However the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving averaging on past values to remove some of the noise | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
Just an arbitrary pattern, you can change it if you wish
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
Repeats the same pattern at each period
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
Explanation: Moving average
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c03_moving_average.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/courses/udacity_intro_to_tensorflow_for_deep_learning/l08c03_moving_average.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Setup
End of explanation
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
Explanation: Trend and Seasonality
End of explanation
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150, label="Series")
plot_series(time_valid, naive_forecast, start=1, end=151, label="Forecast")
Explanation: Naive Forecast
End of explanation
keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()
Explanation: Now let's compute the mean absolute error between the forecasts and the predictions in the validation period:
End of explanation
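The same number can be reproduced directly with NumPy if you prefer (just a cross-check, not required by the course code):
# Mean absolute error computed by hand
np.mean(np.abs(x_valid - naive_forecast))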
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast"""
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return np.array(forecast)
def moving_average_forecast(series, window_size):
"""Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast
This implementation is *much* faster than the previous one"""
mov = np.cumsum(series)
mov[window_size:] = mov[window_size:] - mov[:-window_size]
return mov[window_size - 1:-1] / window_size
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, moving_avg, label="Moving average (30 days)")
keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy()
Explanation: That's our baseline, now let's try a moving average.
Moving Average
End of explanation
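As an aside (not in the original course notebook), the same trailing moving average can be expressed with pandas, assuming pandas is available as pd; the slice aligns the rolling output with the helper above:
import pandas as pd
# rolling(30).mean() at position t averages series[t-29:t+1]
pd.Series(series).rolling(30).mean().values[29:-1]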
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series, label="Series(t) – Series(t–365)")
plt.show()
Explanation: That's worse than naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time t – 365 from the value at time t.
End of explanation
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plt.show()
Explanation: Focusing on the validation period:
End of explanation
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plot_series(time_valid, diff_moving_avg, label="Moving Average of Diff")
plt.show()
Explanation: Great, the trend and seasonality seem to be gone, so now we can use the moving average:
End of explanation
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy()
Explanation: Now let's bring back the trend and seasonality by adding the past values from t – 365:
End of explanation
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-359], 11) + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_smooth_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy()
Explanation: Better than naive forecast, good. However, the forecasts look a bit too random because we're just adding past values, which were noisy. Let's use a moving average on past values to remove some of the noise:
End of explanation |
13,957 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABU量化系统使用文档
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align
Step1: 下面读取上一节存储的训练集和测试集回测数据,如下所示:
Step2: 1. A股训练集主裁训练
下面开始使用训练集交易数据训练主裁,裁判组合使用两个abupy中内置裁判AbuUmpMainDeg和AbuUmpMainPrice,两个外部自定义裁判使用‘第18节 自定义裁判决策交易‘中编写的AbuUmpMainMul和AbuUmpMainDegExtend
第一次运行select:train main ump,然后点击run select,如果已经训练过可select:load main ump直接读取以训练好的主裁:
Step3: 2. 验证A股主裁是否称职
下面首先通过从测试集交易中筛选出来已经有交易结果的交易,如下:
Step4: order_has_result的交易单中记录了所买入时刻的交易特征,如下所示:
Step5: 可以通过一个一个迭代交易单,将交易单中的买入时刻特征传递给ump主裁决策器,让每一个主裁来决策是否进行拦截,这样可以统计每一个主裁的拦截成功率,以及整体拦截率等,如下所示:
备注:
如下的代码使用abupy中再次封装joblib的多进程调度使用示例,以及abupy中封装的多进程进度条的使用示例
3.4下由于子进程中pickle ump的内部类会找不到,所以暂时只使用一个进程一个一个的处理
Step6: 通过把所有主裁的决策进行相加, 如果有投票1的即会进行拦截,四个裁判整体拦截正确率统计:
Step7: 下面统计每一个主裁的拦截正确率:
Step8: 3. A股训练集边裁训练
下面开始使用训练集交易数据训练训裁,裁判组合依然使用两个abupy中内置裁判AbuUmpEdgeDeg和AbuUmpEdgePrice,两个外部自定义裁判使用‘第18节 自定义裁判决策交易‘中编写的AbuUmpEdgeMul和AbuUmpEegeDegExtend,如下所示
备注:由于边裁的运行机制,所以边裁的训练非常快,这里直接进行训练,不再从本地读取裁判决策数据
Step9: 4. 验证A股边裁是否称职
使用与主裁类似的方式,一个一个迭代交易单,将交易单中的买入时刻特征传递给ump边裁决策器,让每一个边裁来决策是否进行拦截,统计每一个边裁的拦截成功率,以及整体拦截率等,如下所示:
备注:如下的代码使用abupy中再次封装joblib的多进程调度使用示例,以及abupy中封装的多进程进度条的使用示例
Step11: 通过把所有边裁的决策进行统计, 如果有投票-1的结果即判定loss_top的拿出来和真实交易结果result组成结果集,统计四个边裁的整体拦截正确率以及拦截率,如下所示:
Step12: 下面再统计每一个 边裁的拦截正确率:
Step13: 4. 在abu系统中开启主裁拦截模式,开启边裁拦截模式
内置边裁的开启很简单,只需要通过env中的相关设置即可完成,如下所示,分别开启主裁和边裁的两个内置裁判:
Step14: 用户自定义裁判的开启在‘第18节 自定义裁判决策交易‘ 也示例过,通过ump.manager.append_user_ump即可
注意下面还需要把10,30,50,90,120日走势拟合角度特征的AbuFeatureDegExtend,做为回测时的新的视角来录制比赛(记录回测特征),因为裁判里面有AbuUmpEegeDegExtend和AbuUmpMainDegExtend,它们需要生成带有10,30,50,90,120日走势拟合角度特征的回测交易单
代码如下所示:
Step15: 买入因子,卖出因子等依然使用相同的设置,如下所示:
Step16: 完成裁判组合的开启,即可开始回测,回测操作流程和之前的操作一样:
下面开始回测,第一次运行select:run loop back ump,然后点击run select_ump,如果已经回测过可select:load test ump data直接从缓存数据读取:
Step17: 下面对比针对A股市场测试集交易开启主裁,边裁拦截和未开启主裁,边裁,结果可以看出拦截了接近一半的交易,胜率以及盈亏比都有大幅度提高: | Python Code:
# Import the basic libraries
from __future__ import print_function
from __future__ import division
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import ipywidgets
%matplotlib inline
import os
import sys
# Use insert 0 so that only the github checkout is used, avoiding version mismatches with a pip-installed abupy
sys.path.insert(0, os.path.abspath('../'))
import abupy
# Use the sandbox data, so the data environment matches the book
abupy.env.enable_example_env_ipython()
from abupy import AbuFactorAtrNStop, AbuFactorPreAtrNStop, AbuFactorCloseAtrNStop, AbuFactorBuyBreak
from abupy import abu, EMarketTargetType, AbuMetricsBase, ABuMarketDrawing, ABuProgress, ABuSymbolPd
from abupy import EMarketTargetType, EDataCacheType, EMarketSourceType, EMarketDataFetchMode, EStoreAbu, AbuUmpMainMul
from abupy import AbuUmpMainDeg, AbuUmpMainJump, AbuUmpMainPrice, AbuUmpMainWave, feature, AbuFeatureDegExtend
from abupy import AbuUmpEdgeDeg, AbuUmpEdgePrice, AbuUmpEdgeWave, AbuUmpEdgeFull, AbuUmpEdgeMul, AbuUmpEegeDegExtend
from abupy import AbuUmpMainDegExtend, ump, Parallel, delayed, AbuMulPidProgress
# Disable the sandbox data
abupy.env.disable_example_env_ipython()
Explanation: ABU Quantitative System Documentation
<center>
<img src="./image/abu_logo.png" alt="" style="vertical-align:middle;padding:10px 20px;"><font size="6" color="black"><b>Section 21: A-share UMP decisions</b></font>
</center>
Author: Abu
Copyright Abu Quant. Reproduction without permission is prohibited
abu quant system github address (stars welcome)
ipython notebook for this section
In the previous section we split the A-share market symbols into a training set and a test set and backtested each; this section demonstrates A-share ump main-umpire and edge-umpire decisions.
First, import the abupy modules used in this section:
End of explanation
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
abu_result_tuple = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='train_cn')
abu_result_tuple_test = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_cn')
ABuProgress.clear_output()
print('训练集结果:')
metrics_train = AbuMetricsBase.show_general(*abu_result_tuple, returns_cmp=True ,only_info=True)
print('测试集结果:')
metrics_test = AbuMetricsBase.show_general(*abu_result_tuple_test, returns_cmp=True, only_info=True)
Explanation: Below we read the training-set and test-set backtest data stored in the previous section, as follows:
End of explanation
# The market must be set globally to the A-share market; ump saves and loads the umpires according to the market type
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
ump_deg=None
ump_mul=None
ump_price=None
ump_main_deg_extend=None
# Train the main umpires with the training-set trade data
orders_pd_train_cn = abu_result_tuple.orders_pd
def train_main_ump():
print('AbuUmpMainDeg begin...')
AbuUmpMainDeg.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)
print('AbuUmpMainPrice begin...')
AbuUmpMainPrice.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)
print('AbuUmpMainMul begin...')
AbuUmpMainMul.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)
print('AbuUmpMainDegExtend begin...')
AbuUmpMainDegExtend.ump_main_clf_dump(orders_pd_train_cn, save_order=False, show_order=False)
# Still call load_main_ump, to avoid copying too much memory into the child processes below
load_main_ump()
def load_main_ump():
global ump_deg, ump_mul, ump_price, ump_main_deg_extend
ump_deg = AbuUmpMainDeg(predict=True)
ump_mul = AbuUmpMainMul(predict=True)
ump_price = AbuUmpMainPrice(predict=True)
ump_main_deg_extend = AbuUmpMainDegExtend(predict=True)
print('load main ump complete!')
def select(select):
if select == 'train main ump':
train_main_ump()
else:
load_main_ump()
_ = ipywidgets.interact_manual(select, select=['train main ump', 'load main ump'])
Explanation: 1. Training the main umpires on the A-share training set
Next we train the main umpires with the training-set trade data. The umpire combination uses two of abupy's built-in umpires, AbuUmpMainDeg and AbuUmpMainPrice, plus two external custom umpires, AbuUmpMainMul and AbuUmpMainDegExtend, written in 'Section 18: Custom umpire trade decisions'
On the first run choose select: train main ump and click run select; if you have already trained, choose select: load main ump to load the trained main umpires directly:
End of explanation
# Select the trades that already have a result into order_has_result
order_has_result = abu_result_tuple_test.orders_pd[abu_result_tuple_test.orders_pd.result != 0]
Explanation: 2. Verifying whether the A-share main umpires are competent
First we filter the test-set trades down to those that already have a trade result, as follows:
End of explanation
order_has_result.filter(regex='^buy(_deg_|_price_|_wave_|_jump)').head()
Explanation: The order_has_result trade orders record the trade features at the moment of purchase, as shown below:
End of explanation
def apply_ml_features_ump(order, predicter, progress, need_hit_cnt):
if not isinstance(order.ml_features, dict):
import ast
# On older pandas versions the dict comes back as a str
ml_features = ast.literal_eval(order.ml_features)
else:
ml_features = order.ml_features
progress.show()
# Pass the order's buy-time features to the ump main-umpire decider and let each main umpire decide whether to intercept
return predicter.predict_kwargs(need_hit_cnt=need_hit_cnt, **ml_features)
def pararllel_func(ump_object, ump_name):
with AbuMulPidProgress(len(order_has_result), '{} complete'.format(ump_name)) as progress:
# Start the multi-process progress bar and apply over order_has_result
ump_result = order_has_result.apply(apply_ml_features_ump, axis=1, args=(ump_object, progress, 2,))
return ump_name, ump_result
if sys.version_info > (3, 4, 0):
# Above Python 3.4, handle the 4 main umpires in parallel, one process per umpire making interception decisions
parallel = Parallel(
n_jobs=4, verbose=0, pre_dispatch='2*n_jobs')
out = parallel(delayed(pararllel_func)(ump_object, ump_name)
for ump_object, ump_name in zip([ump_deg, ump_mul, ump_price, ump_main_deg_extend],
['ump_deg', 'ump_mul', 'ump_price', 'ump_main_deg_extend']))
else:
# Under 3.4 the ump inner classes cannot be found when pickled in child processes, so for now process them one by one in a single process
out = [pararllel_func(ump_object, ump_name) for ump_object, ump_name in zip([ump_deg, ump_mul, ump_price, ump_main_deg_extend],
['ump_deg', 'ump_mul', 'ump_price', 'ump_main_deg_extend'])]
# Gather the interception decisions of the umpires from each process
for sub_out in out:
order_has_result[sub_out[0]] = sub_out[1]
Explanation: We can iterate over the trade orders one by one, pass each order's buy-time features to the ump main-umpire deciders, and let every main umpire decide whether to intercept. This lets us compute each main umpire's interception accuracy as well as the overall interception rate, as shown below:
Note:
The code below demonstrates abupy's wrapper around joblib for multi-process scheduling, as well as abupy's multi-process progress bar
Under Python 3.4 the ump inner classes cannot be found when pickled in child processes, so for now a single process handles them one by one
End of explanation
block_pd = order_has_result.filter(regex='^ump_*')
# Sum the decisions of all main umpires
block_pd['sum_bk'] = block_pd.sum(axis=1)
block_pd['result'] = order_has_result['result']
# Any vote of 1 means the trade would be intercepted
block_pd = block_pd[block_pd.sum_bk > 0]
print('四个裁判整体拦截正确率{:.2f}%'.format(block_pd[block_pd.result == -1].result.count() / block_pd.result.count() * 100))
block_pd.tail()
Explanation: Summing all the main umpires' decisions; any vote of 1 triggers an interception. Overall interception accuracy of the four umpires:
End of explanation
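Besides accuracy, it can be useful to also look at how large a share of all finished test-set trades the combined main umpires would block; this is a small add-on, not part of the original notebook, and it only reuses the block_pd and order_has_result frames defined above:
# Share of finished trades that at least one main umpire would block
print('main ump block rate: {:.2f}%'.format(block_pd.shape[0] / order_has_result.shape[0] * 100))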
from sklearn import metrics
def sub_ump_show(block_name):
sub_block_pd = block_pd[(block_pd[block_name] == 1)]
# The interception is correct when the trade lost: map -1->1 and 1->0
sub_block_pd.result = np.where(sub_block_pd.result == -1, 1, 0)
return metrics.accuracy_score(sub_block_pd[block_name], sub_block_pd.result) * 100, sub_block_pd.result.count()
print('角度裁判拦截正确率{:.2f}%, 拦截交易数量{}'.format(*sub_ump_show('ump_deg')))
print('角度扩展裁判拦拦截正确率{:.2f}%, 拦截交易数量{}'.format(*sub_ump_show('ump_main_deg_extend')))
print('单混裁判拦截正确率{:.2f}%, 拦截交易数量{}'.format(*sub_ump_show('ump_mul')))
print('价格裁判拦截正确率{:.2f}%, 拦截交易数量{}'.format(*sub_ump_show('ump_price')))
Explanation: Next, the interception accuracy of each individual main umpire:
End of explanation
# The market must be set globally to the A-share market; ump saves and loads the umpires according to the market type
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
print('AbuUmpEdgeDeg begin...')
AbuUmpEdgeDeg.ump_edge_clf_dump(orders_pd_train_cn)
edge_deg = AbuUmpEdgeDeg(predict=True)
print('AbuUmpEdgePrice begin...')
AbuUmpEdgePrice.ump_edge_clf_dump(orders_pd_train_cn)
edge_price = AbuUmpEdgePrice(predict=True)
print('AbuUmpEdgeMul begin...')
AbuUmpEdgeMul.ump_edge_clf_dump(orders_pd_train_cn)
edge_mul = AbuUmpEdgeMul(predict=True)
print('AbuUmpEegeDegExtend begin...')
AbuUmpEegeDegExtend.ump_edge_clf_dump(orders_pd_train_cn)
edge_deg_extend = AbuUmpEegeDegExtend(predict=True)
print('fit edge complete!')
Explanation: 3. Training the edge umpires on the A-share training set
Next we train the edge umpires with the training-set trade data. The umpire combination again uses two of abupy's built-in umpires, AbuUmpEdgeDeg and AbuUmpEdgePrice, plus two external custom umpires, AbuUmpEdgeMul and AbuUmpEegeDegExtend, written in 'Section 18: Custom umpire trade decisions', as shown below
Note: because of how edge umpires work, their training is very fast, so we train directly here instead of loading umpire decision data from disk
End of explanation
def apply_ml_features_edge(order, predicter, progress):
if not isinstance(order.ml_features, dict):
import ast
# On older pandas versions the dict comes back as a str
ml_features = ast.literal_eval(order.ml_features)
else:
ml_features = order.ml_features
# Let the edge umpire make its ruling
progress.show()
# Pass the order's buy-time features to the ump edge-umpire decider and let each edge umpire decide whether to intercept
edge = predicter.predict(**ml_features)
return edge.value
def edge_pararllel_func(edge, edge_name):
with AbuMulPidProgress(len(order_has_result), '{} complete'.format(edge_name)) as progress:
# # 启动多进程进度条,对order_has_result进行apply
edge_result = order_has_result.apply(apply_ml_features_edge, axis=1, args=(edge, progress,))
return edge_name, edge_result
if sys.version_info > (3, 4, 0):
# python3.4以上并行处理4个边裁的决策,每一个边裁启动一个进程进行拦截决策
parallel = Parallel(
n_jobs=4, verbose=0, pre_dispatch='2*n_jobs')
out = parallel(delayed(edge_pararllel_func)(edge, edge_name)
for edge, edge_name in zip([edge_deg, edge_price, edge_mul, edge_deg_extend],
['edge_deg', 'edge_price', 'edge_mul', 'edge_deg_extend']))
else:
# Under 3.4 the ump inner classes cannot be found when pickled in child processes, so for now process them one by one in a single process
out = [edge_pararllel_func(edge, edge_name) for edge, edge_name in zip([edge_deg, edge_price, edge_mul, edge_deg_extend],
['edge_deg', 'edge_price', 'edge_mul', 'edge_deg_extend'])]
# Gather the interception decisions of the umpires from each process
for sub_out in out:
order_has_result[sub_out[0]] = sub_out[1]
Explanation: 4. Verifying whether the A-share edge umpires are competent
In a similar way to the main umpires, we iterate over the trade orders one by one, pass each order's buy-time features to the ump edge-umpire deciders, let every edge umpire decide whether to intercept, and compute each edge umpire's interception accuracy and the overall interception rate, as shown below:
Note: the code below demonstrates abupy's wrapper around joblib for multi-process scheduling, as well as abupy's multi-process progress bar
End of explanation
block_pd = order_has_result.filter(regex='^edge_*')
# predict returns 1 for win_top;
# we only care about loss_top, so keep only the -1 votes and convert any 1 to 0.
block_pd['edge_block'] = \
np.where(np.min(block_pd, axis=1) == -1, -1, 0)
# Take the real trade results
block_pd['result'] = order_has_result['result']
# Keep the -1 results, i.e. the ones judged loss_top
block_pd = block_pd[block_pd.edge_block == -1]
print('四个裁判整体拦截正确率{:.2f}%'.format(block_pd[block_pd.result == -1].result.count() /
block_pd.result.count() * 100))
print('四个边裁拦截交易总数{}, 拦截率{:.2f}%'.format(
block_pd.shape[0],
block_pd.shape[0] / order_has_result.shape[0] * 100))
block_pd.head()
Explanation: Aggregating all the edge umpires' decisions: any vote of -1 (judged loss_top) is collected together with the real trade result column to form a result set, from which we compute the four edge umpires' overall interception accuracy and interception rate, as shown below:
End of explanation
from sklearn import metrics
def sub_edge_show(edge_name):
sub_edge_block_pd = order_has_result[(order_has_result[edge_name] == -1)]
return metrics.accuracy_score(sub_edge_block_pd[edge_name], sub_edge_block_pd.result) * 100, sub_edge_block_pd.shape[0]
print('角度边裁拦截正确率{0:.2f}%, 拦截交易数量{1:}'.format(*sub_edge_show('edge_deg')))
print('单混边裁拦截正确率{0:.2f}%, 拦截交易数量{1:}'.format(*sub_edge_show('edge_mul')))
print('价格边裁拦截正确率{0:.2f}%, 拦截交易数量{1:}'.format(*sub_edge_show('edge_price')))
print('角度扩展边裁拦截正确率{0:.2f}%, 拦截交易数量{1:}'.format(*sub_edge_show('edge_deg_extend')))
Explanation: Next, the interception accuracy of each individual edge umpire:
End of explanation
# Enable the built-in main umpires
abupy.env.g_enable_ump_main_deg_block = True
abupy.env.g_enable_ump_main_price_block = True
# Enable the built-in edge umpires
abupy.env.g_enable_ump_edge_deg_block = True
abupy.env.g_enable_ump_edge_price_block = True
# Feature generation must be enabled during the backtest, because the enabled umpires need the features as input
abupy.env.g_enable_ml_feature = True
# Use the previously split test-set data for the backtest
abupy.env.g_enable_last_split_test = True
abupy.beta.atr.g_atr_pos_base = 0.05
Explanation: 4. Enabling main-umpire and edge-umpire interception in the abu system
Enabling the built-in umpires is simple and only requires the relevant env settings, as shown below, enabling two built-in main umpires and two built-in edge umpires:
End of explanation
feature.clear_user_feature()
# AbuFeatureDegExtend, the 10/30/50/90/120-day trend-fit angle feature, recorded as a new perspective during the backtest
feature.append_user_feature(AbuFeatureDegExtend)
# Turn on the switch for user-defined umpires
ump.manager.g_enable_user_ump = True
# Clear any previously registered user umpires first
ump.manager.clear_user_ump()
# Add the new umpire class AbuUmpEegeDegExtend to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpEegeDegExtend)
# Add the new umpire class AbuUmpMainDegExtend to the system with append_user_ump
ump.manager.append_user_ump(AbuUmpMainDegExtend)
Explanation: Enabling user-defined umpires was also demonstrated in 'Section 18: Custom umpire trade decisions'; it just takes ump.manager.append_user_ump
Note that we also need AbuFeatureDegExtend, the 10/30/50/90/120-day trend-fit angle feature, as a new perspective for recording the match (recording backtest features) during the backtest, because the umpires include AbuUmpEegeDegExtend and AbuUmpMainDegExtend, which need backtest trade orders that carry the 10/30/50/90/120-day trend-fit angle features
The code is shown below:
End of explanation
# Initial capital: 5,000,000
read_cash = 5000000
# The buy factors still use the upward-breakout factor
buy_factors = [{'xd': 60, 'class': AbuFactorBuyBreak},
{'xd': 42, 'class': AbuFactorBuyBreak}]
# The sell factors continue to use the factors from the previous section
sell_factors = [
{'stop_loss_n': 1.0, 'stop_win_n': 3.0,
'class': AbuFactorAtrNStop},
{'class': AbuFactorPreAtrNStop, 'pre_atr_n': 1.5},
{'class': AbuFactorCloseAtrNStop, 'close_atr_n': 1.5}
]
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
Explanation: The buy factors, sell factors and other settings stay the same as before, as shown below:
End of explanation
abupy.env.g_market_target = EMarketTargetType.E_MARKET_TARGET_CN
abupy.env.g_data_fetch_mode = EMarketDataFetchMode.E_DATA_FETCH_FORCE_LOCAL
abu_result_tuple_test_ump = None
def run_loop_back_ump():
global abu_result_tuple_test_ump
abu_result_tuple_test_ump, _ = abu.run_loop_back(read_cash,
buy_factors,
sell_factors,
choice_symbols=None,
start='2012-08-08', end='2017-08-08')
# Save the run results locally so the backtest can be analyzed later; the code for saving the backtest result data is shown below
abu.store_abu_result_tuple(abu_result_tuple_test_ump, n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_ump_cn')
ABuProgress.clear_output()
def run_load_ump():
global abu_result_tuple_test_ump
abu_result_tuple_test_ump = abu.load_abu_result_tuple(n_folds=5, store_type=EStoreAbu.E_STORE_CUSTOM_NAME,
custom_name='test_ump_cn')
def select_ump(select):
if select == 'run loop back ump':
run_loop_back_ump()
else:
run_load_ump()
_ = ipywidgets.interact_manual(select_ump, select=['run loop back ump', 'load test ump data'])
Explanation: With the umpire combination enabled, we can start the backtest; the backtest workflow is the same as before:
To start the backtest, on the first run choose select: run loop back ump and click run select_ump; if you have already backtested, choose select: load test ump data to read it directly from the cached data:
End of explanation
AbuMetricsBase.show_general(*abu_result_tuple_test_ump, returns_cmp=True, only_info=True)
AbuMetricsBase.show_general(*abu_result_tuple_test, returns_cmp=True, only_info=True)
Explanation: Comparing the A-share test-set trades with main/edge umpire interception enabled versus disabled, we can see that close to half of the trades were intercepted, and both the win rate and the profit/loss ratio improved substantially:
End of explanation |
13,958 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Do generations exist?
This notebook contains a "one-day paper", my attempt to pose a research question, answer it, and publish the results in one work day (May 13, 2016).
Copyright 2016 Allen B. Downey
MIT License
Step1: What's a generation supposed to be, anyway?
If generation names like "Baby Boomers" and "Generation X" are just a short way of referring to people born during certain intervals, you can use them without implying that these categories have any meaningful properties.
But if these names are supposed to refer to generations with identifiable characteristics, we can test whether these generations exist. In this notebook, I suggest one way to formulate generations as a claim about the world, and test it.
Suppose we take a representative sample of people in the U.S., divide them into cohorts by year of birth, and measure the magnitude of the differences between consecutive cohorts. Of course, there are many ways we could define and measure these differences; I'll suggest one in a minute.
But ignoring the details for now, what would those difference look like if generations exist? Presumably, the differences between successive cohorts would be relatively small within each generation, and bigger between generations.
If we plot the cumulative total of these differences, we expect to see something like the figure below (left), with relatively fast transitions (big differences) between generations, and periods of slow change (small differences) within generations.
On the other hand, if there are no generations, we expect the differences between successive cohorts to be about the same. In that case the cumulative differences should look like a straight line, as in the figure below (right)
Step2: Then I load the data itself
Step3: I'm going to drop two variables that turned out to be mostly N/A
Step4: And then replace the special codes 8, 9, and 0 with N/A
Step5: For the age variable, I also have to replace 99 with N/A
Step6: Here's an example of a typical variable on a 5-point Likert scale.
Step7: I have to compute year born
Step8: Here's what the distribution looks like. The survey includes roughly equal numbers of people born each year from 1922 to 1996.
Step9: Next I sort the respondents by year born and then assign them to cohorts so there are 200 people in each cohort.
Step10: I end up with the same number of people in each cohort (except the last).
Step11: Then I can group by cohort.
Step12: I'll instantiate an object for each cohort.
Step13: To compute the difference between successive cohorts, I'll loop through the questions, compute Pmfs to represent the responses, and then compute the difference between Pmfs.
I'll use two functions to compute these differences. One computes the difference in means
Step14: The other computes the Jensen-Shannon divergence
Step15: First I'll loop through the groups and make Cohort objects
Step16: Each cohort spans a range about 3 birth years. For example, the cohort at index 10 spans 1965 to 1967.
Step17: Here's the total divergence between the first two cohorts, using the mean difference between Pmfs.
Step18: And here's the total J-S divergence
Step19: This loop computes the (absolute value) difference between successive cohorts and the cumulative sum of the differences.
Step20: The results are a nearly straight line, suggesting that there are no meaningful generations, at least as I've formulated the question.
Step21: The results looks pretty much the same using J-S divergence. | Python Code:
from __future__ import print_function, division
from thinkstats2 import Pmf, Cdf
import thinkstats2
import thinkplot
import pandas as pd
import numpy as np
from scipy.stats import entropy
%matplotlib inline
Explanation: Do generations exist?
This notebook contains a "one-day paper", my attempt to pose a research question, answer it, and publish the results in one work day (May 13, 2016).
Copyright 2016 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
dct = thinkstats2.ReadStataDct('GSS.dct')
Explanation: What's a generation supposed to be, anyway?
If generation names like "Baby Boomers" and "Generation X" are just a short way of referring to people born during certain intervals, you can use them without implying that these categories have any meaningful properties.
But if these names are supposed to refer to generations with identifiable characteristics, we can test whether these generations exist. In this notebook, I suggest one way to formulate generations as a claim about the world, and test it.
Suppose we take a representative sample of people in the U.S., divide them into cohorts by year of birth, and measure the magnitude of the differences between consecutive cohorts. Of course, there are many ways we could define and measure these differences; I'll suggest one in a minute.
But ignoring the details for now, what would those differences look like if generations exist? Presumably, the differences between successive cohorts would be relatively small within each generation, and bigger between generations.
If we plot the cumulative total of these differences, we expect to see something like the figure below (left), with relatively fast transitions (big differences) between generations, and periods of slow change (small differences) within generations.
On the other hand, if there are no generations, we expect the differences between successive cohorts to be about the same. In that case the cumulative differences should look like a straight line, as in the figure below (right):
So, how should we quantify the differences between successive cohorts? When people talk about generational differences, they are often talking about differences in attitudes about political issues, social issues, and other cultural questions. Fortunately, these are exactly the sorts of things surveyed by the General Social Survey (GSS).
To gather data, I selected questions from the GSS that were asked during the last three cycles (2010, 2012, 2014) and that were coded on a 5-point Likert scale.
You can see the variables that met these criteria, and download the data I used, here:
https://gssdataexplorer.norc.org/projects/13170/variables/data_cart
Now let's see what we got.
First I load the data dictionary, which contains the metadata:
End of explanation
df = dct.ReadFixedWidth('GSS.dat')
Explanation: Then I load the data itself:
End of explanation
df.drop(['immcrime', 'pilloky'], axis=1, inplace=True)
Explanation: I'm going to drop two variables that turned out to be mostly N/A
End of explanation
df.ix[:, 3:] = df.ix[:, 3:].replace([8, 9, 0], np.nan)
df.head()
Explanation: And then replace the special codes 8, 9, and 0 with N/A
End of explanation
df.age.replace([99], np.nan, inplace=True)
Explanation: For the age variable, I also have to replace 99 with N/A
End of explanation
thinkplot.Hist(Pmf(df.choices))
Explanation: Here's an example of a typical variable on a 5-point Likert scale.
End of explanation
df['yrborn'] = df.year - df.age
Explanation: I have to compute year born
End of explanation
pmf_yrborn = Pmf(df.yrborn)
thinkplot.Cdf(pmf_yrborn.MakeCdf())
Explanation: Here's what the distribution looks like. The survey includes roughly equal numbers of people born each year from 1922 to 1996.
End of explanation
df_sorted = df[~df.age.isnull()].sort_values(by='yrborn')
df_sorted['counter'] = np.arange(len(df_sorted), dtype=int) // 200
df_sorted[['year', 'age', 'yrborn', 'counter']].head()
df_sorted[['year', 'age', 'yrborn', 'counter']].tail()
Explanation: Next I sort the respondents by year born and then assign them to cohorts so there are 200 people in each cohort.
End of explanation
thinkplot.Cdf(Cdf(df_sorted.counter))
None
Explanation: I end up with the same number of people in each cohort (except the last).
End of explanation
groups = df_sorted.groupby('counter')
Explanation: Then I can group by cohort.
End of explanation
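As a quick optional check on the grouping (not in the original notebook), the size of each cohort group can be summarized directly from the groupby object:
# Summary of cohort sizes; all but the last group should contain 200 respondents
groups.size().describe()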
class Cohort:
skip = ['year', 'id_', 'age', 'yrborn', 'cohort', 'counter']
def __init__(self, name, df):
self.name = name
self.df = df
self.pmf_map = {}
def make_pmfs(self):
for col in self.df.columns:
if col in self.skip:
continue
self.pmf_map[col] = Pmf(self.df[col].dropna())
try:
self.pmf_map[col].Normalize()
except ValueError:
print(self.name, col)
def total_divergence(self, other, divergence_func):
total = 0
for col, pmf1 in self.pmf_map.items():
pmf2 = other.pmf_map[col]
divergence = divergence_func(pmf1, pmf2)
#print(col, pmf1.Mean(), pmf2.Mean(), divergence)
total += divergence
return total
Explanation: I'll instantiate an object for each cohort.
End of explanation
def MeanDivergence(pmf1, pmf2):
return abs(pmf1.Mean() - pmf2.Mean())
Explanation: To compute the difference between successive cohorts, I'll loop through the questions, compute Pmfs to represent the responses, and then compute the difference between Pmfs.
I'll use two functions to compute these differences. One computes the difference in means:
End of explanation
def JSDivergence(pmf1, pmf2):
xs = set(pmf1.Values()) | set(pmf2.Values())
ps = np.asarray(pmf1.Probs(xs))
qs = np.asarray(pmf2.Probs(xs))
ms = ps + qs
return 0.5 * (entropy(ps, ms) + entropy(qs, ms))
Explanation: The other computes the Jensen-Shannon divergence
End of explanation
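As a small sanity check (not part of the original analysis), this divergence should be symmetric and reach its maximum of log(2), about 0.693, for two non-overlapping distributions; the toy Pmfs below are made up purely for illustration:
# Two disjoint single-point Pmfs: divergence should equal log(2) in both directions
pmf_a = Pmf({1: 1.0})
pmf_b = Pmf({2: 1.0})
JSDivergence(pmf_a, pmf_b), JSDivergence(pmf_b, pmf_a)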
cohorts = []
for name, group in groups:
cohort = Cohort(name, group)
cohort.make_pmfs()
cohorts.append(cohort)
len(cohorts)
Explanation: First I'll loop through the groups and make Cohort objects
End of explanation
cohorts[11].df.yrborn.describe()
Explanation: Each cohort spans a range of about 3 birth years. For example, the cohort at index 10 spans 1965 to 1967.
End of explanation
cohorts[0].total_divergence(cohorts[1], MeanDivergence)
Explanation: Here's the total divergence between the first two cohorts, using the mean difference between Pmfs.
End of explanation
cohorts[0].total_divergence(cohorts[1], JSDivergence)
Explanation: And here's the total J-S divergence:
End of explanation
res = []
cumulative = 0
for i in range(len(cohorts)-1):
td = cohorts[i].total_divergence(cohorts[i+1], MeanDivergence)
cumulative += td
print(i, td, cumulative)
res.append((i, cumulative))
Explanation: This loop computes the (absolute value) difference between successive cohorts and the cumulative sum of the differences.
End of explanation
xs, ys = zip(*res)
thinkplot.Plot(xs, ys)
thinkplot.Config(xlabel='Cohort #',
ylabel='Cumulative difference in means',
legend=False)
Explanation: The results are a nearly straight line, suggesting that there are no meaningful generations, at least as I've formulated the question.
End of explanation
res = []
cumulative = 0
for i in range(len(cohorts)-1):
td = cohorts[i].total_divergence(cohorts[i+1], JSDivergence)
cumulative += td
print(i, td, cumulative)
res.append((i, cumulative))
xs, ys = zip(*res)
thinkplot.Plot(xs, ys)
thinkplot.Config(xlabel='Cohort #',
ylabel='Cumulative JS divergence',
legend=False)
Explanation: The results look pretty much the same using J-S divergence.
End of explanation |
13,959 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Intro to Git
Authors
Step1: Looking at files in a repo
A repository is just a directory. Let's poke around.
Step2: The special .git directory is where git stores all its magic. If you delete it (or this whole directory), the repository won't be a repository any more.
Step3: Making changes
Step4: Creating the file only changed the local filesystem. We can go to the repository page on Github to verify that the file hasn't been added yet. You probably wouldn't want your changes to be published immediately to the world!
Step5: If you check again, our file still hasn't been published to the world. In git, you package together your new files and updates to old files, and then you create a new version called a "commit."
Git maintains a "staging" or "index" area for files that you've marked for committing with git add.
Step6: Now our local repository has this new commit in it. Notice that the log shows the message we wrote when we made the commit. It is very tempting to write something like "stuff" here. But then it will be very hard to understand your history, and you'll lose some of the benefits of git.
For the same reason, try to make each commit a self-contained idea
Step7: Now our commit is finally visible on Github. Even if we spill coffee on our laptop, our new state will be safely recorded in the remote repository.
Going back
Oops, we didn't want that file! In fact, if you look at the history, people have been adding a bunch of silly files. We don't want any of them.
Once a commit is created, git basically never forgets about it or its contents (unless you try really hard). When your local filesystem doesn't have any outstanding changes, it's easy to switch back to an older commit.
We have previously given the name first to the first commit in the repo, which had basically nothing in it. (We'll soon see how to assign names to commits.)
Step8: Note
Step9: How does committing work?
Every commit is a snapshot of some files. A commit can never be changed. It has a unique ID assigned by git, like 20f97c1.
Humans can't work with IDs like that, so git lets us give names like master or first to commits, using git branch <name> <commit ID>. These names are called "branches" or "refs" or "tags." They're just names. Often master is used for the most up-to-date commit in a repository, but not always.
At any point in time, your repository is pointing to a commit. Except in unusual cases, that commit will have a name. Git gives that name its own name
Step10: Here origin is the name (according to git remote -v) of the repository you want to push to. If you omit a remote name, origin is also the default. Normally that's what you want.
going-back-{our_id} (whatever the value of {our_id}) is a branch in your repository. If you omit a branch name here, your current branch (the branch HEAD refers to) is the default.
What do you think git does?
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
A few things happen
Step11: Now we go back to our original repo.
Step12: You might just want the update. Or maybe you want to push your own commit to the same branch, and your git push failed.
Git has a command called pull that you could use. But it's complicated, and it's easier to break it down into two steps
Step13: Now we need to update our ref to the newer commit. In this case, it's easy, because we didn't have any further commits. Git calls that a "fast-forward" merge. | Python Code:
cd /tmp
# Delete the repo if it happens to already exist:
!rm -rf git-intro
# Create the repo
!git clone https://github.com/DS-100/git-intro git-intro
!ls -lh | grep git-intro
cd git-intro
Explanation: Intro to Git
Authors: Henry Milner, Andrew Do. Some of the material in this notebook is inspired by lectures by Prof. George Necula in CS 169.
Why git?
Your first reason for this class (any likely many classes and projects to come): It's the only way to interact with other developers, because everyone uses it.
"Everyone?" Yes. Github, the biggest host for public git repositories, has 20 million repositories. There are probably many more private repositories. (You can create either.)
Better reasons:
* Work without fear. If you make a change that breaks something (or just wasn't a good idea), you can always go back.
* Work on multiple computers. Much simpler and less error-prone than emailing yourself files.
* Collaborate with other developers.
* Maintain multiple versions.
However, git can be a little confusing. Many confusions happen because people don't understand the fundamentals you'll learn today. If you've got the basics, the impact of other confusions will be bounded, and you can probably figure out how to search for a solution.
Cloning an existing repository
We made a special repository for this section (it takes 5 seconds) here:
https://github.com/DS-100/git-intro
We'll use a Jupyter notebook, but you can run any of these commands in a Bash shell. Note that cd is a magic command in Jupyter that doesn't have a ! in front of it. !cd only works for the line you write it on.
We'll check out the repo in the /tmp folder, which the OS will wipe when you reboot. Obviously, don't do that if you want to keep the repo.
End of explanation
# What files are in the repo?
!ls -lh
# What about hidden files?
!ls -alh
Explanation: Looking at files in a repo
A repository is just a directory. Let's poke around.
End of explanation
# What's the current status, according to git?
!git status
# What's the history of the repo?
!git log
# What does README.md look like currently?
!cat README.md
Explanation: The special .git directory is where git stores all its magic. If you delete it (or this whole directory), the repository won't be a repository any more.
End of explanation
# We can use Python to compute the filename.
# Then we can reference Python variables in
# ! shell commands using {}, because Jupyter
# is magic.
import datetime
our_id = datetime.datetime.now().microsecond
filename = "our_file_{:d}.txt".format(our_id)
filename
!echo "The quick brown fox \
jumped over the lzy dog." > "{filename}"
!ls
Explanation: Making changes: Our first commit
Suppose we want to add a file. You could create a Jupyter notebook or download an image. For simplicity, we'll just add a text file.
End of explanation
!git add "{filename}"
Explanation: Creating the file only changed the local filesystem. We can go to the repository page on Github to verify that the file hasn't been added yet. You probably wouldn't want your changes to be published immediately to the world!
End of explanation
!git status
!git commit -m 'Added our new file, "{filename}"'
!git status
!git log
Explanation: If you check again, our file still hasn't been published to the world. In git, you package together your new files and updates to old files, and then you create a new version called a "commit."
Git maintains a "staging" or "index" area for files that you've marked for committing with git add.
End of explanation
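As an aside (not part of the original walkthrough), git can show you precisely what is sitting in the staging area before you commit; right after the commit above this will print nothing, but it is the standard pre-commit check:
# Optional: show exactly what is staged right now (empty here, since we just committed)
!git diff --staged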
!git remote -v
!git help push
!git push origin
Explanation: Now our local repository has this new commit in it. Notice that the log shows the message we wrote when we made the commit. It is very tempting to write something like "stuff" here. But then it will be very hard to understand your history, and you'll lose some of the benefits of git.
For the same reason, try to make each commit a self-contained idea: You fixed a particular bug, added a particular feature, etc.
Our commit hasn't been published to other repositories yet, including the one on Github. We can check again to verify that.
To publish a commit we've created locally to another repository, we use git push. Git remembers that we checked out from the Github repository, and by default it will push to that repository. Just to be sure, let's find the name git has given to that repository, and pass that explicitly to git push.
End of explanation
!git help branch
!git branch --list
# Let's make a new name for the first commit, "going-back",
# with our ID in there so we don't conflict with other
# sections.
!git branch going-back-{our_id} first
!git branch --list
!git checkout going-back-{our_id}
!ls
!git status
!git log --graph --decorate first going-back-{our_id} master
Explanation: Now our commit is finally visible on Github. Even if we spill coffee on our laptop, our new state will be safely recorded in the remote repository.
Going back
Oops, we didn't want that file! In fact, if you look at the history, people have been adding a bunch of silly files. We don't want any of them.
Once a commit is created, git basically never forgets about it or its contents (unless you try really hard). When your local filesystem doesn't have any outstanding changes, it's easy to switch back to an older commit.
We have previously given the name first to the first commit in the repo, which had basically nothing in it. (We'll soon see how to assign names to commits.)
End of explanation
new_filename = "our_second_file_{}.txt".format(our_id)
new_filename
!echo "Text for our second file!" > {new_filename}
!ls
!git add {new_filename}
!git commit -m'Adding our second file!'
!git status
!git log --graph --decorate first going-back-{our_id} master
Explanation: Note: we can always get back to the commit we made with:
git checkout master
Branches and commits
Git informs us that we've switched to the going-back "branch," and in the local filesystem, neither the file we created nor any other files, other than README.md, are there any more. What do you think would happen if we made some changes and made a new commit now?
A. The previous commits would be overwritten. The master branch would disappear.
B. The previous commits would be overwritten. The master branch would now refer to our new commit.
C. A new commit would be created. The master branch would still refer to our last commit. The first branch would refer to the new commit.
D. A new commit would be created. The master branch would still refer to our last commit. The first branch would still refer to the first commit in the repository.
E. Git would ask us what to do, because it's not clear what we intended.
F. Something else?
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
Let's find out.
End of explanation
!git push origin going-back-{our_id}
Explanation: How does committing work?
Every commit is a snapshot of some files. A commit can never be changed. It has a unique ID assigned by git, like 20f97c1.
Humans can't work with IDs like that, so git lets us give names like master or first to commits, using git branch <name> <commit ID>. These names are called "branches" or "refs" or "tags." They're just names. Often master is used for the most up-to-date commit in a repository, but not always.
At any point in time, your repository is pointing to a commit. Except in unusual cases, that commit will have a name. Git gives that name its own name: HEAD. Remember: HEAD is a special kind of name. It refers to other names rather than to a commit.
<img src="before_commit.jpg">
When you commit:
Git creates your new commit.
To keep track of its lineage, git records that your new commit is a "child" of the current commit. That's what the lines in that git log line are showing.
Git updates whatever name HEAD points to (your "current branch"). Now that name refers to the new commit.
<img src="after_commit.jpg">
Can you list all the pieces that make up the full state of your git repository?
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
All the commits with their IDs.
All the pointers from commits to their parents (the previous commit they built on).
All your "refs," each pointing to a commit.
The HEAD, which points to a ref.
The "working directory," which is all the actual files you see.
The "index" or "staging" area, which is all the files you've added with git add but haven't committed yet. (You can find out what's staged with git status. The staging area is confusing, so use it sparingly. Usually you should stage things and then immediately create a commit.)
A list of "remotes," which are other repositories your repository knows about. Often this is just the repository you cloned.
The last-known state of the remotes' refs.
[...there are more, but these are the main ones.]
How does pushing work?
In git, every repository is coequal. The repository we cloned from Github looks exactly like ours, except it might contain different commits and names.
Suppose you want to publish your changes.
End of explanation
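If you want to see these pieces directly on your own machine, a couple of read-only plumbing commands make them concrete; this is an optional aside and is not needed for the rest of the exercise:
# Which ref does HEAD point to right now?
!git symbolic-ref HEAD
# Which commit does each local ref (branch) point to?
!git show-ref --heads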
cd /tmp
!git clone https://github.com/DS-100/git-intro git-intro-2
cd /tmp/git-intro-2
!git checkout going-back-{our_id}
third_filename = "our_third_file_{}.txt".format(our_id)
third_filename
!echo "Someone else added this third file!" > {third_filename}
!git add {third_filename}
!git commit -m"Adding a third file!"
!git push
Explanation: Here origin is the name (according to git remote -v) of the repository you want to push to. If you omit a remote name, origin is also the default. Normally that's what you want.
going-back-{our_id} (whatever the value of {our_id}) is a branch in your repository. If you omit a branch name here, your current branch (the branch HEAD refers to) is the default.
What do you think git does?
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
A few things happen:
1. Git finds all the commits in going-back-{our_id}'s history - all of its ancestors.
2. It sends all of those commits to origin, and they're added to that repository. (If origin already has a bunch of them, of course those don't need to be sent.)
3. It updates the branch named going-back-{our_id} in origin to point to the same commit yours does.
However, suppose someone else has updated going-back-{our_id} since you last got it?
456 (your going-back-{our_id})
 \    345 (origin's going-back-{our_id}, pushed by someone else)
  \   /
   \ /
    234 (going-back-{our_id} when you last pulled it from origin)
     |
    123
How do you think git handles that?
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
<br>
The answer may surprise you: git gives up and tells you you're not allowed to push. Instead, you have to pull the remote commits and merge them in your repository, then push after merging.
error: failed to push some refs to 'https://github.com/DS-100/git-intro.git'
hint: Updates were rejected because the remote contains work that you do
hint: not have locally. This is usually caused by another repository pushing
hint: to the same ref. You may want to first integrate the remote changes
hint: (e.g., 'git pull ...') before pushing again.
hint: See the 'Note about fast-forwards' in 'git push --help' for details.
We'll go over merging next, but the end result after merging will look like this:
567 (your going-back-{our_id})
| \
|  \
|   \
456  \
 \    345 (origin's going-back-{our_id}, pushed by someone else)
  \   /
   \ /
    234 (going-back-{our_id} when you last pulled it from origin)
     |
    123
Then git push origin going-back-{our_id} would succeed, since there are now no conflicts. We're updating going-back-{our_id} to a commit that's a descendant of the current commit going-back-{our_id} names in origin.
So it remains to see how to accomplish a merge. We need to start with pulling updates from other repositories.
How does pulling work?
Suppose someone else pushes a commit to the remote repository. We can simulate that with our own second repository:
End of explanation
cd /tmp/git-intro
Explanation: Now we go back to our original repo.
End of explanation
!git help fetch
!git fetch origin
!git log --graph --decorate going-back-{our_id} origin/going-back-{our_id}
Explanation: You might just want the update. Or maybe you want to push your own commit to the same branch, and your git push failed.
Git has a command called pull that you could use. But it's complicated, and it's easier to break it down into two steps: fetching and merging.
Since git commits are never destroyed, it's always safe to fetch commits from another repository. (Refs can be changed, so that's not true for refs. That's the source of the problem with our push before!)
End of explanation
!git merge origin/going-back-{our_id} --ff-only
!git log --graph --decorate
Explanation: Now we need to update our ref to the newer commit. In this case, it's easy, because we didn't have any further commits. Git calls that a "fast-forward" merge.
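If both sides had added commits, --ff-only would refuse to merge. A plain merge (a sketch, reusing the same branch name) handles that case by creating a new commit with two parents:
!git merge origin/going-back-{our_id} -m "Merge remote changes into going-back"
!git log --graph --decorate --oneline -n 5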
End of explanation |
13,960 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Win/Loss Betting Model
Step1: Obtain results of teams within the past year
Step2: Pymc Model
Determining Binary Win Loss
Step3: Save Model
Step4: Diagnostics
Step5: Moar Plots
Step6: Non-MCMC Model | Python Code:
import pandas as pd
import numpy as np
import datetime as dt
from scipy.stats import norm, bernoulli
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
from spcl_case import *
plt.style.use('fivethirtyeight')
Explanation: Win/Loss Betting Model
End of explanation
h_matches = pd.read_csv('hltv_csv/matchResults.csv').set_index('Match ID')
h_matches['Date'] = pd.to_datetime(h_matches['Date'])
h_teams = pd.read_csv('hltv_csv/teams_w_ranking.csv')
h_teams = fix_teams(h_teams.set_index('ID'))
h_players = pd.read_csv('hltv_csv/matchLineups.csv').set_index('Match ID')
h_player_names = pd.read_csv('hltv_csv/players.csv').set_index('ID')
MIN_DATE = dt.datetime(2017,1,1)
EVENT_SET = 'eslpl'
FILTER_TEAMS = {'eslpl': ['OpTic', 'SK', 'Cloud9', 'Liquid', 'Luminosity', 'Misfits', 'Renegades', 'Immortals',
'Splyce', 'compLexity', 'Rogue', 'Ghost', 'CLG', 'NRG', 'FaZe', 'North',
'BIG', 'LDLC', 'mousesports', 'EnVyUs', 'NiP', 'Virtus.pro',
'Astralis', 'G2', 'GODSENT', 'Heroic', 'fnatic', 'NiP', 'Heroic'],
'mdleu': ['Virtus.pro', 'FlipSid3', 'eXtatus', 'AGO', 'Fragsters', 'Gambit', 'PRIDE', '1337HUANIA',
'VITALIS', 'Epsilon', 'CHAOS', 'Crowns', 'MK', 'Japaleno', 'Not Academy', 'aAa', 'Space Soldiers',
'Singularity', 'Nexus', 'Invictus Aquilas', 'Spirit', 'Kinguin', 'Seed', 'Endpoint', 'iGame.com', 'TEAM5',
'ALTERNATE aTTaX'],
'mdlna': ['Gale Force', 'FRENCH CANADIANS', 'Mythic', 'GX', 'Beacon', 'Torqued', 'Rise Nation', 'Denial', 'subtLe',
'SoaR', 'Muffin Lightning', 'Iceberg', 'ex-Nitrious', 'Adaptation', 'Morior Invictus', 'Naventic', 'CheckSix', 'Good People'
, 'LFAO', 'CLG Academy', 'Ambition', 'Mostly Harmless', 'Gorilla Core', 'ex-Nitrious', 'ANTI ECO'],
'mdlau': ['Grayhound', 'Tainted Minds', 'Kings', 'Chiefs', 'Dark Sided', 'seadoggs', 'Athletico', 'Legacy',
'SIN', 'Noxide', 'Control', 'SYF', 'Corvidae', 'Funkd', 'Masterminds', 'Conspiracy', 'AVANT']
}
h_matches = h_matches[h_matches['Date'] >= MIN_DATE]
h_matches['winner'] = h_matches.apply(lambda x: x['Team 1 Score'] > x['Team 2 Score'], axis=1)
h_matches['score_diff'] = h_matches['Team 1 Score'] - h_matches['Team 2 Score']
h_matches = h_matches.join(h_players)
player_col_names = ['Team 1 Player 1', 'Team 1 Player 2', 'Team 1 Player 3', 'Team 1 Player 4', 'Team 1 Player 5',
                    'Team 2 Player 1', 'Team 2 Player 2', 'Team 2 Player 3', 'Team 2 Player 4', 'Team 2 Player 5']
# count appearances per player and keep only players seen in more than 10 matches
player_plays = h_matches[['Map', 'score_diff', 'winner'] + player_col_names].melt(value_vars=player_col_names)
player_plays = player_plays['value'].value_counts()
player_plays.hist(bins=30)
print(np.mean(player_plays > 10))
filt_players = player_plays[player_plays > 10].index
h_matches = h_matches[h_matches[player_col_names].isin(filt_players).all(axis=1)]
print(len(filt_players))
obs = h_matches[['Map', 'score_diff', 'winner'] + player_col_names]
obs = obs[obs.Map != 'Default'].dropna(axis=0)
obs.head()
players = np.sort(np.unique(np.concatenate(obs[player_col_names].values)))
maps = obs.Map.unique()
tmap = {v:k for k,v in dict(enumerate(players)).items()}
mmap = {v:k for k,v in dict(enumerate(maps)).items()}
n_players = len(players)
n_maps = len(maps)
print('Number of Players: %i ' % n_players)
print('Number of Matches: %i ' % len(h_matches))
print('Number of Maps: %i '% n_maps)
Explanation: Obtain results of teams within the past year
End of explanation
import pymc3 as pm
import theano.tensor as tt
obs_map = obs['Map'].map(mmap).values
obs_team = obs.reset_index()[player_col_names].apply(lambda x: x.map(tmap).values, axis=1).values
obs_team_1 = obs_team[:, :5]
obs_team_2 = obs_team[:, 5:10]
with pm.Model() as rating_model:
omega = pm.HalfCauchy('omega', 0.5)
tau = pm.HalfCauchy('tau', 0.5)
rating = pm.Normal('rating', 0, omega, shape=n_players)
theta_tilde = pm.Normal('rate_t', mu=0, sd=1, shape=(n_maps, n_players))
rating_map = pm.Deterministic('rating | map', rating + tau * theta_tilde).flatten()
diff = tt.sum(rating_map[obs_map[:,np.newaxis]*n_players+obs_team_1], axis=1) - tt.sum(rating_map[obs_map[:,np.newaxis]*n_players+obs_team_2], axis=1)
#p = 0.5*tt.tanh(diff)+0.5
alpha = 0.5
sigma = pm.HalfCauchy('sigma', 0.5)
sc = pm.Normal('observed score diff', 16*tt.tanh(alpha*diff), sigma, observed=obs['score_diff'])
#wl = pm.Bernoulli('observed wl', p=p, observed=obs['winner'].values)
with rating_model:
approx = pm.fit(20000, method='advi')
ap_trace = approx.sample(1000)
with rating_model:
trace = pm.sample(1000, n_init=20000, init='jitter+adapt_diag', nuts_kwargs={'target_accept': 0.90, 'max_treedepth': 14}, tune=550) # tune=1000, nuts_kwargs={'target_accept': 0.95}
some_special_list = [3741, 4959, 8797, 9216, 9219, 1916, 317, 2553,8611]
filt = h_player_names.loc[some_special_list]
sns.set_palette('Paired', 10)
f, ax = plt.subplots(figsize=(16,10))
ax.set_ylim(0,4.0)
[sns.kdeplot(ap_trace['rating'][:,tmap[i]], shade=True, alpha=0.55, legend=True, ax=ax, label=v['Name']) for i,v in filt.iterrows()]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
{v['Name']: [ap_trace['rating'][:,tmap[i]].mean(), ap_trace['rating'][:,tmap[i]].std()] for i,v in filt.iterrows()}
Explanation: Pymc Model
Determining Binary Win Loss: $wl_{m,i,j}$
$$
\omega, \tau \sim HC(0.5) \\
R_{k} \sim N(0, \omega^2) \\
\tilde{\theta}_{m,k} \sim N(0,1) \\
R_{m,k} = R_{k} + \tau\tilde{\theta}_{m,k} \\
wl_{m,i,j} \sim B(p = \text{Sig}(R_{m,i}-R_{m,j}))
$$
and score difference: $sc_{m,i,j}$
$$
\alpha \sim Gamma(10,5) \\
\kappa_{m,i,j} = 32\,\text{Sig}(\alpha(R_{m,i}-R_{m,j}))-16 \\
\sigma_{m} \sim HC(0.5) \\
sc_{m,i,j} \sim N(\kappa_{m,i,j}, \sigma_{m}^2)
$$
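As a quick sanity check on the score-difference likelihood, its mean is just a saturating function of the rating gap (a sketch, not part of the fitted model; alpha=0.5 mirrors the value hard-coded in the model block above):
import numpy as np
def expected_score_diff(rating_diff, alpha=0.5):
    # mean of the observed score-difference likelihood: saturates at +/- 16 rounds
    return 16 * np.tanh(alpha * rating_diff)
print(expected_score_diff(np.array([-2., -0.5, 0., 0.5, 2.])))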
End of explanation
EVENT_SET = 'all_player_sc'
pm.backends.text.dump('saved_model/'+EVENT_SET+'/trace', trace)
np.save('saved_model/'+EVENT_SET+'/players.npy', players)
np.save('saved_model/'+EVENT_SET+'/maps.npy', maps)
Explanation: Save Model
End of explanation
with rating_model:
approx = pm.fit(15000)
ap_trace = approx.sample(5000)
print('Gelman Rubin: %s' % pm.diagnostics.gelman_rubin(trace))
print('Effective N: %s' % pm.diagnostics.effective_n(trace))
print('Accept Prob: %.4f' % trace.get_sampler_stats('mean_tree_accept').mean())
print('Percentage of Divergent %.5f' % (trace['diverging'].nonzero()[0].size/float(len(trace))))
pm.traceplot(trace, varnames=['sigma', 'omega', 'tau'])
rating_model.profile(pm.gradient(rating_model.logpt, rating_model.vars), n=100).summary()
rating_model.profile(rating_model.logpt, n=100).summary()
Explanation: Diagnostics
End of explanation
# NOTE: n_teams, teams and the 'Team 1/2 ID' columns used from here on come from an earlier
# team-based version of this notebook and are not defined by the player-level cells above.
sns.set_palette('Paired', n_teams)
f, ax = plt.subplots(figsize=(16,10))
ax.set_ylim(0,2.0)
[sns.kdeplot(trace['sigma'][:,i], shade=True, alpha=0.55, legend=True, ax=ax, label=m) for i,m in enumerate(maps)]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
f, axes = plt.subplots(n_maps,1,figsize=(12,34), sharex=True)
for m, ax in enumerate(axes):
ax.set_title(dict(enumerate(maps))[m])
ax.set_ylim(0,2.0)
[sns.kdeplot(trace['rating | map'][:,m,tmap[i]], shade=True, alpha=0.55, legend=False ,
ax=ax, label=v['Name']) for i,v in filt.iterrows()]
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
filt
i = np.where(teams==7880)
j = np.where(teams==7924)
diff = (trace['rating'][:,j] - trace['rating'][:,i]).flatten()
kappa = 32./(1+np.exp(-1.*trace['alpha']*diff))-16.
fig, (ax1,ax2) = plt.subplots(1,2,figsize=(10,6))
sns.kdeplot(kappa, ax=ax2)
sns.kdeplot(diff, ax=ax1)
Explanation: Moar Plots
End of explanation
def vec2dict(s, n_teams):
return {
'mu': np.array(s[:n_teams]),
'sigma': np.array(s[n_teams:n_teams*2]),
'beta': s[-1],
}
def dict2vec(s):
return s['mu'] + s['sigma'] + [s['beta']]
skills_0 = dict2vec({
'mu': [1000]*n_teams,
'sigma': [300]*n_teams,
'beta': 50
})
from scipy.optimize import minimize
def loglike(y,p):
return -1.*(np.sum(y*np.log(p)+(1-y)*np.log(1.-p)))
def obj(skills):
s = vec2dict(skills, n_teams)
mean_diff = s['mu'][obs['Team 1 ID'].map(tmap).values] - s['mu'][obs['Team 2 ID'].map(tmap).values]
var_diff = s['sigma'][obs['Team 1 ID'].map(tmap).values]**2 + s['sigma'][obs['Team 2 ID'].map(tmap).values]**2 + skills[-1]**2
p = 1.-norm.cdf(0., loc=mean_diff, scale = np.sqrt(var_diff))
return loglike((obs['Team 1 ID'] == obs['winner']).values, p)
obj(skills_0)
# run the optimiser (this call was missing above; the choice of Nelder-Mead is an assumption)
g = minimize(obj, skills_0, method='Nelder-Mead')
opt_skill = g.x
print(opt_skill)
plots = norm.rvs(opt_skill[:5], opt_skill[5:-1], size=(2000,5))
f, ax = plt.subplots(figsize=(12,8))
[sns.kdeplot(plots[:,i], shade=True, alpha=0.55, legend=True, ax=ax, label=i) for i in range(5)]
Explanation: Non-MCMC Model
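A sketch of how the fitted skills would turn into a win probability, mirroring the Gaussian assumption inside obj() (the numbers below are made up for illustration, not fitted values):
from scipy.stats import norm
import numpy as np
def win_prob(mu_i, sigma_i, mu_j, sigma_j, beta):
    # P(team i beats team j): probability the skill-difference draw is positive
    mean_diff = mu_i - mu_j
    var_diff = sigma_i**2 + sigma_j**2 + beta**2
    return 1. - norm.cdf(0., loc=mean_diff, scale=np.sqrt(var_diff))
print(win_prob(1100., 250., 1000., 300., 50.))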
End of explanation |
13,961 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'messy-consortium', 'sandbox-1', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: MESSY-CONSORTIUM
Source ID: SANDBOX-1
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:10
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
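Every property cell below follows the same pattern; a purely hypothetical illustration of filling one in (the string is a placeholder, not a value for this model):
# Hypothetical illustration only -- not a real value for SANDBOX-1.
DOC.set_id('cmip6.land.key_properties.model_overview')
DOC.set_value("Placeholder: one-paragraph overview of the land surface model.")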
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependancies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General describe how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Basins not flowing to ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation |
13,962 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pyplot tutorial
When only a single list is passed to the plot() function, matplotlib assumes it is the sequence of y values and automatically generates the default x values (starting at 0, with the same len as y).
Step1: If two lists are provided
Step2: numpy can be used to pass arguments to plot(), and several plots can be drawn at once
Step3: Controlling line properties
Lines have several properties | Python Code:
import matplotlib.pyplot as plt
plt.plot([1,2,3,4])
plt.ylabel('some numbers')
plt.show()
Explanation: Pyplot tutorial
When only a single list is passed to the plot() function, matplotlib assumes it is the sequence of y values and automatically generates the default x values (starting at 0, with the same len as y).
End of explanation
import matplotlib.pyplot as plt
plt.plot([1,2,3,4], [1,4,9,16], "ro")
plt.axis([0, 6, 0, 20])
plt.ylabel('some numbers')
plt.xlabel('times')
plt.show()
Explanation: If two lists are provided: plot(xvalues, yvalues). The third argument is a format string indicating the color and line type of the plot. The axis method sets the range (viewport) of the axes: [minx, maxx, miny, maxy]
End of explanation
import numpy as np
import matplotlib.pyplot as plt
# evenly sampled time at 200ms intervals
t = np.arange(0., 5., 0.2)
# red dashes, blue squares and green triangles
plt.plot(t, t, 'r--', t, t**2, 'bs', t, t**3, 'g^')
plt.show()
Explanation: numpy can be used to pass arguments to plot(), and several plots can be drawn at once
End of explanation
import matplotlib.pyplot as plt
# Use keyword args
line, = plt.plot([1,2,3,4], linewidth=2.0)
# Use the setter methods of a Line2D instance
line.set_antialiased(False)
# Use the setp() command
plt.setp(line, color='r', linewidth=6.0)
plt.ylabel('some numbers')
plt.show()
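# Additional sketch: setp() also accepts a list of lines, which is the plt.setp(lines, ...)
# form referred to in the notes that follow. With no keyword arguments, setp lists the
# settable properties of an artist.
lines = plt.plot([1, 2, 3], [1, 4, 9], [1, 2, 3], [2, 4, 6])
plt.setp(lines, color='g', linewidth=3.0)  # apply the same properties to every line in the list
plt.setp(lines[0])                         # no kwargs: print the settable properties and valid values
plt.show()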
Explanation: Controlling line properties
Lines have several properties:
- linewidth
- dash style
- antialiased
- ...
See also: http://matplotlib.org/api/lines_api.html#matplotlib.lines.Line2D
There are several ways to set line properties:
- Use keyword args: plt.plot(x, y, linewidth=2.0)
- Use the setter methods of a Line2D instance: line.set_antialiased(False)
- Use the setp() command: plt.setp(lines, color='r', linewidth=2.0)
End of explanation |
13,963 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Q 1 (function practice)
Let's practice functions. Here's a simple function that takes a string and returns a list of all the 4 letter words
Step1: Write a version of this function that takes a second argument, n, that is the word length we want to search for
Q 2 (primes)
A prime number is divisible only by 1 and itself. We want to write a function that takes a positive integer, n, and finds all of the primes up to that number.
A simple (although not very fast) way to find the primes is to start at 2, and build a list of primes by checking if the current number is divisible by any of the previously found primes. If it is not divisible by any earlier primes, then it is a prime.
The modulus operator, % could be helpful here.
Q 3 (exceptions for error handling)
We want to safely convert a string into a float, int, or leave it as a string, depending on its contents. As we've already seen, python provides float() and int() functions for this
Step2: But these throw exceptions if the conversion is not possible
Step4: Notice that an int can be converted to a float, but if you convert a float to an int, you risk losing significant digits. A string cannot be converted to either.
your task
Write a function, convert_type(a) that takes a string a, and converts it to a float if it is a number with a decimal point, an int if it is an integer, or leaves it as a string otherwise, and returns the result. You'll want to use exceptions to prevent the code from aborting.
Q 4 (tic-tac-toe)
Here we'll write a simple tic-tac-toe game that 2 players can play. First we'll create a string that represents our game board
Step5: This board will look a little funny if we just print it—the spacing is set to look right when we replace the {} with x or o
Step6: and we'll use a dictionary to denote the status of each square, "x", "o", or empty, ""
Step7: Note that our {} placeholders in the board string have identifiers (the numbers in the {}). We can use these to match the variables we want to print to the placeholder in the string, regardless of the order in the format()
Step9: Here's an easy way to add the values of our dictionary to the appropriate squares in our game board. First note that each of the {} is labeled with a number that matches the keys in our dictionary. Python provides a way to unpack a dictionary into labeled arguments, using **
This lets us write a function to show the tic-tac-toe board.
Step11: Now we need a function that asks a player for a move
Step13: your task
Using the functions defined above,
* initialize_board()
* show_board()
* get_move()
fill in the function play_game() below to complete the game, asking for the moves one at a time, alternating between player 1 and 2 | Python Code:
def four_letter_words(message):
words = message.split()
four_letters = [w for w in words if len(w) == 4]
return four_letters
message = "The quick brown fox jumps over the lazy dog"
print(four_letter_words(message))
Explanation: Q 1 (function practice)
Let's practice functions. Here's a simple function that takes a string and returns a list of all the 4 letter words:
End of explanation
a = "2.0"
b = float(a)
print(b, type(b))
Explanation: Write a version of this function that takes a second argument, n, that is the word length we want to search for
Q 2 (primes)
A prime number is divisible only by 1 and itself. We want to write a function that takes a positive integer, n, and finds all of the primes up to that number.
A simple (although not very fast) way to find the primes is to start at 2, and build a list of primes by checking if the current number is divisible by any of the previously found primes. If it is not divisible by any earlier primes, then it is a prime.
The modulus operator, % could be helpful here.
Q 3 (exceptions for error handling)
We want to safely convert a string into a float, int, or leave it as a string, depending on its contents. As we've already seen, python provides float() and int() functions for this:
End of explanation
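# One possible sketch of the prime-finding approach described above for Q2; other
# implementations are certainly possible.
def primes_up_to(n):
    # trial division against the primes found so far
    primes = []
    for candidate in range(2, n + 1):
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
    return primes

print(primes_up_to(20))   # [2, 3, 5, 7, 11, 13, 17, 19]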
a = "this is a string"
b = float(a)
a = "1.2345"
b = int(a)
print(b, type(b))
b = float(a)
print(b, type(b))
Explanation: But these throw exceptions if the conversion is not possible
End of explanation
board =
{s1:^3} | {s2:^3} | {s3:^3}
-----+-----+-----
{s4:^3} | {s5:^3} | {s6:^3}
-----+-----+----- 123
{s7:^3} | {s8:^3} | {s9:^3} 456
789
Explanation: Notice that an int can be converted to a float, but if you convert a float to an int, you risk losing significant digits. A string cannot be converted to either.
your task
Write a function, convert_type(a) that takes a string a, and converts it to a float if it is a number with a decimal point, an int if it is an integer, or leaves it as a string otherwise, and returns the result. You'll want to use exceptions to prevent the code from aborting.
Q 4 (tic-tac-toe)
Here we'll write a simple tic-tac-toe game that 2 players can play. First we'll create a string that represents our game board:
End of explanation
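# A possible sketch for the convert_type() task described above, using exceptions to
# fall back from int to float to string.
def convert_type(a):
    try:
        return int(a)          # "3" -> 3
    except ValueError:
        pass
    try:
        return float(a)        # "1.23" -> 1.23
    except ValueError:
        return a               # anything else stays a string

print(convert_type("2"), convert_type("2.5"), convert_type("hello"))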
print(board)
Explanation: This board will look a little funny if we just print it—the spacing is set to look right when we replace the {} with x or o
End of explanation
play = {}
def initialize_board(play):
for n in range(9):
play["s{}".format(n+1)] = ""
initialize_board(play)
play
Explanation: and we'll use a dictionary to denote the status of each square, "x", "o", or empty, ""
End of explanation
a = "{s1:} {s2:}".format(s2=1, s1=2)
a
Explanation: Note that our {} placeholders in the board string have identifiers (the numbers in the {}). We can use these to match the variables we want to print to the placeholder in the string, regardless of the order in the format()
End of explanation
def show_board(play):
display the playing board. We take a dictionary with the current state of the board
We rely on the board string to be a global variable
print(board.format(**play))
show_board(play)
Explanation: Here's an easy way to add the values of our dictionary to the appropriate squares in our game board. First note that each of the {} is labeled with a number that matches the keys in our dictionary. Python provides a way to unpack a dictionary into labeled arguments, using **
This lets us write a function to show the tic-tac-toe board.
End of explanation
def get_move(n, xo, play):
ask the current player, n, to make a move -- make sure the square was not
already played. xo is a string of the character (x or o) we will place in
the desired square
valid_move = False
while not valid_move:
idx = input("player {}, enter your move (1-9)".format(n))
if play["s{}".format(idx)] == "":
valid_move = True
else:
print("invalid: {}".format(play["s{}".format(idx)]))
play["s{}".format(idx)] = xo
help(get_move)
Explanation: Now we need a function that asks a player for a move:
End of explanation
def play_game():
play a game of tic-tac-toe
Explanation: your task
Using the functions defined above,
* initialize_board()
* show_board()
* get_move()
fill in the function play_game() below to complete the game, asking for the moves one at a time, alternating between player 1 and 2
End of explanation |
13,964 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Experimenting with CV Scores
CVScores displays cross validation scores as a bar chart with the
average of the scores as a horizontal line.
Step2: Classification
Step3: Regression | Python Code:
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import StratifiedKFold
from yellowbrick.model_selection import CVScores
import os
from yellowbrick.download import download_all
## The path to the test data sets
FIXTURES = os.path.join(os.getcwd(), "data")
## Dataset loading mechanisms
datasets = {
"bikeshare": os.path.join(FIXTURES, "bikeshare", "bikeshare.csv"),
"concrete": os.path.join(FIXTURES, "concrete", "concrete.csv"),
"credit": os.path.join(FIXTURES, "credit", "credit.csv"),
"energy": os.path.join(FIXTURES, "energy", "energy.csv"),
"game": os.path.join(FIXTURES, "game", "game.csv"),
"mushroom": os.path.join(FIXTURES, "mushroom", "mushroom.csv"),
"occupancy": os.path.join(FIXTURES, "occupancy", "occupancy.csv"),
"spam": os.path.join(FIXTURES, "spam", "spam.csv"),
}
def load_data(name, download=True):
Loads and wrangles the passed in dataset by name.
If download is specified, this method will download any missing files.
# Get the path from the datasets
path = datasets[name]
# Check if the data exists, otherwise download or raise
if not os.path.exists(path):
if download:
download_all()
else:
raise ValueError((
"'{}' dataset has not been downloaded, "
"use the download.py module to fetch datasets"
).format(name))
# Return the data frame
return pd.read_csv(path)
Explanation: Experimenting with CV Scores
CVScores displays cross validation scores as a bar chart with the
average of the scores as a horizontal line.
End of explanation
from sklearn.naive_bayes import MultinomialNB
from sklearn.model_selection import StratifiedKFold
from yellowbrick.model_selection import CVScores
room = load_data("occupancy")
features = ["temperature", "relative humidity", "light", "C02", "humidity"]
# Extract the numpy arrays from the data frame
X = room[features].values
y = room.occupancy.values
# Create a new figure and axes
_, ax = plt.subplots()
# Create a cross-validation strategy
cv = StratifiedKFold(12)
# Create the cv score visualizer
oz = CVScores(
MultinomialNB(), ax=ax, cv=cv, scoring='f1_weighted'
)
oz.fit(X, y)
oz.poof()
Explanation: Classification
End of explanation
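# For comparison, a brief sketch of computing the same per-fold scores directly with
# scikit-learn's cross_val_score; CVScores is essentially a visualization of scores
# like these together with their mean.
from sklearn.model_selection import cross_val_score
scores = cross_val_score(MultinomialNB(), X, y, cv=cv, scoring='f1_weighted')
print(scores)
print("mean f1_weighted: {:.3f}".format(scores.mean()))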
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold
energy = load_data("energy")
targets = ["heating load", "cooling load"]
features = [col for col in energy.columns if col not in targets]
X = energy[features]
y = energy[targets[1]]
# Create a new figure and axes
_, ax = plt.subplots()
cv = KFold(12)
oz = CVScores(
Ridge(), ax=ax, cv=cv, scoring='r2'
)
oz.fit(X, y)
oz.poof()
Explanation: Regression
End of explanation |
13,965 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation. | Python Code:
# Run some setup code for this notebook.
k-Nearest Neighbor (kNN) exercise
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
Explanation: k-Nearest Neighbor (kNN) exercise
Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the assignments page on the course website.
The kNN classifier consists of two stages:
During training, the classifier takes the training data and simply remembers it
During testing, kNN classifies every test image by comparing to all training images and transferring the labels of the k most similar training examples
The value of k is cross-validated
In this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
End of explanation
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
Explanation: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
First we must compute the distances between all test examples and all train examples.
Given these distances, for each test example we find the k nearest examples and have them vote for the label
Let's begin with computing the distance matrix between all training and test examples. For example, if there are Ntr training examples and Nte test examples, this stage should result in a Nte x Ntr matrix where each element (i,j) is the distance between the i-th test and j-th train example.
First, open cs231n/classifiers/k_nearest_neighbor.py and implement the function compute_distances_two_loops that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
End of explanation
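# Aside: an independent toy sketch (not the assignment's solution file) of computing all
# pairwise Euclidean distances without explicit Python loops, using the expansion
# ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2.
A = np.random.randn(3, 5)   # stand-in "test" points
B = np.random.randn(4, 5)   # stand-in "train" points
sq = np.sum(A**2, axis=1)[:, np.newaxis] - 2 * A.dot(B.T) + np.sum(B**2, axis=1)
toy_dists = np.sqrt(np.maximum(sq, 0))   # clip tiny negative round-off before the sqrt
print toy_dists.shape   # (3, 4), analogous to the Nte x Ntr matrix described above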
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Inline Question #1: Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
What in the data is the cause behind the distinctly bright rows?
What causes the columns?
Your Answer: fill this in.
End of explanation
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5:
End of explanation
# Now let's speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
Call a function f with args and return the time (in seconds) that it took to execute.
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
Explanation: You should expect to see a slightly better performance than with k = 1.
End of explanation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
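# Illustrative aside (not the required solution): numpy's array_split, mentioned in the
# hint below, splits an array into num_folds roughly equal pieces, e.g.:
example_folds = np.array_split(np.arange(10), num_folds)
print [fold.shape for fold in example_folds]   # five chunks of two elements each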
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
pass
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 1
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Explanation: Cross-validation
We have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
End of explanation |
13,966 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin: Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
Returns accuracy score for input truth and predictions.
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
Model with no features. Always predicts a passenger did not survive.
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Sex')
Explanation: Answer: Predictions generated by predictions_0 function have an accuracy of 61.62%.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the visuals.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
Model with one feature:
- Predict a passenger survived if they are female.
predictions = []
for _, passenger in data.iterrows():
predictions.append(passenger['Sex'] == 'female')
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
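# Optional aside (added for illustration): the same rule can be written without an
# explicit loop; this is equivalent to predictions_1 but faster on large DataFrames.
vectorized_predictions_1 = (data['Sex'] == 'female').astype(int)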
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: Predictions generated by predictions_1 function have an accuracy of 78.68%.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
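# Added exploration aside (hypothetical helper code, not required by the project):
# bucketing male passengers by age and averaging the outcomes gives, numerically,
# the same pattern the plot above shows visually. The bin edges are an assumption.
males = data['Sex'] == 'male'
age_bins = pd.cut(data.loc[males, 'Age'], [0, 10, 20, 40, 60, 80])
print outcomes[males].groupby(age_bins).mean()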
def predictions_2(data):
""" Model with two features:
- Predict a passenger survived if they are female.
- Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
if (passenger['Sex'] == 'female') or (passenger['Age'] < 10):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Sex', ["Pclass == 3"])
Explanation: Answer: Predictions generated by predictions_2 function have an accuracy of 79.35%.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
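# Added exploration aside (illustrative): a quick numerical summary of survival
# rates across class and sex can suggest which conditional splits are worth trying.
print outcomes.groupby([data['Pclass'], data['Sex']]).mean()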
def predictions_3(data):
""" Model considering multiple features with an accuracy of at least 80%:
- If a passenger travels first or second class, they survive if either of the conditions below is true:
- their sex is female;
- their age is smaller than 10;
- If a passenger travels third class, they survive if their age is smaller than 10 and at most 2 of their siblings are aboard. """
predictions = []
for _, passenger in data.iterrows():
if (passenger['Pclass'] != 3) and (passenger['Sex'] == 'female' or passenger['Age'] < 10):
predictions.append(1)
elif (passenger['Pclass'] == 3) and (passenger['Age'] < 10 and passenger['SibSp'] < 3):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation |
13,967 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Explaining quantitative measures of fairness
This hands-on article connects explainable AI methods with fairness measures and shows how modern explainability methods can enhance the usefulness of quantitative fairness metrics. By using SHAP (a popular explainable AI tool) we can decompose measures of fairness and allocate responsibility for any observed disparity among each of the model's input features. Explaining these quantitative fairness metrics can reduce the concerning tendency to rely on them as opaque standards of fairness, and instead promote their informed use as tools for understanding how model behavior differs between groups.
Quantitative fairness metrics seek to bring mathematical precision to the definition of fairness in machine learning [1]. Definitions of fairness however are deeply rooted in human ethical principles, and so on value judgements that often depend critically on the context in which a machine learning model is being used. This practical dependence on value judgements manifests itself in the mathematics of quantitative fairness measures as a set of trade-offs between sometimes mutually incompatible definitions of fairness [2]. Since fairness relies on context-dependent value judgements it is dangerous to treat quantitative fairness metrics as opaque black-box measures of fairness, since doing so may obscure important value judgment choices.
<!--This article covers
Step1: <!--## Scenario A
Step2: Now we can use SHAP to decompose the model output among each of the model's input features and then compute the demographic parity difference on the component attributed to each feature. As noted above, because the SHAP values sum up to the model's output, the sum of the demographic parity differences of the SHAP values for each feature sum up to the demographic parity difference of the whole model. This means that the sum of the bars below equals the bar above (the demographic parity difference of our baseline scenario model).
Step3: Scenario B
Step4: If this were a real application, this demographic parity difference might trigger an in-depth analysis of the model to determine what might be causing the disparity. While this investigation is challenging given just a single demographic parity difference value, it is much easier given the per-feature demographic parity decomposition based on SHAP. Using SHAP we can see there is a significant bias coming from the reported income feature that is increasing the risk of women disproportionately to men. This allows us to quickly identify which feature has the reporting bias that is causing our model to violate demographic parity
Step5: It is important to note at this point how our assumptions can impact the interpretation of SHAP fairness explanations. In our simulated scenario we know that women actually have identical income profiles to men, so when we see that the reported income feature is biased lower for women than for men, we know that has come from a bias in the measurement errors in the reported income feature. The best way to address this problem would be figure out how to debias the measurement errors in the reported income feature. Doing so would create a more accurate model that also has less demographic disparity. However, if we instead assume that women actually are making less money than men (and it is not just a reporting error), then we can't just "fix" the reported income feature. Instead we have to carefully consider how best to account for real differences in default risk between two protected groups. It is impossible to determine which of these two situations is happening using just the SHAP fairness explanation, since in both cases the reported income feature will be responsible for an observed disparity between the predicted risks of men and women.
Scenario C
Step6: And as we would hope, the SHAP explanations correctly highlight the late payments feature as the cause of the model's demographic parity difference, as well as the direction of the effect
Step7: Scenario D
Step8: We also see no evidence of any demographic parity differences in the SHAP explanations
Step9: Scenario E
Step10: When we explain the demographic parity difference with SHAP we see that, as expected, the brand X purchase score feature drives the difference. In this case it is not because we have a bias in how we measure the brand X purchase score feature, but rather because we have a bias in our training label that gets captured by any input features that are sufficiently correlated with sex
Step11: Scenario F
Step12: However, if we look at the SHAP explanation of the demographic parity difference we clearly see both (counteracting) biases
Step13: Identifying multiple potentially offsetting bias effects can be important since while on average there is no disparate impact on men or women, there is disparate impact on individuals. For example, in this simulation women who have not shopped at brand X will receive a lower credit score than they should have because of the bias present in job history reporting.
How introducing a protected feature can help distinguish between label bias and feature bias
In scenario F we were able to pick apart two distinct forms of bias, one coming from job history under-reporting and one coming from default rate under-reporting. However, the bias from default rate under-reporting was not attributed to the default rate label, but rather to the brand X purchase score feature that happened to be correlated with sex. This still leaves us with some uncertainty about the true sources of demographic parity differences, since any difference attributed to an input feature could be due to an issue with that feature, or due to an issue with the training labels.
It turns out that in this case we can help disentangle label bias from feature bias by introducing sex as a variable directly into the model. The goal of introducing sex as an input feature is to cause the label bias to fall entirely on the sex feature, leaving the feature biases untouched. So we can then distinguish between label biases and feature biases by comparing the results of scenario F above to our new scenario G below. This of course creates an even stronger demographic parity difference than we had before, but that is fine since our goal here is not bias mitigation, but rather bias understanding.
Step14: The SHAP explanation for scenario G shows that all of the demographic parity difference that used to be attached to the brand X purchase score feature in scenario F has now moved to the sex feature, while none of the demographic parity difference attached to the job history feature in scenario F has moved. This can be interpreted to mean that all of the disparity attributed to brand X purchase score in scenario F was due to label bias, while all of the disparity attributed to job history in scenario F was due to feature bias. | Python Code:
# here we define a function that we can call to execute our simulation under
# a variety of different alternative scenarios
import scipy as sp
import numpy as np
import matplotlib.pyplot as pl
import pandas as pd
import shap
%config InlineBackend.figure_format = 'retina'
def run_credit_experiment(N, job_history_sex_impact=0, reported_income_sex_impact=0, income_sex_impact=0,
late_payments_sex_impact=0, default_rate_sex_impact=0,
include_brandx_purchase_score=False, include_sex=False):
np.random.seed(0)
sex = np.random.randint(0, 2, N) == 1 # randomly half men and half women
# four hypothetical causal factors influence customer quality
# they are all scaled to the same units between 0-1
income_stability = np.random.rand(N)
income_amount = np.random.rand(N)
if income_sex_impact > 0:
income_amount -= income_sex_impact/90000 * sex * np.random.rand(N)
income_amount -= income_amount.min()
income_amount /= income_amount.max()
spending_restraint = np.random.rand(N)
consistency = np.random.rand(N)
# intuitively this product says that high customer quality comes from simultaneously
# being strong in all factors
customer_quality = income_stability * income_amount * spending_restraint * consistency
# job history is a random function of the underlying income stability feature
job_history = np.maximum(
10 * income_stability + 2 * np.random.rand(N) - job_history_sex_impact * sex * np.random.rand(N)
, 0)
# reported income is a random function of the underlying income amount feature
reported_income = np.maximum(
10000 + 90000*income_amount + np.random.randn(N) * 10000 - \
reported_income_sex_impact * sex * np.random.rand(N)
, 0)
# credit inquiries is a random function of the underlying spending restraint and income amount features
credit_inquiries = np.round(6 * np.maximum(-spending_restraint + income_amount, 0)) + \
np.round(np.random.rand(N) > 0.1)
# late payments are a random function of the underlying consistency and income stability features
late_payments = np.maximum(
np.round(3 * np.maximum((1-consistency) + 0.2 * (1-income_stability), 0)) + \
np.round(np.random.rand(N) > 0.1) - np.round(late_payments_sex_impact * sex * np.random.rand(N))
, 0)
# bundle everything into a data frame and define the labels based on the default rate and customer quality
X = pd.DataFrame({
"Job history": job_history,
"Reported income": reported_income,
"Credit inquiries": credit_inquiries,
"Late payments": late_payments
})
default_rate = 0.40 + sex * default_rate_sex_impact
y = customer_quality < np.percentile(customer_quality, default_rate * 100)
if include_brandx_purchase_score:
brandx_purchase_score = sex + 0.8 * np.random.randn(N)
X["Brand X purchase score"] = brandx_purchase_score
if include_sex:
X["Sex"] = sex + 0
# build model
import xgboost
model = xgboost.XGBClassifier(max_depth=1, n_estimators=500, subsample=0.5, learning_rate=0.05)
model.fit(X, y)
# build explanation
import shap
explainer = shap.TreeExplainer(model, shap.sample(X, 100))
shap_values = explainer.shap_values(X)
return shap_values, sex, X, explainer.expected_value
Explanation: Explaining quantitative measures of fairness
This hands-on article connects explainable AI methods with fairness measures and shows how modern explainability methods can enhance the usefulness of quantitative fairness metrics. By using SHAP (a popular explainable AI tool) we can decompose measures of fairness and allocate responsibility for any observed disparity among each of the model's input features. Explaining these quantitative fairness metrics can reduce the concerning tendency to rely on them as opaque standards of fairness, and instead promote their informed use as tools for understanding how model behavior differs between groups.
Quantitative fairness metrics seek to bring mathematical precision to the definition of fairness in machine learning [1]. Definitions of fairness however are deeply rooted in human ethical principles, and so on value judgements that often depend critically on the context in which a machine learning model is being used. This practical dependence on value judgements manifests itself in the mathematics of quantitative fairness measures as a set of trade-offs between sometimes mutually incompatible definitions of fairness [2]. Since fairness relies on context-dependent value judgements it is dangerous to treat quantitative fairness metrics as opaque black-box measures of fairness, since doing so may obscure important value judgment choices.
<!--This article covers:
1. How SHAP can be used to explain various measures of model fairness.
2. What SHAP fairness explanations look like in various simulated scenarios.
3. How introducing a protected feature can help distinguish between label bias vs. feature bias.
4. Things you can't learn from a SHAP fairness explanation.-->
How SHAP can be used to explain various measures of model fairness
This article is not about how to choose the "correct" measure of model fairness, but rather about explaining whichever metric you have chosen. Which fairness metric is most appropriate depends on the specifics of your context, such as what laws apply, how the output of the machine learning model impacts people, and what value you place on various outcomes and hence tradeoffs. Here we will use the classic demographic parity metric, since it is simple and closely connected to the legal notion of disparate impact. The same analysis can also be applied to other metrics such as decision theory cost, equalized odds, equal opportunity, or equal quality of service. Demographic parity states that the output of the machine learning model should be equal between two or more groups. The demographic parity difference is then a measure of how much disparity there is between model outcomes in two groups of samples.
Since SHAP decomposes the model output into feature attributions with the same units as the original model output, we can first decompose the model output among each of the input features using SHAP, and then compute the demographic parity difference (or any other fairness metric) for each input feature seperately using the SHAP value for that feature. Because the SHAP values sum up to the model's output, the sum of the demographic parity differences of the SHAP values also sum up to the demographic parity difference of the whole model.
<!--To will not explain
The danger of treating quantitative fairness metrics as opaque, black-box measures of fairness is strikingly similar to a related problem of treating machine learning models themselves as opaque, black-box predictors. While using a black-box is reasonable in many cases, important problems and assumptions can often be hidden (and hence ignored) when users don't understand the reasons behind a model's behavior \cite{ribeiro2016should}. In response to this problem many explainable AI methods have been developed to help users understand the behavior of modern complex models \cite{vstrumbelj2014explaining,ribeiro2016should,lundberg2017unified}. Here we explore how to apply explainable AI methods to quantitative fairness metrics.-->
What SHAP fairness explanations look like in various simulated scenarios
To help us explore the potential usefulness of explaining quantitative fairness metrics we consider a simple simulated scenario based on credit underwriting. In our simulation there are four underlying factors that drive the risk of default for a loan: income stability, income amount, spending restraint, and consistency. These underlying factors are not observed, but they variously influence four different observable features: job history, reported income, credit inquiries, and late payments. Using this simulation we generate random samples and then train a non-linear XGBoost classifier to predict the probability of default. The same process also works for any other model type supported by SHAP, just remember that explanations of more complicated models hide more of the model's details.
By introducing sex-specific reporting errors into a fully specified simulation we can observe how the biases caused by these errors are captured by our chosen fairness metric. In our simulated case the true labels (will default on a loan) are statistically independent of sex (the sensitive class we use to check for fairness). So any disparity between men and women means one or both groups are being modeled incorrectly due to feature measurement errors, labeling errors, or model errors. If the true labels you are predicting (which might be different than the training labels you have access to) are not statistically independent of the sensitive feature you are considering, then even a perfect model with no errors would fail demographic parity. In these cases fairness explanations can help you determine which sources of demographic disparity are valid.
<!--This article explores how we can use modern explainable AI tools to enhance traditional quantitative measures of model fairness. It is practical and hands-on, so feel free to follow along in the associated [notebook]. I assume you have a basic understanding of how people measure fairness for machine learning models. If you have never before considered fairness in the context of machine learning, then I recommend starting with a basic introduction such as XXX. I am not writing this Here I do not beforeIt is not meant to be a definitite One futher disclaimer is that as the author of SHAP (a popular explainable AI tool) I am very familar with the strengths and weaknesses of explainable AI tools, but I do not consider myself a fairness expert. So consider this a thought-provoking guide on how explainable AI tools can enhance quantitative measures of model fairness
I consider myself fairly well informed about explainable AI, but I
Questions about fairness and equal treatment naturally arise whenever the outputs of a machine learning model impact people. For sensitive use-cases such as credit underwriting or crime prediction there are even laws that govern certain aspects of fairness. While fairness issues are not new, the rising popularily of machine learning model
Legal fairness protections are even legally encorced for sensitive use-cases such as credit underwriting or crime prediction, but is also important in many other situations such as quality of service, or you might not initially to consider whenever you are using m Quantifying the fairness of a machine learning model has recently received considerable attention in the research community, and many quantitative fairness metrics have been proposed. In parallel to this work on fairness, explaining the outputs of a machine learning model has also received considerable research attention. %Explainability is intricately connected to fairness, since good explanations enable users to understand a model's behavior and so judge its fairness.
Here we connect explainability methods with fairness measures and show how recent explainability methods can enhance the usefulness of quantitative fairness metrics by decomposing them among the model's input features. Explaining quantitative fairness metrics can reduce our tendency to rely on them as opaque standards of fairness, and instead promote their informed use as tools for understanding model behavior between groups.
This notebook explores how SHAP can be used to explain quantitative measures of fairness, and so enhance their usefulness. To do this we consider a simple simulated scenario based on credit underwriting. In the simulation below there are four underlying factors that drive the risk of default for a loan: income stability, income amount, spending restraint, and consistency. These underlying factors are not observed, but they influence four different observable features in various ways: job history, reported income, credit inquiries, and late payments. Using this simulation we generate random samples and then train a non-linear gradient boosting tree classifier to predict the probability of default.
By introducing sex-specific reporting errors into the simulation we can observe how the biases caused by these errors are captured by fairness metrics. For this analysis we use the classic statistical parity metric, though the same analysis works with other metrics. Note that for a more detailed description of fairness metrics you can check out the [fairlearn package's documentation](https://github.com/fairlearn/fairlearn/blob/master/TERMINOLOGY.md#fairness-of-ai-systems).-->
End of explanation
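# Added helper (a minimal sketch for illustration, not part of the original notebook):
# the demographic parity difference discussed above is simply the difference in mean
# model output between two groups. `values` is any array of model outputs (or SHAP
# values for one feature) and `group_mask` is a boolean array marking one group.
def demographic_parity_difference(values, group_mask):
    return values[group_mask].mean() - values[~group_mask].mean()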
N = 10000
shap_values_A, sex_A, X_A, ev_A = run_credit_experiment(N)
model_outputs_A = ev_A + shap_values_A.sum(1)
glabel = "Demographic parity difference\nof model output for women vs. men"
xmin = -0.8
xmax = 0.8
shap.group_difference_plot(shap_values_A.sum(1), sex_A, xmin=xmin, xmax=xmax, xlabel=glabel)
Explanation: <!--## Scenario A: No reporting errors
As a baseline experiment we refrain from introducing any sex-specific reporting errors. This results in no significant statistical parity difference between the credit score of men and women:-->
Scenario A: No reporting errors
Our first experiment is a simple baseline check where we refrain from introducing any sex-specific reporting errors. While we could use any model output to measure demographic parity, we use the continuous log-odds score from a binary XGBoost classifier. As expected, this baseline experiment results in no significant demographic parity difference between the credit scores of men and women. We can see this by plotting the difference between the average credit score for women and men as a bar plot and noting that zero is close to the margin of error (note that negative values mean women have a lower average predicted risk than men, and positive values mean that women have a higher average predicted risk than men):
End of explanation
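# Added check (illustrative): the bar above is just the difference in mean model
# output for women vs. men, which can be computed directly from model_outputs_A and
# the boolean sex_A array defined above.
print(model_outputs_A[sex_A].mean() - model_outputs_A[~sex_A].mean())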
slabel = "Demographic parity difference\nof SHAP values for women vs. men"
shap.group_difference_plot(shap_values_A, sex_A, X_A.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
Explanation: Now we can use SHAP to decompose the model output among each of the model's input features and then compute the demographic parity difference on the component attributed to each feature. As noted above, because the SHAP values sum up to the model's output, the sum of the demographic parity differences of the SHAP values for each feature sum up to the demographic parity difference of the whole model. This means that the sum of the bars below equals the bar above (the demographic parity difference of our baseline scenario model).
End of explanation
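# Added consistency check (illustrative, not from the original notebook): because
# SHAP values are additive, the per-feature parity differences should sum to the
# whole-model parity difference (the expected value cancels in the difference).
per_feature_diff = shap_values_A[sex_A].mean(0) - shap_values_A[~sex_A].mean(0)
total_diff = model_outputs_A[sex_A].mean() - model_outputs_A[~sex_A].mean()
print(per_feature_diff.sum(), total_diff)  # these two numbers should agree closely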
shap_values_B, sex_B, X_B, ev_B = run_credit_experiment(N, reported_income_sex_impact=30000)
model_outputs_B = ev_B + shap_values_B.sum(1)
shap.group_difference_plot(shap_values_B.sum(1), sex_B, xmin=xmin, xmax=xmax, xlabel=glabel)
Explanation: Scenario B: An under-reporting bias for women's income
In our baseline scenario we designed a simulation where sex had no impact on any of the features or labels used by the model. Here in scenario B we introduce an under-reporting bias for women's income into the simulation. The point here is not how realistic it would be for women's income to be under-reported in the real-world, but rather how we can identify that a sex-specific bias has been introduced and understand where it came from. By plotting the difference in average model output (default risk) between women and men we can see that the income under-reporting bias has created a significant demographic parity difference where women now have a higher risk of default than men:
End of explanation
shap.group_difference_plot(shap_values_B, sex_B, X_B.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
Explanation: If this were a real application, this demographic parity difference might trigger an in-depth analysis of the model to determine what might be causing the disparity. While this investigation is challenging given just a single demographic parity difference value, it is much easier given the per-feature demographic parity decomposition based on SHAP. Using SHAP we can see there is a significant bias coming from the reported income feature that is increasing the risk of women disproportionately to men. This allows us to quickly identify which feature has the reporting bias that is causing our model to violate demographic parity:
End of explanation
shap_values_C, sex_C, X_C, ev_C = run_credit_experiment(N, late_payments_sex_impact=2)
model_outputs_C = ev_C + shap_values_C.sum(1)
shap.group_difference_plot(shap_values_C.sum(1), sex_C, xmin=xmin, xmax=xmax, xlabel=glabel)
Explanation: It is important to note at this point how our assumptions can impact the interpretation of SHAP fairness explanations. In our simulated scenario we know that women actually have identical income profiles to men, so when we see that the reported income feature is biased lower for women than for men, we know that has come from a bias in the measurement errors in the reported income feature. The best way to address this problem would be to figure out how to debias the measurement errors in the reported income feature. Doing so would create a more accurate model that also has less demographic disparity. However, if we instead assume that women actually are making less money than men (and it is not just a reporting error), then we can't just "fix" the reported income feature. Instead we have to carefully consider how best to account for real differences in default risk between two protected groups. It is impossible to determine which of these two situations is happening using just the SHAP fairness explanation, since in both cases the reported income feature will be responsible for an observed disparity between the predicted risks of men and women.
Scenario C: An under-reporting bias for women's late payments
To verify that SHAP demographic parity explanations can correctly detect disparities regardless of the direction of effect or source feature, we repeat our previous experiment but instead of an under-reporting bias for income, we introduce an under-reporting bias for women's late payment rates. This results in a significant demographic parity difference for the model's output where now women have a lower average default risk than men:
End of explanation
shap.group_difference_plot(shap_values_C, sex_C, X_C.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
Explanation: And as we would hope, the SHAP explanations correctly highlight the late payments feature as the cause of the model's demographic parity difference, as well as the direction of the effect:
End of explanation
shap_values_D, sex_D, X_D, ev_D = run_credit_experiment(N, default_rate_sex_impact=-0.1) # 20% change
model_outputs_D = ev_D + shap_values_D.sum(1)
shap.group_difference_plot(shap_values_D.sum(1), sex_D, xmin=xmin, xmax=xmax, xlabel=glabel)
Explanation: Scenario D: An under-reporting bias for women's default rates
The experiments above focused on introducing reporting errors for specific input features. Next we consider what happens when we introduce reporting errors on the training labels through an under-reporting bias on women's default rates (which means defaults are less likely to be reported for women than men). Interestingly, for our simulated scenario this results in no significant demographic parity differences in the model's output:
End of explanation
shap.group_difference_plot(shap_values_D, sex_D, X_D.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
Explanation: We also see no evidence of any demographic parity differences in the SHAP explanations:
End of explanation
shap_values_E, sex_E, X_E, ev_E = run_credit_experiment(
N, default_rate_sex_impact=-0.1, include_brandx_purchase_score=True
)
model_outputs_E = ev_E + shap_values_E.sum(1)
shap.group_difference_plot(shap_values_E.sum(1), sex_E, xmin=xmin, xmax=xmax, xlabel=glabel)
Explanation: Scenario E: An under-reporting bias for women's default rates, take 2
It may at first be surprising that no demographic parity differences were caused when we introduced an under-reporting bias on women's default rates. This is because none of the four features in our simulation are significantly correlated with sex, so none of them could be effectively used to model the bias we introduced into the training labels. If we now instead provide a new feature (brand X purchase score) to the model that is correlated with sex, then we see a demographic parity difference emerge as that feature is used by the model to capture the sex-specific bias in the training labels:
End of explanation
shap.group_difference_plot(shap_values_E, sex_E, X_E.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
Explanation: When we explain the demographic parity difference with SHAP we see that, as expected, the brand X purchase score feature drives the difference. In this case it is not because we have a bias in how we measure the brand X purchase score feature, but rather because we have a bias in our training label that gets captured by any input features that are sufficiently correlated with sex:
End of explanation
shap_values_F, sex_F, X_F, ev_F = run_credit_experiment(
N, default_rate_sex_impact=-0.1, include_brandx_purchase_score=True,
job_history_sex_impact=2
)
model_outputs_F = ev_F + shap_values_F.sum(1)
shap.group_difference_plot(shap_values_F.sum(1), sex_F, xmin=xmin, xmax=xmax, xlabel=glabel)
Explanation: Scenario F: Teasing apart multiple under-reporting biases
When there is a single cause of reporting bias then both the classic demographic parity test on the model's output, and the SHAP explanation of the demographic parity test capture the same bias effect (though the SHAP explanation can often have more statistical significance since it isolates the feature causing the bias). But what happens when there are multiple causes of bias occurring in a dataset? In this experiment we introduce two such biases, an under-reporting of women's default rates, and an under-reporting of women's job history. These biases tend to offset each other in the global average and so a demographic parity test on the model's output shows no measurable disparity:
End of explanation
shap.group_difference_plot(shap_values_F, sex_F, X_F.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
Explanation: However, if we look at the SHAP explanation of the demographic parity difference we clearly see both (counteracting) biases:
End of explanation
shap_values_G, sex_G, X_G, ev_G = run_credit_experiment(
N, default_rate_sex_impact=-0.1, include_brandx_purchase_score=True,
job_history_sex_impact=2, include_sex=True
)
model_outputs_G = ev_G + shap_values_G.sum(1)
shap.group_difference_plot(shap_values_G.sum(1), sex_G, xmin=xmin, xmax=xmax, xlabel=glabel)
Explanation: Identifying multiple potentially offsetting bias effects can be important since while on average there is no disparate impact on men or women, there is disparate impact on individuals. For example, in this simulation women who have not shopped at brand X will receive a lower credit score than they should have because of the bias present in job history reporting.
How introducing a protected feature can help distinguish between label bias and feature bias
In scenario F we were able to pick apart two distinct forms of bias, one coming from job history under-reporting and one coming from default rate under-reporting. However, the bias from default rate under-reporting was not attributed to the default rate label, but rather to the brand X purchase score feature that happened to be correlated with sex. This still leaves us with some uncertainty about the true sources of demographic parity differences, since any difference attributed to an input feature could be due to an issue with that feature, or due to an issue with the training labels.
It turns out that in this case we can help disentangle label bias from feature bias by introducing sex as a variable directly into the model. The goal of introducing sex as an input feature is to cause the label bias to fall entirely on the sex feature, leaving the feature biases untouched. So we can then distinguish between label biases and feature biases by comparing the results of scenario F above to our new scenario G below. This of course creates an even stronger demographic parity difference than we had before, but that is fine since our goal here is not bias mitigation, but rather bias understanding.
End of explanation
shap.group_difference_plot(shap_values_G, sex_G, X_G.columns, xmin=xmin, xmax=xmax, xlabel=slabel)
Explanation: The SHAP explanation for scenario G shows that all of the demographic parity difference that used to be attached to the brand X purchase score feature in scenario F has now moved to the sex feature, while none of the demographic parity difference attached to the job history feature in scenario F has moved. This can be interpreted to mean that all of the disparity attributed to brand X purchase score in scenario F was due to label bias, while all of the disparity attributed to job history in scenario F was due to feature bias.
End of explanation |
13,968 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
GEOL351 Lab 9
Multiple Climate Equilibria
Step1: Outgoing Long-wave radiation
Let us define a function that will compute outgoing longwave emission from the planet for a given emission temperature $T$ (dropping the subscript for convenience).
Step2: The radiating pressure pRad, in mb, is passed as the
optional second parameter. It can be used to
pass parameters of other OLR expressions, e.g. CO2 concentrations. Let's see how this depends on T.
Incoming solar radiation
Similarly, let's define a function for the albedo, $\alpha(T)$.
Step3: Notice how the function uses "global" arguments alpha_ice, alpha_ice, T1, T2. We need to define those for the function to work. Let's do this, along with defining pRad.
Step4: The next cell generates lists of values for OLR, the next flux, and the incoming radiation given a range of temperature values from 200 to 340 K.
Step5: Time to plot a few things! First, the albedo
Step6: Energy Balance
Next, let's look at energy balance. What's coming in, what's going out?
Step7: Another way to look at this is to graph the difference between these curves (which we called $G(T)$ above)
Step8: Question 2
Step9: Once again, wherever this balance function (blue curve) inersects the incident solar radiation (dashed green curve), we get one or more equilibrium temperature(s). From this it is easy to find the values of the solar constant for which we can have 1, 2 or 3 equilibria. Note that some of the branches in this diagram are unstable. For instance, as you vary L from 1370 up to 3000, say, at some point you will leave the lower branch and land abruptly and without warning, on the upper branch. This is one reason why bifurcations are relevant to climate risk assessment.
Question 4
The current greenhouse effect yields a radiative pressure pRad in the neighborhood of 670mb. Repeat the calculations above for this value of $p_{Rad}$, (it will be cleaner if you copy and paste the relevant cells below).
For which value of L, approximately, would we enter a stable snowball state?
(to answer this, redraw the bifurcation diagram above with $p_{Rad}=670$mb. For $L = 1370 W.m^{-2}$ you should find 3 possible states. Assume that you are on the upper, toasty branch. For which value of $L$ would you fall onto the lower, icy branch?)
Step10: Answer 4
write your answer here
Finally, we could ask what kind of climate would be obtained by varying the atmosphere's opacity to outgoing longwave radiation (i.e. its concentration of greenhouse gases). We seek a stability diagram similar to the one above, but cast in terms of radiative pressure $p_{rad}$ instead of $L$.
Question 5
What is the effect of lowering $p_{rad}$ on the surface temperature $T_s$?
You may answer using purely mathematical, physical, or heuristic arguments (one way to do this is to vary $p_{rad}$ and see how it affects the OLR).
Answer 5
write your answer here
Greenhouse Bifurcation
To draw the bifurcation diagram in terms of $p_{rad}$ we need a bit more machinery. Let's define a function that solves for radiative pressure given the solar flux and the temperature. (if you really want to know, it uses Newton-Raphson's method, but you needn't worry about that) | Python Code:
%matplotlib inline
# ensures that graphics display in the notebook
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import sys
sys.path.append("./CoursewareModules")
from ClimateUtilities import * # import Ray Pierrehumbert's climate utilities
import phys
import seaborn as sns
Explanation: GEOL351 Lab 9
Multiple Climate Equilibria: Icehouse vs Hothouse
Student Name : [WRITE YOUR NAME HERE]
In this lab, you will explore, tweak and tinker with the zero-dimensional climate model seen in class. This model, and much of the following lab, is strongly inspired by Ray Pierrehumbert's excellent book, Principles of Planetary Climate.
We will be working in the iPython / Jupyter notebook system. We like these because they are a form of literate programming in which we can mix textbook instruction and explanations with code that can also be run and edited. You will be asked to run some cells, tweak parameters and write down results. Please save the resulting notebook once you are done; this is what you will be handing out.
Think you can't code? Be reassured, we will guide you by the hand. The beauty of iPython notebooks is that they integrate the code very naturally in a scholarly document, and the most you will have to do is sliughtly edit the code and watch how this affects the results. This should allow you to focus on the dynamics, not so much the informatics.
Let's start with a little preliminary learning.
Part 0: iPython notebooks
Python
Haven't you always wanted to code? Thanks to Python, anyone with a modicum of common sense and a keyboard can do so. Python is a high-level language whose syntax makes it very difficult to be a bad programmer. For a few of the many reasons why you should pick up Python before your education is over, read this.
Markdown
You can document your iPython notebooks by making some cells into Markdown cells. Markdown is a way of formatting text that is supposed to be almost as readable un-rendered as when it is tidied up. If you look at the Markdown cells as source code (by double-clicking on them) you will see how the raw text looks. If you can deal with the unwieldy and ugly Microsoft Word, you can certainly handle Markdown.
You will be expected to turn in your lab for grading as a notebook, so you should write all your notes, observations, and conclusions in the notebook.
Maths
In a browser, you can render beautiful equations using a JavaScript tool called MathJax, which is built into the iPython notebooks. It is based on a LaTeX engine. You can add symbols to your text such as $\pi$ and $\epsilon$ if you use the \$ signs to indicate where your equations begin and end, and you know enough $\LaTeX$ to get by.
Equations in 'display' mode are written like this (again look at the source for this cell to see what is used)
\[ e^{i\pi} + 1 = 0 \]
or even like this
\begin{equation}
%%
\nabla^4 \psi = \frac{\partial T}{\partial x}
%%
\end{equation}
Go back to the rendered form of the cell by 'running' it.
A zero-dimensional climate model
The dynamics are governed by conservation of energy, which takes the form of a nonlinear ordinary differential equation:
$ C_s \frac{d T_e}{dt} = \left(1 -\alpha(T_e) \right) \frac{S_{\circ}}{4} - \sigma {T_e}^4 \equiv G(T_e)$ [Eq (1)]
where $T_e$ is the emission temperature and the albedo $\alpha$ is a nonlinear function of the climate state (indexed by $T_e$, which is therefore the state variable for the system). $\alpha$ is given by:
$\alpha(T) = \begin{cases}
\alpha_i & \text{for } T \leq T_i \\
\alpha_0 + (\alpha_i - \alpha_0) \frac{(T-T_0)^2}{(T_i-T_0)^2} & \text{for } T_i < T < T_0 \\
\alpha_0 & \text{for } T \geq T_0
\end{cases} $ [Eq (2)]
By definition, the emission of radiation to outer space takes place at some height $z_\text{rad}$ above the ground, corresponding to a pressure $p_\text{rad}$. The greenhouse effect can be understood as a tendency to raise the emission level, hence lower $p_\text{rad}$. Choosing $p_\text{rad} = 670$ mb yields a climate not too dissimilar to our current one.
Question 1: Is the planet's emission of electromagnetic radiation in the longwave or shortwave domain? What does this depend on?
Answer 1 : PLEASE WRITE YOUR ANSWER HERE
The other relation we need is the relationship between the surface temperature $T_s$ and $T_e$:
$T_s = T_e \left(\frac{p_s}{p_\text{rad}} \right)^{\kappa}$ where $\kappa = 2/7$
If we assume that $p_s$ and $p_\text{rad}$ are both constant, working with $T_e$ is equivalent to working with $T_s$, which we shall do from now on.
Now, in order to get going, we need to set up a few preliminaries. First, let's import other bits and pieces of Python code to save us some time.
End of explanation
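# Added helper (a hedged sketch, not part of the original lab): converts an emission
# temperature T_e into a surface temperature T_s using the relation quoted above,
# T_s = T_e * (p_s / p_rad)**(2/7). A surface pressure of 1000 mb is assumed here.
def surface_temperature(Te, pRad, ps=1000.):
    return Te * (ps / pRad) ** (2. / 7.)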
def OLR(T,param=None):
pRad = param
return phys.sigma * (T**4.)*(pRad/1000.)**(4.*2./7.)
Explanation: Outgoing Long-wave radiation
Let us define a function that will compute outgoing longwave emission from the planet for a given emission temperature $T$ (dropping the subscript for convenience).
End of explanation
def albedo(T):
if T < T1:
return alpha_ice
elif (T >= T1)&(T<=T2):
r = (T-T2)**2/(T2-T1)**2
return alpha_0 + (alpha_ice - alpha_0)*r
else:
return alpha_0
Explanation: The radiating pressure pRad, in mb, is passed as the
optional second parameter. It can be used to
pass parameters of other OLR expressions, e.g. CO2 concentrations. Let's see how this depends on T.
Incoming solar radiation
Similarly, let's define a function for the albedo, $\alpha(T)$.
End of explanation
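# Quick look (an added illustration): evaluate the OLR function at a few emission
# temperatures for a nominal radiating pressure of 1000 mb, to see how strongly the
# outgoing emission grows with T (roughly like T**4).
for T_demo in [220., 260., 300.]:
    print("T = %.0f K, OLR = %.1f W/m**2" % (T_demo, OLR(T_demo, 1000.)))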
L = 1.*1370.
alpha_ice = .6
alpha_0 = .2
T1 = 260. # in Kelvins
T2 = 290. # in Kelvins
pRad = 1000. # in mb
Explanation: Notice how the function uses "global" arguments alpha_ice, alpha_0, T1, T2. We need to define those for the function to work. Let's do this, along with defining pRad.
End of explanation
Tlist = [200.+ 2.*i for i in range(70)]
SabsList = []
OLRlist = []
NetFluxlist = []
Glist = []
aList = []
for T in Tlist:
aList.append(albedo(T)) # albedo function
SabsList.append((L/4.)*(1.-albedo(T))) # incident solar values
OLRlist.append(OLR(T,pRad)) # outgoing longwave emissions
NetFluxlist.append((L/4.)*(1.-albedo(T)) - OLR(T,pRad)) # net flux
Glist.append(4.*OLR(T,pRad)/(1.-albedo(T))) # balance function
Explanation: The next cell generates lists of values for OLR, the net flux, and the incoming radiation given a range of temperature values from 200 to 340 K.
End of explanation
c1 = Curve()
c1.addCurve(Tlist)
c1.addCurve(aList,'Albedo','Alb')
c1.Xlabel = 'Temperature (K)'
c1.Ylabel = 'Alpha'
c1.PlotTitle = 'Planetary Albedo'
w1 = plot(c1)
plt.ylim((0, 0.8))
Explanation: Time to plot a few things! First, the albedo:
End of explanation
c1 = Curve()
c1.addCurve(Tlist)
c1.addCurve(SabsList,'Sabs','Absorbed Solar')
c1.addCurve(OLRlist,'OLR','Outgoing Longwave')
c1.Xlabel = 'Temperature (K)'
c1.Ylabel = 'Flux ($W/m^2$)'
c1.PlotTitle = 'Energy Balance Diagram'
w1 = plot(c1)
Explanation: Energy Balance
Next, let's look at energy balance. What's coming in, what's going out?
End of explanation
c2 = Curve()
c2.addCurve(Tlist)
c2.addCurve(NetFluxlist,'Net','Net Flux')
c2.addCurve([0. for i in range(len(Glist))],'Zero','Equilibrium')
c2.Xlabel = 'Temperature (K)'
c2.Ylabel = 'Net Flux ($W/m^2$)'
c2.PlotTitle = 'Stability diagram'
w2 = plot(c2)
Explanation: Another way to look at this is to graph the difference between these curves (which we called $G(T)$ above)
End of explanation
c3 = Curve()
c3.addCurve(Tlist)
c3.addCurve(Glist,'G','Balance Function')
c3.addCurve([L for i in range(len(Glist))],'S','Incident Solar')
c3.switchXY = True #Switches axis so it looks like a hysteresis plot
c3.PlotTitle = 'Bifurcation diagram, pRad = %f mb'%pRad
c3.Xlabel = 'Surface Temperature (K)'
c3.Ylabel = 'Solar Constant ($W/m^2$)'
w3 = plot(c3)
Explanation: Question 2:
Solving graphically, for which values of the solar luminosity L do we get 1, 2 or 3 equilibria?
Answer 2:
Write your answer here
What you just did, informally, is to draw a bifurcation diagram for the system. Now we can be a lot smarter about this, and instead of typing like monkeys hoping to recreate Shakespeare (or Pierrehumbert as the case may be), we can try to solve for $G(T) = 0$.
Question 3:
Setting $G(T) = 0$, derive an analytical expression for $L_{eq}$, the value of $L$ for which incoming solar radiation balances exactly outgoing longwave radiation.
Answer 3:
Write your answer here
Solar Bifurcation
Next we plot this value as a function of $T$. However, let's introduce a little twist: $T$ of course, is a consequence of the value of L and pRad, not a cause of it. So we should switch the X and Y axes so that we get $T = f(L)$, not $ L = f(T)$.
End of explanation
# perform your new calculations here
Explanation: Once again, wherever this balance function (blue curve) intersects the incident solar radiation (dashed green curve), we get one or more equilibrium temperature(s). From this it is easy to find the values of the solar constant for which we can have 1, 2 or 3 equilibria. Note that some of the branches in this diagram are unstable. For instance, as you vary L from 1370 up to 3000, say, at some point you will leave the lower branch and land, abruptly and without warning, on the upper branch. This is one reason why bifurcations are relevant to climate risk assessment.
Question 4
The current greenhouse effect yields a radiative pressure pRad in the neighborhood of 670mb. Repeat the calculations above for this value of $p_{Rad}$, (it will be cleaner if you copy and paste the relevant cells below).
For which value of L, approximately, would we enter a stable snowball state?
(to answer this, redraw the bifurcation diagram above with $p_{Rad}=670$mb. For $L = 1370 W.m^{-2}$ you should find 3 possible states. Assume that you are on the upper, toasty branch. For which value of $L$ would you fall onto the lower, icy branch?)
End of explanation
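# Added starting point for Question 4 (a hedged sketch that simply repeats the
# bifurcation-diagram cell above with pRad = 670 mb; reading the critical value of L
# off the resulting figure is left to you).
pRad_670 = 670.
Glist_670 = [4. * OLR(T, pRad_670) / (1. - albedo(T)) for T in Tlist]
c670 = Curve()
c670.addCurve(Tlist)
c670.addCurve(Glist_670, 'G', 'Balance Function')
c670.addCurve([1370. for T in Tlist], 'S', 'Incident Solar')
c670.switchXY = True
c670.PlotTitle = 'Bifurcation diagram, pRad = %f mb' % pRad_670
c670.Xlabel = 'Surface Temperature (K)'
c670.Ylabel = 'Solar Constant ($W/m^2$)'
w670 = plot(c670)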
def radPress(flux,T):
def g(p,const):
return OLR(const.T,p) - const.flux
root = newtSolve(g)
const= Dummy()
const.T = T
const.flux = flux
root.setParams(const)
return root(500.)
L = 1370.
TList = [220.+i for i in range(131)]
gList = [radPress((1.-albedo(T))*L/4.,T) for T in TList]
cG = Curve()
cG.addCurve(gList,'pRad')
cG.addCurve(TList,'T')
#Just for variety, here I've shown that instead of
# switching axes, you can just put the data into
# the Curve in a different order
cG.PlotTitle = 'Bifurcation diagram, L = %f W/m**2'%L
cG.Ylabel = 'Surface Temperature (K)'
cG.Xlabel = 'Radiating pressure (mb)'
cG.reverseX = True #Reverse pressure axis so warmer is to the right
plot(cG)
Explanation: Answer 4
write your answer here
Finally, we could ask what kind of climate would be obtained by varying the atmosphere's opacity to outgoing longwave radiation (i.e. its concentration of greenhouse gases). We seek a stability diagram similar to the one above, but cast in terms of radiative pressure $p_{rad}$ instead of $L$.
Question 5
What is the effect of lowering $p_{rad}$ on the surface temperature $T_s$?
You may answer using purely mathematical, physical, or heuristic arguments (one way to do this is to vary $p_{rad}$ and see how it affects the OLR).
Answer 5
write your answer here
Greenhouse Bifurcation
To draw the bifurcation diagram in terms of $p_{rad}$ we need a bit more machinery. Let's define a function that solves for radiative pressure given the solar flux and the temperature. (if you really want to know, it uses Newton-Raphson's method, but you needn't worry about that)
End of explanation |
13,969 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Data augmentation
Step2: Download a dataset
This tutorial uses the tf_flowers dataset. For convenience, download the dataset using TensorFlow Datasets. For details on other ways to import data, see the image loading tutorial.
Step3: The flowers dataset has five classes.
Step4: Let's retrieve an image from the dataset and use it to demonstrate data augmentation.
Step5: Use Keras preprocessing layers
Resizing and rescaling
You can use the Keras preprocessing layers to resize your images to a consistent shape (tf.keras.layers.Resizing) and to rescale pixel values (tf.keras.layers.Rescaling).
Step6: Note
Step7: Verify that the pixels are in the [0, 1] range.
Step8: Data augmentation
You can use the Keras preprocessing layers for data augmentation as well, such as tf.keras.layers.RandomFlip and tf.keras.layers.RandomRotation.
Let's create a few preprocessing layers and apply them repeatedly to the same image.
Step9: There are a variety of preprocessing layers you can use for data augmentation, including tf.keras.layers.RandomContrast, tf.keras.layers.RandomCrop, and tf.keras.layers.RandomZoom.
Two options for using the Keras preprocessing layers
There are two ways you can use these preprocessing layers, with important trade-offs.
Option 1
Step10: There are two important points to be aware of in this case.
Data augmentation will run on-device, synchronously with the rest of your layers, and benefit from GPU acceleration.
When you export your model using model.save, the preprocessing layers will be saved along with the rest of your model. If you later deploy this model, it will automatically standardize images (according to the configuration of your layers), which can save you the effort of having to reimplement that logic server-side.
Note
Step11: With this approach, you use Dataset.map to create a dataset that yields batches of augmented images. In this case:
Data augmentation will happen asynchronously on the CPU and is non-blocking. You can overlap the training of your model on the GPU with data preprocessing, using Dataset.prefetch, as shown below.
In this case the preprocessing layers will not be exported with the model when you call Model.save. You will need to attach them to your model before saving it or reimplement them server-side. After training, you can attach the preprocessing layers before export.
You can find an example of the first option in the image classification tutorial. Let's demonstrate the second option here.
Apply the preprocessing layers to the datasets
Configure the training, validation, and test datasets with the preprocessing layers you created earlier. You will also configure the datasets for performance, using parallel reads and buffered prefetching, so batches can be yielded from disk without I/O becoming blocking. (Learn more about dataset performance in the Better performance with the tf.data API guide.)
Note
Step12: Train a model
For completeness, you will now train a model using the datasets you have just prepared.
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D), each with a max pooling layer (tf.keras.layers.MaxPooling2D). There is a fully-connected layer (tf.keras.layers.Dense) with 128 units that is activated by a ReLU activation function ('relu'). This model has not been tuned for accuracy (the goal of this tutorial is to show a standard approach).
Step13: Choose the tf.keras.optimizers.Adam optimizer and the tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
Step14: Train for a few epochs.
Step15: Custom data augmentation
You can also create custom data augmentation layers.
This section of the tutorial shows two ways of doing so:
First, you will create a tf.keras.layers.Lambda layer. This is a good way to write concise code.
Next, you will write a new layer via subclassing, which gives you more control.
Both layers will randomly invert the colors in an image, according to some probability.
Step16: Next, implement a custom layer by subclassing.
Step17: Both layers can be used as described in options 1 and 2 above.
Using tf.image
The above Keras preprocessing utilities are convenient. But, for finer control, you can write your own data augmentation pipelines or layers using tf.data and tf.image. (You may also want to check out <a>TensorFlow Addons Image
Step18: Retrieve an image to work with.
Step19: Let's use the following function to visualize and compare the original and augmented images side-by-side.
Step20: Data augmentation
Flip an image
Flip an image either vertically or horizontally with tf.image.flip_left_right.
Step21: Grayscale an image
You can grayscale an image with tf.image.rgb_to_grayscale.
Step22: Saturate an image
Saturate an image with tf.image.adjust_saturation by providing a saturation factor.
Step23: Change image brightness
Change the brightness of an image with tf.image.adjust_brightness by providing a brightness factor.
Step24: Center crop an image
Crop the image from the center up to the portion of the image you desire using tf.image.central_crop.
Step25: Rotate an image
Rotate an image by 90 degrees with tf.image.rot90.
Step26: Random transformations
Warning
Step27: Randomly change image contrast
Randomly change the contrast of image using tf.image.stateless_random_contrast by providing a contrast range and seed. The contrast range is chosen randomly in the interval [lower, upper] and is associated with the given seed.
Step28: Randomly crop an image
Randomly crop image using tf.image.stateless_random_crop by providing the target size and seed. The portion that gets cropped out of image is at a randomly chosen offset and is associated with the given seed.
Step29: Apply augmentation to a dataset
As demonstrated earlier, apply data augmentation to a dataset using Dataset.map.
Step30: Next, define a utility function for resizing and rescaling the images. This function will be used to unify the size and scale of the images in the dataset.
Step31: Also define an augment function that can apply random transformations to the images. This function will be used on the dataset in the next step.
Step32: Option 1
Step33: Map the augment function to the training dataset.
Step34: Option 2
Step35: Map the wrapper function f to the training dataset, and the resize_and_rescale function to the validation and test sets.
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow.keras import layers
Explanation: Data augmentation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/images/data_augmentation"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/images/data_augmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
Overview
This tutorial demonstrates data augmentation: a technique to increase the diversity of your training set by applying random (but realistic) transformations, such as image rotation.
You will learn how to apply data augmentation in two ways:
Use Keras preprocessing layers, such as tf.keras.layers.Resizing, tf.keras.layers.Rescaling, tf.keras.layers.RandomFlip, and tf.keras.layers.RandomRotation.
Use tf.image methods, such as tf.image.flip_left_right, tf.image.rgb_to_grayscale, tf.image.adjust_brightness, tf.image.central_crop, and tf.image.stateless_random*.
Setup
End of explanation
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Download a dataset
This tutorial uses the tf_flowers dataset. For convenience, download the dataset using TensorFlow Datasets. For other ways of importing data, check out the image loading tutorial.
End of explanation
num_classes = metadata.features['label'].num_classes
print(num_classes)
Explanation: The flowers dataset has five classes.
End of explanation
get_label_name = metadata.features['label'].int2str
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
Explanation: Let's retrieve an image from the dataset and use it to demonstrate data augmentation.
End of explanation
IMG_SIZE = 180
resize_and_rescale = tf.keras.Sequential([
layers.Resizing(IMG_SIZE, IMG_SIZE),
layers.Rescaling(1./255)
])
Explanation: Use Keras preprocessing layers
Resizing and rescaling
You can use the Keras preprocessing layers to resize your images to a consistent shape (with tf.keras.layers.Resizing) and to rescale pixel values (with tf.keras.layers.Rescaling).
End of explanation
result = resize_and_rescale(image)
_ = plt.imshow(result)
Explanation: Note: The rescaling layer above standardizes pixel values to the [0, 1] range. If instead you want [-1, 1], write tf.keras.layers.Rescaling(1./127.5, offset=-1).
You can visualize the result of applying these layers to an image as follows.
End of explanation
print("Min and max pixel values:", result.numpy().min(), result.numpy().max())
Explanation: Verify that the pixels are in the [0, 1] range.
End of explanation
data_augmentation = tf.keras.Sequential([
layers.RandomFlip("horizontal_and_vertical"),
layers.RandomRotation(0.2),
])
# Add the image to a batch.
image = tf.cast(tf.expand_dims(image, 0), tf.float32)
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = data_augmentation(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0])
plt.axis("off")
Explanation: Data augmentation
You can use the Keras preprocessing layers for data augmentation as well, such as tf.keras.layers.RandomFlip and tf.keras.layers.RandomRotation.
Let's create a few preprocessing layers and apply them repeatedly to the same image.
End of explanation
model = tf.keras.Sequential([
# Add the preprocessing layers you created earlier.
resize_and_rescale,
data_augmentation,
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
# Rest of your model.
])
Explanation: There are a variety of preprocessing layers you can use for data augmentation, including tf.keras.layers.RandomContrast, tf.keras.layers.RandomCrop, tf.keras.layers.RandomZoom, and others.
Two options to use the Keras preprocessing layers
There are two ways you can use these preprocessing layers, with important trade-offs.
Option 1: Make the preprocessing layers part of your model
End of explanation
aug_ds = train_ds.map(
lambda x, y: (resize_and_rescale(x, training=True), y))
Explanation: There are two important points to be aware of in this case:
Data augmentation will run on-device, synchronously with the rest of your layers, and benefit from GPU acceleration.
When you export your model using model.save, the preprocessing layers will be saved along with the rest of your model. If you later deploy this model, it will automatically standardize images (according to the configuration of your layers). This saves you the effort of having to reimplement that logic server-side.
Note: Data augmentation is inactive at test time, so input images will only be augmented during calls to Model.fit (not Model.evaluate or Model.predict).
Option 2: Apply the preprocessing layers to your dataset
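Before moving on to option 2, here is a minimal sketch of the export behavior described in option 1 (the save path below is an illustrative assumption, not part of the tutorial):
# model.save('flower_model')                          # hypothetical path; the preprocessing layers are bundled in
# restored = tf.keras.models.load_model('flower_model')
# preds = restored(tf.expand_dims(result, 0), training=False)  # augmentation layers act as identity at inference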
End of explanation
batch_size = 32
AUTOTUNE = tf.data.AUTOTUNE
def prepare(ds, shuffle=False, augment=False):
# Resize and rescale all datasets.
ds = ds.map(lambda x, y: (resize_and_rescale(x), y),
num_parallel_calls=AUTOTUNE)
if shuffle:
ds = ds.shuffle(1000)
# Batch all datasets.
ds = ds.batch(batch_size)
# Use data augmentation only on the training set.
if augment:
ds = ds.map(lambda x, y: (data_augmentation(x, training=True), y),
num_parallel_calls=AUTOTUNE)
# Use buffered prefetching on all datasets.
return ds.prefetch(buffer_size=AUTOTUNE)
train_ds = prepare(train_ds, shuffle=True, augment=True)
val_ds = prepare(val_ds)
test_ds = prepare(test_ds)
Explanation: With this approach, you use Dataset.map to create a dataset that yields batches of augmented images. In this case:
Data augmentation will happen asynchronously on the CPU and is non-blocking. You can overlap the training of your model on the GPU with data preprocessing using Dataset.prefetch, as shown below.
In this case, the preprocessing layers will not be exported with the model when you call Model.save. You will need to attach them to your model before saving it, or reimplement them server-side. After training, you can attach the preprocessing layers before export.
You can find an example of the first option in the image classification tutorial. Let's demonstrate the second option here.
Apply the preprocessing layers to the datasets
Configure the training, validation, and test datasets with the preprocessing layers you created earlier. You will also configure the datasets for performance, using parallel reads and buffered prefetching, so batches can be yielded from disk without I/O becoming blocking. (Learn more about dataset performance in the Better performance with the tf.data API guide.)
Note: Data augmentation should only be applied to the training set.
End of explanation
model = tf.keras.Sequential([
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
Explanation: Train a model
For completeness, you will now train a model using the datasets you have just prepared.
The Sequential model consists of three convolution blocks (tf.keras.layers.Conv2D) with a max pooling layer (tf.keras.layers.MaxPooling2D) in each of them. There's a fully-connected layer (tf.keras.layers.Dense) with 128 units on top that is activated by a ReLU activation function ('relu'). This model has not been tuned for accuracy (the goal of this tutorial is to show a standard approach).
End of explanation
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: Choose the tf.keras.optimizers.Adam optimizer and the tf.keras.losses.SparseCategoricalCrossentropy loss function. To view training and validation accuracy for each training epoch, pass the metrics argument to Model.compile.
End of explanation
epochs=5
history = model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs
)
loss, acc = model.evaluate(test_ds)
print("Accuracy", acc)
Explanation: Train for a few epochs.
End of explanation
def random_invert_img(x, p=0.5):
if tf.random.uniform([]) < p:
x = (255-x)
else:
x
return x
def random_invert(factor=0.5):
return layers.Lambda(lambda x: random_invert_img(x, factor))
random_invert = random_invert()
plt.figure(figsize=(10, 10))
for i in range(9):
augmented_image = random_invert(image)
ax = plt.subplot(3, 3, i + 1)
plt.imshow(augmented_image[0].numpy().astype("uint8"))
plt.axis("off")
Explanation: Custom data augmentation
You can also create custom data augmentation layers.
This section of the tutorial shows two ways of doing so:
First, you will create a tf.keras.layers.Lambda layer. This is a good way to write concise code.
Next, you will write a new layer via subclassing, which gives you more control.
Both layers will randomly invert the colors in an image, according to some probability.
End of explanation
class RandomInvert(layers.Layer):
def __init__(self, factor=0.5, **kwargs):
super().__init__(**kwargs)
self.factor = factor
def call(self, x):
return random_invert_img(x)
_ = plt.imshow(RandomInvert()(image)[0])
Explanation: Next, implement a custom layer by subclassing.
End of explanation
(train_ds, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Both layers can be used as described in options 1 and 2 above.
Using tf.image
The Keras preprocessing utilities above are convenient. But for finer control, you can write your own data augmentation pipelines or layers using tf.data and tf.image. (You may also want to check out <a>TensorFlow Addons Image: Operations</a> and TensorFlow I/O: Color Space Conversions.)
Since the flowers dataset was previously configured with data augmentation, let's re-import it to start fresh.
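For instance, if the optional TensorFlow Addons package is installed (an assumption; it is not imported in this notebook), rotating by an arbitrary angle could look like the sketch below:
# import tensorflow_addons as tfa
# rotated = tfa.image.rotate(image, angles=0.5)  # angle in radians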
End of explanation
image, label = next(iter(train_ds))
_ = plt.imshow(image)
_ = plt.title(get_label_name(label))
Explanation: Retrieve an image to work with.
End of explanation
def visualize(original, augmented):
fig = plt.figure()
plt.subplot(1,2,1)
plt.title('Original image')
plt.imshow(original)
plt.subplot(1,2,2)
plt.title('Augmented image')
plt.imshow(augmented)
Explanation: Let's use the following function to visualize the original and augmented images side-by-side and compare them.
End of explanation
flipped = tf.image.flip_left_right(image)
visualize(image, flipped)
Explanation: Data augmentation
Flip an image
Flip an image either vertically or horizontally with tf.image.flip_left_right.
End of explanation
grayscaled = tf.image.rgb_to_grayscale(image)
visualize(image, tf.squeeze(grayscaled))
_ = plt.colorbar()
Explanation: Grayscale an image
You can grayscale an image with tf.image.rgb_to_grayscale.
End of explanation
saturated = tf.image.adjust_saturation(image, 3)
visualize(image, saturated)
Explanation: Saturate an image
Saturate an image with tf.image.adjust_saturation by providing a saturation factor.
End of explanation
bright = tf.image.adjust_brightness(image, 0.4)
visualize(image, bright)
Explanation: Change image brightness
Change the brightness of an image with tf.image.adjust_brightness by providing a brightness factor.
End of explanation
cropped = tf.image.central_crop(image, central_fraction=0.5)
visualize(image, cropped)
Explanation: Center crop an image
Crop the image from the center up to the portion you desire using tf.image.central_crop.
End of explanation
rotated = tf.image.rot90(image)
visualize(image, rotated)
Explanation: Rotate an image
Rotate an image by 90 degrees with tf.image.rot90.
End of explanation
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_brightness = tf.image.stateless_random_brightness(
image, max_delta=0.95, seed=seed)
visualize(image, stateless_random_brightness)
Explanation: Random transformations
Warning: There are two sets of random image operations: tf.image.random* and tf.image.stateless_random*. Using the tf.image.random* operations is strongly discouraged, as they use the old RNGs from TF 1.x. Instead, please use the random image operations introduced in this tutorial. For more information, refer to Random number generation.
Applying random transformations to the images can further help generalize and expand the dataset. The current tf.image API provides eight such random image operations (ops):
tf.image.stateless_random_brightness
tf.image.stateless_random_contrast
tf.image.stateless_random_crop
tf.image.stateless_random_flip_left_right
tf.image.stateless_random_flip_up_down
tf.image.stateless_random_hue
tf.image.stateless_random_jpeg_quality
tf.image.stateless_random_saturation
These random image ops are purely functional: the output only depends on the input. This makes them simple to use in high-performance, deterministic input pipelines. They require a seed value to be input each step. Given the same seed, they return the same results independent of how many times they are called.
Note: seed is a Tensor of shape (2,) whose values are any integers.
In the following sections, you will:
Go over examples of using random image operations to transform an image.
Demonstrate how to apply random transformations to a training dataset.
Randomly change image brightness
Randomly change the brightness of image using tf.image.stateless_random_brightness by providing a brightness factor and seed. The brightness factor is chosen randomly in the range [-max_delta, max_delta) and is associated with the given seed.
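To make the determinism claim concrete, the same stateless op called twice with the same seed returns identical results (the seed value here is arbitrary):
demo_seed = (1, 2)
out_a = tf.image.stateless_random_brightness(image, max_delta=0.5, seed=demo_seed)
out_b = tf.image.stateless_random_brightness(image, max_delta=0.5, seed=demo_seed)
print(bool(tf.reduce_all(out_a == out_b)))  # True: the output depends only on (input, seed)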
End of explanation
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_contrast = tf.image.stateless_random_contrast(
image, lower=0.1, upper=0.9, seed=seed)
visualize(image, stateless_random_contrast)
Explanation: Randomly change image contrast
Randomly change the contrast of image using tf.image.stateless_random_contrast by providing a contrast range and seed. The contrast range is chosen randomly in the interval [lower, upper] and is associated with the given seed.
End of explanation
for i in range(3):
seed = (i, 0) # tuple of size (2,)
stateless_random_crop = tf.image.stateless_random_crop(
image, size=[210, 300, 3], seed=seed)
visualize(image, stateless_random_crop)
Explanation: Randomly crop an image
Randomly crop image using tf.image.stateless_random_crop by providing the target size and seed. The portion that gets cropped out of image is at a randomly chosen offset and is associated with the given seed.
End of explanation
(train_datasets, val_ds, test_ds), metadata = tfds.load(
'tf_flowers',
split=['train[:80%]', 'train[80%:90%]', 'train[90%:]'],
with_info=True,
as_supervised=True,
)
Explanation: Apply augmentation to a dataset
As before, apply data augmentation to a dataset using Dataset.map.
End of explanation
def resize_and_rescale(image, label):
image = tf.cast(image, tf.float32)
image = tf.image.resize(image, [IMG_SIZE, IMG_SIZE])
image = (image / 255.0)
return image, label
Explanation: Next, define a utility function for resizing and rescaling the images. This function will be used to unify the size and scale of images in the dataset.
End of explanation
def augment(image_label, seed):
image, label = image_label
image, label = resize_and_rescale(image, label)
image = tf.image.resize_with_crop_or_pad(image, IMG_SIZE + 6, IMG_SIZE + 6)
# Make a new seed.
new_seed = tf.random.experimental.stateless_split(seed, num=1)[0, :]
# Random crop back to the original size.
image = tf.image.stateless_random_crop(
image, size=[IMG_SIZE, IMG_SIZE, 3], seed=seed)
# Random brightness.
image = tf.image.stateless_random_brightness(
image, max_delta=0.5, seed=new_seed)
image = tf.clip_by_value(image, 0, 1)
return image, label
Explanation: Also define an augment function that can apply the random transformations to the images. This function will be used on the dataset in the next step.
End of explanation
# Create a `Counter` object and `Dataset.zip` it together with the training set.
counter = tf.data.experimental.Counter()
train_ds = tf.data.Dataset.zip((train_datasets, (counter, counter)))
Explanation: Option 1: Use tf.data.experimental.Counter
Create a tf.data.experimental.Counter object (let's call it counter) and Dataset.zip the dataset with (counter, counter). This ensures that each image in the dataset gets associated with a unique value (of shape (2,)) based on counter, which can later be passed into the augment function as the seed value for the random transformations.
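A quick way to inspect the zipped element structure (the variable names here are just for illustration):
(example_image, example_label), example_seed = next(iter(train_ds))
print(example_seed)  # a pair of counter-derived values that will be used as the stateless-op seed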
End of explanation
train_ds = (
train_ds
.shuffle(1000)
.map(augment, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
Explanation: Map the augment function to the training dataset.
End of explanation
# Create a generator.
rng = tf.random.Generator.from_seed(123, alg='philox')
# Create a wrapper function for updating seeds.
def f(x, y):
seed = rng.make_seeds(2)[0]
image, label = augment((x, y), seed)
return image, label
Explanation: Option 2: Use tf.random.Generator
Create a tf.random.Generator object with an initial seed value. Calling the make_seeds function on the same generator object always returns a new, unique seed value.
Define a wrapper function that: 1) calls the make_seeds function; and 2) passes the newly generated seed value into the augment function for random transformations.
Note: tf.random.Generator objects store RNG state in a tf.Variable, which means it can be saved as a checkpoint or in a SavedModel. For more details, please refer to Random number generation.
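A minimal sketch of persisting that RNG state (the checkpoint prefix is an illustrative assumption):
# ckpt = tf.train.Checkpoint(generator=rng)
# save_path = ckpt.save('/tmp/rng_checkpoint')
# ckpt.restore(save_path)  # restores the generator state so seed generation resumes deterministically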
End of explanation
train_ds = (
train_datasets
.shuffle(1000)
.map(f, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(resize_and_rescale, num_parallel_calls=AUTOTUNE)
.batch(batch_size)
.prefetch(AUTOTUNE)
)
Explanation: Map the wrapper function f to the training dataset, and the resize_and_rescale function to the validation and test sets.
End of explanation |
13,970 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The first step in any data analysis is acquiring and munging the data
Our starting data set can be found here
Step1: Problems
Step2: Problems
Step3: Problems
Step4: If we want to look at covariates, we need a new approach.
We'll use Cox proprtional hazards, a very popular regression model.
To fit in python we use the module lifelines
Step5: Once we've fit the data, we need to do something useful with it. Try to do the following things
Step6: Model selection
Difficult to do with classic tools (here)
Problem | Python Code:
running_id = 0
output = [[0]]
with open("E:/output.txt") as file_open:
for row in file_open.read().split("\n"):
cols = row.split(",")
if cols[0] == output[-1][0]:
output[-1].append(cols[1])
output[-1].append(True)
else:
output.append(cols)
output = output[1:]
for row in output:
if len(row) == 6:
row += [datetime(2016, 5, 3, 20, 36, 8, 92165), False]
output = output[1:-1]
def convert_to_days(dt):
day_diff = dt / np.timedelta64(1, 'D')
if day_diff == 0:
return 23.0
else:
return day_diff
df = pd.DataFrame(output, columns=["id", "advert_time", "male","age","search","brand","conversion_time","event"])
df["lifetime"] = pd.to_datetime(df["conversion_time"]) - pd.to_datetime(df["advert_time"])
df["lifetime"] = df["lifetime"].apply(convert_to_days)
df["male"] = df["male"].astype(int)
df["search"] = df["search"].astype(int)
df["brand"] = df["brand"].astype(int)
df["age"] = df["age"].astype(int)
df["event"] = df["event"].astype(int)
df = df.drop('advert_time', 1)
df = df.drop('conversion_time', 1)
df = df.set_index("id")
df = df.dropna(thresh=2)
df.median()
###Parametric Bayes
#Shout out to Cam Davidson-Pilon
## Example fully worked model using toy data
## Adapted from http://blog.yhat.com/posts/estimating-user-lifetimes-with-pymc.html
## Note that we've made some corrections
N = 2500
##Generate some random data
lifetime = pm.rweibull( 2, 5, size = N )
birth = pm.runiform(0, 10, N)
censor = ((birth + lifetime) >= 10)
lifetime_ = lifetime.copy()
lifetime_[censor] = 10 - birth[censor]
alpha = pm.Uniform('alpha', 0, 20)
beta = pm.Uniform('beta', 0, 20)
@pm.observed
def survival(value=lifetime_, alpha = alpha, beta = beta ):
return sum( (1-censor)*(log( alpha/beta) + (alpha-1)*log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(50000, 30000)
pm.Matplot.plot(mcmc)
mcmc.trace("alpha")[:]
Explanation: The first step in any data analysis is acquiring and munging the data
Our starting data set can be found here:
http://jakecoltman.com in the pyData post
It is designed to be roughly similar to the output from DCM's path to conversion
Download the file and transform it into something with the columns:
id,lifetime,age,male,event,search,brand
where lifetime is the total time that we observed someone not convert for and event should be 1 if we see a conversion and 0 if we don't. Note that all values should be converted into ints
It is useful to note that end_date = datetime.datetime(2016, 5, 3, 20, 36, 8, 92165)
End of explanation
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000)
def weibull_median(alpha, beta):
return beta * ((log(2)) ** ( 1 / alpha))
plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
Explanation: Problems:
1 - Try to fit your data from section 1
2 - Use the results to plot the distribution of the median
Note that the median of a Weibull distribution is:
$$\beta(\log 2)^{1/\alpha}$$
End of explanation
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
alpha = pm.Uniform("alpha", 0,50)
beta = pm.Uniform("beta", 0,50)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 3000, thin = 20)
pm.Matplot.plot(mcmc)
#Solution to Q5
## Adjusting the priors impacts the overall result
## If we give a looser, less informative prior then we end up with a broader, shorter distribution
## If we give much more informative priors, then we get a tighter, taller distribution
censor = np.array(df["event"].apply(lambda x: 0 if x else 1).tolist())
## Note the narrowing of the prior
alpha = pm.Normal("alpha", 1.7, 10000)
beta = pm.Normal("beta", 18.5, 10000)
####Uncomment this to see the result of looser priors
## Note this ends up pretty much the same as we're already very loose
#alpha = pm.Uniform("alpha", 0, 30)
#beta = pm.Uniform("beta", 0, 30)
@pm.observed
def survival(value=df["lifetime"], alpha = alpha, beta = beta ):
return sum( (1-censor)*(np.log( alpha/beta) + (alpha-1)*np.log(value/beta)) - (value/beta)**(alpha))
mcmc = pm.MCMC([alpha, beta, survival ] )
mcmc.sample(10000, burn = 5000, thin = 20)
pm.Matplot.plot(mcmc)
#plt.hist([weibull_median(x[0], x[1]) for x in zip(mcmc.trace("alpha"), mcmc.trace("beta"))])
Explanation: Problems:
4 - Try adjusting the number of samples for burning and thinnning
5 - Try adjusting the prior and see how it affects the estimate
End of explanation
#### Hypothesis testing
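# One possible sketch for this exercise (weibull_median and the mcmc traces are defined above; 14 is an arbitrary threshold):
# medians = np.array([weibull_median(a, b) for a, b in zip(mcmc.trace("alpha")[:], mcmc.trace("beta")[:])])
# print("P(median > 14) =", (medians > 14).mean())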
Explanation: Problems:
7 - Try testing whether the median is greater than a different value
End of explanation
### Fit a Cox proportional hazards model
Explanation: If we want to look at covariates, we need a new approach.
We'll use Cox proportional hazards, a very popular regression model.
To fit in python we use the module lifelines:
http://lifelines.readthedocs.io/en/latest/
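A minimal sketch of that fit using the standard lifelines API (column names follow the df built earlier in this notebook):
from lifelines import CoxPHFitter
cph = CoxPHFitter()
cph.fit(df, duration_col='lifetime', event_col='event')  # lifetime = observed duration, event = conversion indicator
cph.print_summary()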
End of explanation
#### Plot baseline hazard function
#### Predict
#### Plot survival functions for different covariates
#### Plot some odds
Explanation: Once we've fit the data, we need to do something useful with it. Try to do the following things:
1 - Plot the baseline survival function
2 - Predict the functions for a particular set of features
3 - Plot the survival function for two different set of features
4 - For your results in part 3 caculate how much more likely a death event is for one than the other for a given period of time
End of explanation
#### BMA Coefficient values
#### Different priors
Explanation: Model selection
Difficult to do with classic tools (here)
Problem:
1 - Calculate the BMA coefficient values
2 - Try running with different priors
End of explanation |
13,971 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Titanic kaggle competition with SVM
Step1: Let's load and examine the titanic data with pandas first.
Step2: So we have 891 training examples with 10 information columns given. Of course it is not straight forward to use all of them at this point.
In this example, we will just explore two simple SVM models that only use two features.
Our choice of models are
Step3: Here is how we examine how the selection works.
Step4: Gender-Age model
We will introduce a few concepts while including the Age feature.
missing values
identify missing values
Step5: How many missing values are there?
Step6: SVM does not allow features with missing values, what do we do?
One idea would be to fill in them with a number we think is reasonable.
Let's try to use the average age first.
Step7: can you think of better ways to do this?
Step8: feature rescaling
Step9: Let's examine the selection function of the model.
Step10: Create a submission file with the Gender-Age model
First we want to read in the test data set and add in the gender features as what we did with the training data set.
Step11: We notice again that some of the age value is missing in the test data, and want to fill in the same way as what we did with the training data.
Step12: Note here we give the missing values the mean age of the training data.
What's the pros and cons of doing this?
We want to get the features from the test data, and scale our age feature the same way as what we did in the training data.
Step13: We use the model above to predict the survive of our test data.
The model is fitted with the entire training data.
Step14: create a file that can be submit to kaggle
We read in the example submission file provided by kaggle, and then replace the "Survived" column with our own prediction.
We use the to_csv method of panda dataframe, now we can check with kaggle on how well we are doing. | Python Code:
#import all the needed package
import numpy as np
import scipy as sp
import re
import pandas as pd
import sklearn
from sklearn.cross_validation import train_test_split,cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn import metrics
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
from sklearn.svm import SVC
Explanation: Titanic kaggle competition with SVM
End of explanation
data = pd.read_csv('data/train.csv')
print data.head()
# our target is the survived column
y= data['Survived']
print data.shape
Explanation: Let's load and examine the titanic data with pandas first.
End of explanation
#add in Sex_male features
data['Sex_male']=data.Sex.map({'female':0,'male':1})
data.head()
#get the features we intended to use
feature_cols=['Pclass','Sex_male']
X=data[feature_cols]
X.head()
#use the default SVM rbf model
model=SVC()
scores=cross_val_score(model,X,y,cv=10,scoring='accuracy')
print scores, np.mean(scores),np.std(scores)
Explanation: So we have 891 training examples with 10 information columns given. Of course it is not straight forward to use all of them at this point.
In this example, we will just explore two simple SVM models that only use two features.
Our choice of models are:
- gender, class
- gender, age
Gender-Class model
Recall how we generated features from categories last session. We use the same method to generate an additional feature called Sex_male.
End of explanation
xmin,xmax=X['Pclass'].min()-0.5,X['Pclass'].max()+0.5
ymin,ymax=X['Sex_male'].min()-0.5,X['Sex_male'].max()+0.5
print xmin,xmax,ymin,ymax
xx, yy = np.meshgrid(np.linspace(xmin, xmax, 200), np.linspace(ymin, ymax, 200))
model.fit(X,y)
Z = model.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
fig=plt.figure(figsize=(20,10))
ax=fig.add_subplot(111)
ax.pcolormesh(xx, yy, -Z, cmap=plt.cm.RdBu,alpha=0.5)
ax.scatter(X['Pclass']+np.random.randn(len(X['Pclass']))*0.1, X['Sex_male']+np.random.randn(len(X['Pclass']))*0.05, c=y,s=40, cmap=plt.cm.RdBu_r)
ax.set_xlabel("Pclass")
ax.set_ylabel("Sex_male")
ax.set_xlim([0.5,3.5])
ax.set_ylim([-0.5,1.5])
plt.show()
Explanation: Here is how we examine how the selection works.
End of explanation
#use the isnull function to check if there is any missing value in the Age column.
pd.isnull(data['Age']).any()
Explanation: Gender-Age model
We will introduce a few concepts while including the Age feature.
missing values
identify missing values
End of explanation
print len(data['Age'][pd.isnull(data['Age'])])
Explanation: How many missing values are there?
End of explanation
data['Age'][pd.isnull(data['Age'])]=data['Age'].mean()
Explanation: SVM does not allow features with missing values, what do we do?
One idea would be to fill in them with a number we think is reasonable.
Let's try to use the average age first.
End of explanation
#generate our new feature
feature_cols=['Age','Sex_male']
X=data[feature_cols]
X.head()
#use the default SVM rbf model
scores=cross_val_score(model,X,y,cv=10,scoring='accuracy')
print scores, np.mean(scores),np.std(scores)
Explanation: can you think of better ways to do this?
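One possible refinement to try (a sketch; imputing by passenger class is an illustrative choice, not the notebook's approach):
# data['Age'] = data.groupby('Pclass')['Age'].transform(lambda s: s.fillna(s.median()))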
End of explanation
X['Age']=(X['Age']-X['Age'].median())/X['Age'].std()
#X = StandardScaler().fit_transform(X)
scores=cross_val_score(model,X,y,cv=10,scoring='accuracy')
print scores, np.mean(scores),np.std(scores)
Explanation: feature rescaling
End of explanation
xmin,xmax=X['Age'].min()-0.5,X['Age'].max()+0.5
ymin,ymax=X['Sex_male'].min()-0.5,X['Sex_male'].max()+0.5
print xmin,xmax,ymin,ymax
xx, yy = np.meshgrid(np.linspace(xmin, xmax, 200), np.linspace(ymin, ymax, 200))
model.fit(X,y)
Z = model.decision_function(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
fig=plt.figure(figsize=(20,10))
ax=fig.add_subplot(111)
ax.pcolormesh(xx, yy, -Z, cmap=plt.cm.RdBu,alpha=0.5)
ax.scatter(X['Age'], X['Sex_male']+np.random.randn(len(X['Age']))*0.05, c=y,s=40, cmap=plt.cm.RdBu_r)
ax.set_xlabel("Normalized Age")
ax.set_ylabel("Sex_male")
ax.set_ylim([-0.5,1.5])
ax.set_xlim([-3,4.5])
plt.show()
Explanation: Let's examine the selection function of the model.
End of explanation
test_data = pd.read_csv('data/test.csv')
#print test_data.head()
#add in Sex_male features
test_data['Sex_male']=test_data.Sex.map({'female':0,'male':1})
Explanation: Create a submission file with the Gender-Age model
First we want to read in the test data set and add in the gender features as what we did with the training data set.
End of explanation
#use the isnull function to check if there is any missing value in the Age column.
pd.isnull(test_data['Age']).any()
print len(test_data['Age'][pd.isnull(test_data['Age'])])
test_data['Age'][pd.isnull(test_data['Age'])]=data['Age'].mean()
Explanation: We notice again that some of the age value is missing in the test data, and want to fill in the same way as what we did with the training data.
End of explanation
#generate our new feature
X_test=test_data[feature_cols]
X_test['Age']=(X_test['Age']-data['Age'].median())/data['Age'].std()
Explanation: Note here we give the missing values the mean age of the training data.
What's the pros and cons of doing this?
We want to get the features from the test data, and scale our age feature the same way as what we did in the training data.
End of explanation
y_pred=model.predict(X_test)
X_test.head()
Explanation: We use the model above to predict survival for our test data.
The model is fitted with the entire training data.
End of explanation
samplesubmit = pd.read_csv("data/titanic_submit_example.csv")
#samplesubmit.head()
samplesubmit["Survived"]=y_pred
#samplesubmit.to_csv
samplesubmit.to_csv("data/titanic_submit_gender_age.csv",index=False)
samplesubmit.head()
Explanation: create a file that can be submit to kaggle
We read in the example submission file provided by kaggle, and then replace the "Survived" column with our own prediction.
We use the to_csv method of panda dataframe, now we can check with kaggle on how well we are doing.
End of explanation |
13,972 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Extracting SQL code from SSIS dtsx packages with Python lxml
Code for the blog post Extracting SQL code from SSIS dtsx packages with Python lxml
From Analyze the Data not the Drivel
Step1: Code reformatted to better display on blog
Step2: Original unformatted code | Python Code:
# imports
import os
from lxml import etree
# set sql output directory
sql_out = r"C:\temp\dtsxsql"
if not os.path.isdir(sql_out):
os.makedirs(sql_out)
# set dtsx package file
ssis_dtsx = r'C:\temp\dtsx\ParseXML.dtsx'
if not os.path.isfile(ssis_dtsx):
print("no package file")
# read and parse ssis package
tree = etree.parse(ssis_dtsx)
root = tree.getroot()
root.tag
# collect unique lxml transformed element tags
ele_tags = set()
for ele in root.xpath(".//*"):
ele_tags.add(ele.tag)
print(ele_tags)
print(len(ele_tags))
Explanation: Extracting SQL code from SSIS dtsx packages with Python lxml
Code for the blog post Extracting SQL code from SSIS dtsx packages with Python lxml
From Analyze the Data not the Drivel
End of explanation
pfx = '{www.microsoft.com/'
exe_tag = pfx + 'SqlServer/Dts}Executable'
obj_tag = pfx + 'SqlServer/Dts}ObjectName'
dat_tag = pfx + 'SqlServer/Dts}ObjectData'
tsk_tag = pfx + 'sqlserver/dts/tasks/sqltask}SqlTaskData'
src_tag = pfx + \
'sqlserver/dts/tasks/sqltask}SqlStatementSource'
print(exe_tag)
print(obj_tag)
print(tsk_tag)
print(src_tag)
# extract sql source statements and write to *.sql files
total_bytes = 0
package_name = root.attrib[obj_tag].replace(" ","")
for cnt, ele in enumerate(root.xpath(".//*")):
if ele.tag == exe_tag:
attr = ele.attrib
for child0 in ele:
if child0.tag == dat_tag:
for child1 in child0:
sql_comment = attr[obj_tag].strip()
if child1.tag == tsk_tag:
dtsx_sql = child1.attrib[src_tag]
dtsx_sql = "-- " + \
sql_comment + "\n" + dtsx_sql
sql_file = sql_out + "\\" \
+ package_name + str(cnt) + ".sql"
total_bytes += len(dtsx_sql)
print((len(dtsx_sql),
sql_comment, sql_file))
with open(sql_file, "w") as file:
file.write(dtsx_sql)
print(('total bytes',total_bytes))
Explanation: Code reformatted to better display on blog
End of explanation
# scan package tree and extract sql source code
total_bytes = 0
package_name = root.attrib['{www.microsoft.com/SqlServer/Dts}ObjectName'].replace(" ","")
for cnt, ele in enumerate(root.xpath(".//*")):
if ele.tag == "{www.microsoft.com/SqlServer/Dts}Executable":
attr = ele.attrib
for child0 in ele:
if child0.tag == "{www.microsoft.com/SqlServer/Dts}ObjectData":
for child1 in child0:
sql_comment = attr["{www.microsoft.com/SqlServer/Dts}ObjectName"].strip()
if child1.tag == "{www.microsoft.com/sqlserver/dts/tasks/sqltask}SqlTaskData":
dtsx_sql = child1.attrib["{www.microsoft.com/sqlserver/dts/tasks/sqltask}SqlStatementSource"]
dtsx_sql = "-- " + sql_comment + "\n" + dtsx_sql
sql_file = sql_out + "\\" + package_name + str(cnt) + ".sql"
total_bytes += len(dtsx_sql)
print((len(dtsx_sql), sql_comment, sql_file))
with open(sql_file, "w") as file:
file.write(dtsx_sql)
print(('total sql bytes',total_bytes))
Explanation: Original unformatted code
End of explanation |
13,973 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
아나콘다(Anaconda) 소개
수정 사항
파이썬3 이용 아나콘다 팩키지 설치 이미지 업데이트 필요
아나콘다 패키지 소개
파이썬 프로그래밍 언어 개발환경
파이썬 기본 패키지 이외에 데이터분석용 필수 패키지 포함
기본적으로 스파이더 에디터를 활용하여 강의 진행
아나콘다 패키지 다운로드
아나콘다 패키지를 다운로드 하려면 아래 사이트를 방문한다
https
Step1: 변수는 선언을 먼저 해야 사용할 수 있다.
Step2: 간단한 코드를 작성하여 실행할 수 있다.
Step3: 스파이더 에디터 창에서 파이썬 코드 작성 요령
탭완성 기능과 다양한 단축키 기능을 활용하여 매우 효율적인 코딩을 할 수 있다.
탭완성 기능
탭완성 기능은 편집기 및 터미널 모두에서 사용할 수 있다. | Python Code:
a = 2
b = 3
a + b
Explanation: Introduction to Anaconda
Revision notes
The Anaconda installation screenshots need to be updated for Python 3
About the Anaconda package
A development environment for the Python programming language
Includes the essential data-analysis packages on top of the standard Python packages
The course is taught primarily with the Spyder editor
Downloading Anaconda
To download Anaconda, visit the site below
https://www.anaconda.com/download/
Then download it by following the screenshots below.
Note: this course uses the latest Python 3 release.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/anaconda01.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda02.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="/images/anaconda03.PNG" style="height:60">
</td>
</tr>
</table>
</p>
Installing Anaconda
Install it while paying attention to the parts highlighted in the screenshots below.
Note: check the PATH option only if you know what it means; otherwise leaving it unchecked is recommended.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/anaconda04.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda05.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda06.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda07.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda08.PNG" style="height:60">
</td>
</tr>
</table>
</p>
Launching the Spyder Python editor
Press the Windows key and select Spyder
Keep the default firewall settings
Uncheck the update-check option
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/anaconda09.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda10.PNG" style="height=60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda11.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda12.PNG" style="height:60">
</td>
</tr>
</table>
</p>
Using the Spyder Python editor
Note: leave the upgrade-check option unchecked, and do not upgrade on your own.
Spyder provides an editor pane and a console pane at the same time
Use the editor pane when writing longer code
Use the console pane for quickly testing short snippets
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/anaconda13.PNG" style="height:60">
</td>
</tr>
</table>
</p>
The first time you press the Run button, a dialog about the Python interpreter settings appears.
There is no need to change the settings; just press the Run button.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/anaconda14.PNG" style="height:60">
</td>
</tr>
</table>
</p>
Spyder editor usage example
The editor pane and the console pane share the same Python interpreter.
After you run code from the editor pane, the variables and functions it defines can also be used in the console pane.
Likewise, variables and functions defined in the console can be used in the editor.
Note: this style is not recommended, because code defined in the console is not saved when you save the editor file.
<p>
<table cellspacing="20">
<tr>
<td>
<img src="images/anaconda15.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda16.PNG" style="height:60">
</td>
</tr>
<tr>
<td>
<img src="images/anaconda17.PNG" style="height:60">
</td>
</tr>
</table>
</p>
Running Python code in the Spyder console
You can execute commands directly and check the results, as shown below.
End of explanation
a_number * 2
a_number = 5
type(a_number)
print(a_number)
a_number * 2
Explanation: A variable must be assigned before it can be used.
End of explanation
if a_number > 2:
print('Greater than 2!')
else:
print('Not greater than 2!')
Explanation: You can write and run short pieces of code.
End of explanation
# Type just `a_` and press the Tab key; the rest of the name is completed automatically.
a_number
Explanation: Tips for writing Python code in the Spyder editor window
Tab completion and the various keyboard shortcuts make coding much more efficient.
Tab completion
Tab completion works in both the editor and the console.
End of explanation |
13,974 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Remote DCOM IErtUtil DLL Hijack
Metadata
| | |
|
Step1: Download & Process Mordor Dataset
Step2: Analytic I
Look for non-system accounts SMB accessing a C
Step3: Analytic II
Look for C
Step4: Analytic III
Look for C
Step5: Analytic IV
Look for C | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Remote DCOM IErtUtil DLL Hijack
Metadata
| | |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2020/10/09 |
| modification date | 2020/10/09 |
| playbook related | ['WIN-201012004336'] |
Hypothesis
Threat actors might be copying files remotely to abuse a DLL hijack opportunity found on the DCOM InternetExplorer.Application Class.
Technical Context
Offensive Tradecraft
A threat actor could use a known DLL hijack vulnerability on the DCOM InternetExplorer.Application Class while instantiating the object remotely.
When the object instantiates, it looks for iertutil.dll in the c:\Program Files\Internet Explorer\ directory. That DLL does not exist in that folder. Therefore, a threat actor could easily copy its own DLL into that folder and execute it by instantiating an object via the DCOM InternetExplorer.Application Class remotely.
When the malicious DLL is loaded, there are various approaches to hijacking execution, but most likely a threat actor would want the DLL to act as a proxy to the real DLL to minimize the chances of interrupting normal operations.
One way to do this is by cloning the export table from one DLL to another one. One known tool that can help with it is Koppeling.
Mordor Test Data
| | |
|:----------|:----------|
| metadata | https://mordordatasets.com/notebooks/small/windows/08_lateral_movement/SDWIN-201009183000.html |
| link | https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/covenant_dcom_iertutil_dll_hijack.zip |
Analytics
Initialize Analytics Engine
End of explanation
mordor_file = "https://raw.githubusercontent.com/OTRF/mordor/master/datasets/small/windows/lateral_movement/host/covenant_dcom_iertutil_dll_hijack.zip"
registerMordorSQLTable(spark, mordor_file, "mordorTable")
Explanation: Download & Process Mordor Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM mordorTable
WHERE LOWER(Channel) = "security"
AND EventID = 5145
AND RelativeTargetName LIKE '%Internet Explorer\\\iertutil.dll'
AND NOT SubjectUserName LIKE '%$'
AND AccessMask = '0x2'
'''
)
df.show(10,False)
Explanation: Analytic I
Look for non-system accounts SMB accessing a C:\Program Files\Internet Explorer\iertutil.dll with write (0x2) access mask via an administrative share (i.e C$).
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM mordorTable b
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM mordorTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND Image = 'System'
AND EventID = 11
AND TargetFilename LIKE '%Internet Explorer\\\iertutil.dll'
) a
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = a.TargetFilename
WHERE LOWER(b.Channel) = 'security'
AND b.EventID = 5145
AND b.AccessMask = '0x2'
'''
)
df.show(10,False)
Explanation: Analytic II
Look for C:\Program Files\Internet Explorer\iertutil.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$) and created by the System process on the target system.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM mordorTable b
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM mordorTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND Image = 'System'
AND EventID = 11
AND TargetFilename LIKE '%Internet Explorer\\\iertutil.dll'
) a
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = a.TargetFilename
WHERE LOWER(b.Channel) = 'security'
AND b.EventID = 5145
AND b.AccessMask = '0x2'
'''
)
df.show(10,False)
Explanation: Analytic III
Look for C:\Program Files\Internet Explorer\iertutil.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$) and created by the System process on the target system.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, ShareName, SubjectUserName, SubjectLogonId, IpAddress, IpPort, RelativeTargetName
FROM mordorTable d
INNER JOIN (
SELECT LOWER(REVERSE(SPLIT(TargetFilename, '\'))[0]) as TargetFilename
FROM mordorTable b
INNER JOIN (
SELECT ImageLoaded
FROM mordorTable
WHERE Channel = 'Microsoft-Windows-Sysmon/Operational'
AND EventID = 7
AND LOWER(Image) LIKE '%iexplore.exe'
AND ImageLoaded LIKE '%Internet Explorer\\\iertutil.dll'
) a
ON b.TargetFilename = a.ImageLoaded
WHERE b.Channel = 'Microsoft-Windows-Sysmon/Operational'
AND b.Image = 'System'
AND b.EventID = 11
) c
ON LOWER(REVERSE(SPLIT(RelativeTargetName, '\'))[0]) = c.TargetFilename
WHERE LOWER(d.Channel) = 'security'
AND d.EventID = 5145
AND d.AccessMask = '0x2'
'''
)
df.show(10,False)
Explanation: Analytic IV
Look for C:\Program Files\Internet Explorer\iertutil.dll being accessed over the network with write (0x2) access mask via an administrative share (i.e C$), created by the System process and loaded by the WMI provider host (wmiprvse.exe). All happening on the target system.
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| File | Microsoft-Windows-Security-Auditing | User accessed File | 5145 |
| File | Microsoft-Windows-Sysmon/Operational | Process created File | 11 |
| File | Microsoft-Windows-Sysmon/Operational | Process loaded Dll | 7 |
End of explanation |
13,975 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Evaluating Classifiers
Goals
Step1: The naive bayes algorithm gets 79.5% accuracy.
Does this seem like a good way to check the accuracy? It shouldn't! We tested our accuracy on the same data we used to fit our model. This is what is known as testing on training data and it's a cardinal sin in machine learning.
Lets try splitting our data. We'll train a model on the first 6000 tweets and then test it on the remaining 3092 tweets.
Step2: Overfitting
Our accuracy measurement went down a lot - from 79% to 66%.
Two important questions to ask yourself
Step3: This model goes through every point but wouldn't generalize well to other points.
A model like that does well on the training data but doesn't generalize is doing something known as overfitting.
Overfitting is an incredibly common problem in machine learning.
Test/Train Splits
We held out around 30% of our tweets to test on. But we only have around 9000 tweets. Two questions to ask yourself
Step4: It looks like our accuracy is closer to 65%. Do you think this is good or bad?
Baselines
On some tasks, like predicting if the stock market will go up tomorrow or whether a roulette wheel will come up black on the next spin, a 65% accurate model might be incredibly effective and make us rich. On other tasks like predicting if there will be an earthquake tomorrow, a 65% accurate model could be embarrasingly bad.
A very important thing to consider when we evaluate the performance of our model is how well a very simple model would do. The simplest model would just guess randomly "No Emotion", "Positive", "Negative" or "Can't Tell" and have 25% accuracy. A slightly better model would always guess the most common sentiment.
We can use scikit-learns dummy classifiers as baselines to compare against. | Python Code:
import pandas as pd
import numpy as np
df = pd.read_csv('../scikit/tweets.csv')
text = df['tweet_text']
target = df['is_there_an_emotion_directed_at_a_brand_or_product']
# Remove the blank rows:
fixed_target = target[pd.notnull(text)]
fixed_text = text[pd.notnull(text)]
# Perform feature extraction:
from sklearn.feature_extraction.text import CountVectorizer
count_vect = CountVectorizer()
count_vect.fit(fixed_text)
counts = count_vect.transform(fixed_text)
# Train with this data with a Naive Bayes classifier:
from sklearn.naive_bayes import MultinomialNB
nb = MultinomialNB()
nb.fit(counts, fixed_target)
predictions = nb.predict(counts) # predictions is a list of predictions
correct_predictions = sum(predictions == fixed_target) # correct predictions is a count
print('Percent correct: ', 100.0 * correct_predictions / len(predictions))
Explanation: Evaluating Classifiers
Goals:
1. How to evaluate a machine learning model
2. Understand overfitting
3. Understand cross-validation
Introduction
Machine learning is different than other types of engineering in that every improvement you make will typically make your model better in some cases and worse in other cases. It's easy to make choices that seem like they should improve performance but are actually hurting things. It's very important to not just look at individual examples, but to measure the overall performance of your algorithm at every step.
Before we make any modifications to the classifier we built, we need to put in place a framework for measuring its accuracy.
First Attempt
We have 9092 labeled records. We can try running our algorithm on that data and seeing how many it can correctly predict.
End of explanation
# (Tweets 0 to 5999 are used for training data)
nb.fit(counts[0:6000], fixed_target[0:6000])
# See what the classifier predicts for some new tweets:
# (Tweets 6000 to 9091 are used for testing)
predictions = nb.predict(counts[6000:9092])
print(len(predictions))
correct_predictions = sum(predictions == fixed_target[6000:9092])
print('Percent correct: ', 100.0 * correct_predictions / 3092)
Explanation: The naive bayes algorithm gets 79.5% accuracy.
Does this seem like a good way to check the accuracy? It shouldn't! We tested our accuracy on the same data we used to fit our model. This is what is known as testing on training data and it's a cardinal sin in machine learning.
Let's try splitting our data. We'll train a model on the first 6000 tweets and then test it on the remaining 3092 tweets.
End of explanation
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
from sklearn import datasets
import matplotlib.pyplot as plt
from matplotlib import gridspec
%matplotlib inline
boston_houses = datasets.load_boston() # Load the boston housing price dataset
avg_rooms = boston_houses.data[:20, np.newaxis, 5] # load the average number of rooms for the first 20 records
price = boston_houses.target[:20] # load the price for the first 20 records
# initialize a model
model = GaussianProcessRegressor(
kernel=C(1.0, (1e-3, 1e3)) * RBF(10, (1e-2, 1e2)),
normalize_y=True)
# fit the model
model.fit(avg_rooms, price)
X = np.linspace(min(avg_rooms), max(avg_rooms), 1000).reshape(1000,1)
preds = model.predict(X)
plt.scatter(avg_rooms, price, color='black')
plt.plot(X, preds)
Explanation: Overfitting
Our accuracy measurement went down a lot - from 79% to 66%.
Two important questions to ask yourself:
1. Why is testing on training data likely to overestimate accuracy?
2. Why didn't our algorithm get 100% accuracy?
We just checked the accuracy of our algorithm on the data it was trained on. The algorithm could have memorized every tweet and just spit back what it memorized and gotten 100% accuracy.
For example, here is a model that has high accuracy on the training data and doesn't generalize well.
End of explanation
from sklearn.model_selection import cross_val_score
# we pass in model, feature-vector, sentiment-labels and set the number of folds to 10
scores = cross_val_score(nb, counts, fixed_target, cv=10)
print("Accuracy", scores)
print("Average Accuracy", scores.mean())
Explanation: This model goes through every point but wouldn't generalize well to other points.
A model like that, which does well on the training data but doesn't generalize, is doing something known as overfitting.
Overfitting is an incredibly common problem in machine learning.
Test/Train Splits
We held out around 30% of our tweets to test on. But we only have around 9000 tweets. Two questions to ask yourself:
Why might we get unreliable accuracy measurements if we held out 90% of our data as test data?
Why might we get unreliable accuracy measurements if we held out only 1% of our data as test data?
Pause for a second and think about this before reading on. Test your understanding of the code by trying these experiments.
If our held out testing set is too big our model doesn't have enough data to train on, so it will probably perform worse.
If our held out testing set is too small the measurement will be too noisy - by chance we might get a lot right or a lot wrong. A 70/30 test train split for smaller data sets is common. As data sets get bigger it's ok to hold out less data as a percentage.
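For a randomized split instead of slicing by position, scikit-learn provides a helper (a sketch; the 30% test size mirrors the split above and random_state is arbitrary):
from sklearn.model_selection import train_test_split
train_counts, test_counts, train_target, test_target = train_test_split(counts, fixed_target, test_size=0.3, random_state=0)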
Cross Validation
The best way to efficiently use all of our data to measure a model's accuracy is a technique called cross validation. The way it works is we randomly shuffle our data and then divide it into say 5 equally sized chunks which we call folds. We hold the first fold out and train on the other folds. We measure the accuracy of our model on the first fold. Then we hold out the second fold and train an entirely new model and measure its accuracy on the second fold. We hold out each fold and check its accuracy and then we average all of the accuracies together.
I find it easier to understand with a diagram.
<img src="images/cross-validation.png" width="600"/>
It's easy to do this in code with the scikit-learn library.
End of explanation
# Train with this data with a dummy classifier:
from sklearn.dummy import DummyClassifier
nb = DummyClassifier(strategy='most_frequent')
from sklearn.model_selection import cross_val_score
scores = cross_val_score(nb, counts, fixed_target, cv=10)
print(scores)
print(scores.mean())
Explanation: It looks like our accuracy is closer to 65%. Do you think this is good or bad?
Baselines
On some tasks, like predicting if the stock market will go up tomorrow or whether a roulette wheel will come up black on the next spin, a 65% accurate model might be incredibly effective and make us rich. On other tasks like predicting if there will be an earthquake tomorrow, a 65% accurate model could be embarrasingly bad.
A very important thing to consider when we evaluate the performance of our model is how well a very simple model would do. The simplest model would just guess randomly "No Emotion", "Positive", "Negative" or "Can't Tell" and have 25% accuracy. A slightly better model would always guess the most common sentiment.
We can use scikit-learn's dummy classifiers as baselines to compare against.
End of explanation |
13,976 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Import the necessary packages to read in the data, plot, and create a linear regression model
Step1: 2. Read in the hanford.csv file
Step2: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
Step3: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
Step4: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
Step5: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
Step6: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10 | Python Code:
import pandas as pd
%matplotlib inline
import matplotlib.pyplot as plt # package for doing plotting (necessary for adding the line)
import statsmodels.formula.api as smf
Explanation: 1. Import the necessary packages to read in the data, plot, and create a linear regression model
End of explanation
cd C:\Users\Harsha Devulapalli\Desktop\algorithms\class6
df=pd.read_csv("data/hanford.csv")
Explanation: 2. Read in the hanford.csv file
End of explanation
df.describe()
Explanation: <img src="images/hanford_variables.png">
3. Calculate the basic descriptive statistics on the data
End of explanation
df.corr()
df.plot(kind='scatter',x='Exposure',y='Mortality')
Explanation: 4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
End of explanation
lm = smf.ols(formula="Mortality~Exposure",data=df).fit()
lm.params
intercept, slope = lm.params
Explanation: 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
End of explanation
df.plot(kind="scatter",x="Exposure",y="Mortality")
plt.plot(df["Exposure"],slope*df["Exposure"]+intercept,"-",color="red")
r = df.corr()['Exposure']['Mortality']
r*r
Explanation: 6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
End of explanation
def predictor(exposure):
return intercept+float(exposure)*slope
predictor(10)
Explanation: 7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
End of explanation |
13,977 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2021 The TensorFlow Authors.
Step1: Migrate metrics and optimizers
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: and prepare some simple data for demonstration
Step3: TF1
Step4: Also, metrics could be added to estimator directly via tf.estimator.add_metrics().
Step5: TF2
Step6: With eager execution enabled, tf.keras.metrics.Metric instances can be directly used to evaluate numpy data or eager tensors. tf.keras.metrics.Metric objects are stateful containers. The metric value can be updated via metric.update_state(y_true, y_pred), and the result can be retrieved by metrics.result(). | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2021 The TensorFlow Authors.
End of explanation
import tensorflow as tf
import tensorflow.compat.v1 as tf1
Explanation: Migrate metrics and optimizers
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate/metrics_optimizers">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/metrics_optimizers.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/guide/migrate/metrics_optimizers.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/guide/migrate/metrics_optimizers.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In TF1, tf.metrics is the API namespace for all the metric functions. Each metric is a function that takes the labels and predictions as input parameters and returns the corresponding metric tensor as a result. In TF2, tf.keras.metrics contains all the metric functions and objects. The Metric object can be used with tf.keras.Model and tf.keras.layers.Layer to calculate metric values.
Setup
Let's start with a couple of necessary TensorFlow imports,
End of explanation
features = [[1., 1.5], [2., 2.5], [3., 3.5]]
labels = [0, 0, 1]
eval_features = [[4., 4.5], [5., 5.5], [6., 6.5]]
eval_labels = [0, 1, 1]
Explanation: and prepare some simple data for demonstration:
End of explanation
def _input_fn():
return tf1.data.Dataset.from_tensor_slices((features, labels)).batch(1)
def _eval_input_fn():
return tf1.data.Dataset.from_tensor_slices(
(eval_features, eval_labels)).batch(1)
def _model_fn(features, labels, mode):
logits = tf1.layers.Dense(2)(features)
predictions = tf.math.argmax(input=logits, axis=1)
loss = tf1.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
optimizer = tf1.train.AdagradOptimizer(0.05)
train_op = optimizer.minimize(loss, global_step=tf1.train.get_global_step())
accuracy = tf1.metrics.accuracy(labels=labels, predictions=predictions)
return tf1.estimator.EstimatorSpec(mode,
predictions=predictions,
loss=loss,
train_op=train_op,
eval_metric_ops={'accuracy': accuracy})
estimator = tf1.estimator.Estimator(model_fn=_model_fn)
estimator.train(_input_fn)
estimator.evaluate(_eval_input_fn)
Explanation: TF1: tf.compat.v1.metrics with Estimator
In TF1, metrics can be added to the EstimatorSpec as eval_metric_ops, and the op is generated via the metric functions defined in tf.metrics. You can follow the example to see how to use tf.metrics.accuracy.
End of explanation
def mean_squared_error(labels, predictions):
labels = tf.cast(labels, predictions.dtype)
return {"mean_squared_error":
tf1.metrics.mean_squared_error(labels=labels, predictions=predictions)}
estimator = tf1.estimator.add_metrics(estimator, mean_squared_error)
estimator.evaluate(_eval_input_fn)
Explanation: Also, metrics can be added to the estimator directly via tf.estimator.add_metrics().
End of explanation
dataset = tf.data.Dataset.from_tensor_slices((features, labels)).batch(1)
eval_dataset = tf.data.Dataset.from_tensor_slices(
(eval_features, eval_labels)).batch(1)
inputs = tf.keras.Input((2,))
logits = tf.keras.layers.Dense(2)(inputs)
predictions = tf.math.argmax(input=logits, axis=1)
model = tf.keras.models.Model(inputs, predictions)
optimizer = tf.keras.optimizers.Adagrad(learning_rate=0.05)
model.compile(optimizer, loss='mse', metrics=[tf.keras.metrics.Accuracy()])
model.evaluate(eval_dataset, return_dict=True)
Explanation: TF2: Keras Metrics API with tf.keras.Model
In TF2, tf.keras.metrics contains all the metric classes and functions. They are designed in an OOP style and integrate closely with the rest of the tf.keras API. All the metrics can be found in the tf.keras.metrics namespace, and there is usually a direct mapping between tf.compat.v1.metrics and tf.keras.metrics.
In the following example, the metrics are added in the model.compile() method. Users only need to create the metric instance, without specifying the label and prediction tensors. The Keras model will route the model output and label to the metrics object.
End of explanation
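To make the v1-to-v2 mapping above concrete, here is a minimal sketch (values chosen only for illustration) showing the Keras counterpart of tf.compat.v1.metrics.mean:
mean_metric = tf.keras.metrics.Mean()
mean_metric.update_state([1.0, 2.0, 3.0])
print(mean_metric.result().numpy())  # 2.0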
accuracy = tf.keras.metrics.Accuracy()
accuracy.update_state(y_true=[0, 0, 1, 1], y_pred=[0, 0, 0, 1])
accuracy.result().numpy()
accuracy.update_state(y_true=[0, 0, 1, 1], y_pred=[0, 0, 0, 0])
accuracy.update_state(y_true=[0, 0, 1, 1], y_pred=[1, 1, 0, 0])
accuracy.result().numpy()
Explanation: With eager execution enabled, tf.keras.metrics.Metric instances can be used directly to evaluate numpy data or eager tensors. tf.keras.metrics.Metric objects are stateful containers. The metric value can be updated via metric.update_state(y_true, y_pred), and the result can be retrieved by metric.result().
End of explanation |
13,978 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Outline
Why vectorisation
Vectorisation examples
Scalar class - recap
Class with vectorised weights
Class with vectorised weights and inputs
Exercise
Setup
Step1: Testing vectorisation
Step2: Generate data
Step3: Multi class classification
Step4: Scalar Version
Step5: Weight Vectorised Version
Step6: Input + Weight Vectorised Version | Python Code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, mean_squared_error, log_loss
from tqdm import tqdm_notebook
import seaborn as sns
import imageio
import time
from IPython.display import HTML
from sklearn.preprocessing import OneHotEncoder
from sklearn.datasets import make_blobs
my_cmap = matplotlib.colors.LinearSegmentedColormap.from_list("", ["red","yellow","green"])
np.random.seed(0)
Explanation: Outline
Why vectorisation
Vectorisation examples
Scalar class - recap
Class with vectorised weights
Class with vectorised weights and inputs
Exercise
Setup
End of explanation
N = 100
M = 200
a = np.random.randn(N, M)
b = np.random.randn(N, M)
c = np.zeros((N, M))
%%time
for i in range(N):
for j in range(M):
c[i, j] = a[i, j] + b[i, j]
%%time
c = a + b
%%time
for i in range(N):
for j in range(M):
c[i, j] = np.sin(a[i, j] + 1)
%%time
c = np.sin(a + 1)
Explanation: Testing vectorisation
End of explanation
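The same pattern holds for matrix-vector products: a single NumPy call replaces a double loop and keeps the work in optimized native code. A small sketch reusing the array a defined above:
v = np.random.randn(M)
out_loop = np.array([sum(a[i, j] * v[j] for j in range(M)) for i in range(N)])  # explicit loops
out_vec = np.matmul(a, v)                                                       # vectorised
print(np.allclose(out_loop, out_vec))  # True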
data, labels = make_blobs(n_samples=1000, centers=4, n_features=2, random_state=0)
print(data.shape, labels.shape)
plt.scatter(data[:,0], data[:,1], c=labels, cmap=my_cmap)
plt.show()
labels_orig = labels
labels = np.mod(labels_orig, 2)
plt.scatter(data[:,0], data[:,1], c=labels, cmap=my_cmap)
plt.show()
Explanation: Generate data
End of explanation
X_train, X_val, Y_train, Y_val = train_test_split(data, labels_orig, stratify=labels_orig, random_state=0)
print(X_train.shape, X_val.shape, labels_orig.shape)
enc = OneHotEncoder()
# 0 -> (1, 0, 0, 0), 1 -> (0, 1, 0, 0), 2 -> (0, 0, 1, 0), 3 -> (0, 0, 0, 1)
y_OH_train = enc.fit_transform(np.expand_dims(Y_train,1)).toarray()
y_OH_val = enc.fit_transform(np.expand_dims(Y_val,1)).toarray()
print(y_OH_train.shape, y_OH_val.shape)
W1 = np.random.randn(2,2)
W2 = np.random.randn(2,4)
print(W1)
print(W2)
Explanation: Multi class classification
End of explanation
class FF_MultiClass_Scalar:
def __init__(self, W1, W2):
self.w1 = W1[0][0].copy()
self.w2 = W1[1][0].copy()
self.w3 = W1[0][1].copy()
self.w4 = W1[1][1].copy()
self.w5 = W2[0][0].copy()
self.w6 = W2[1][0].copy()
self.w7 = W2[0][1].copy()
self.w8 = W2[1][1].copy()
self.w9 = W2[0][2].copy()
self.w10 = W2[1][2].copy()
self.w11 = W2[0][3].copy()
self.w12 = W2[1][3].copy()
self.b1 = 0
self.b2 = 0
self.b3 = 0
self.b4 = 0
self.b5 = 0
self.b6 = 0
def sigmoid(self, x):
return 1.0/(1.0 + np.exp(-x))
def forward_pass(self, x):
# input layer
self.x1, self.x2 = x
# hidden layer
self.a1 = self.w1*self.x1 + self.w2*self.x2 + self.b1
self.h1 = self.sigmoid(self.a1)
self.a2 = self.w3*self.x1 + self.w4*self.x2 + self.b2
self.h2 = self.sigmoid(self.a2)
# output layer
self.a3 = self.w5*self.h1 + self.w6*self.h2 + self.b3
self.a4 = self.w7*self.h1 + self.w8*self.h2 + self.b4
self.a5 = self.w9*self.h1 + self.w10*self.h2 + self.b5
        self.a6 = self.w11*self.h1 + self.w12*self.h2 + self.b6
sum_exps = np.sum([np.exp(self.a3), np.exp(self.a4), np.exp(self.a5), np.exp(self.a6)])
self.h3 = np.exp(self.a3)/sum_exps
self.h4 = np.exp(self.a4)/sum_exps
self.h5 = np.exp(self.a5)/sum_exps
self.h6 = np.exp(self.a6)/sum_exps
return np.array([self.h3, self.h4, self.h5, self.h6])
def grad(self, x, y):
self.forward_pass(x)
self.y1, self.y2, self.y3, self.y4 = y
self.da3 = (self.h3-self.y1)
self.da4 = (self.h4-self.y2)
self.da5 = (self.h5-self.y3)
self.da6 = (self.h6-self.y4)
self.dw5 = self.da3*self.h1
self.dw6 = self.da3*self.h2
self.db3 = self.da3
self.dw7 = self.da4*self.h1
self.dw8 = self.da4*self.h2
self.db4 = self.da4
self.dw9 = self.da5*self.h1
self.dw10 = self.da5*self.h2
self.db5 = self.da5
self.dw11 = self.da6*self.h1
self.dw12 = self.da6*self.h2
self.db6 = self.da6
self.dh1 = self.da3*self.w5 + self.da4*self.w7 + self.da5*self.w9 + self.da6*self.w11
self.dh2 = self.da3*self.w6 + self.da4*self.w8 + self.da5*self.w10 + self.da6*self.w12
self.da1 = self.dh1 * self.h1*(1-self.h1)
self.da2 = self.dh2 * self.h2*(1-self.h2)
self.dw1 = self.da1*self.x1
self.dw2 = self.da1*self.x2
self.db1 = self.da1
self.dw3 = self.da2*self.x1
self.dw4 = self.da2*self.x2
self.db2 = self.da2
def fit(self, X, Y, epochs=1, learning_rate=1, display_loss=False, display_weight=False):
if display_loss:
loss = {}
for i in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
dw1, dw2, dw3, dw4, dw5, dw6, dw7, dw8, dw9, dw10, dw11, dw12, db1, db2, db3, db4, db5, db6 = [0]*18
for x, y in zip(X, Y):
self.grad(x, y)
dw1 += self.dw1
dw2 += self.dw2
dw3 += self.dw3
dw4 += self.dw4
dw5 += self.dw5
dw6 += self.dw6
dw7 += self.dw7
dw8 += self.dw8
dw9 += self.dw9
dw10 += self.dw10
dw11 += self.dw11
dw12 += self.dw12
db1 += self.db1
db2 += self.db2
db3 += self.db3
db4 += self.db4
                db5 += self.db5
                db6 += self.db6
m = X.shape[0]
self.w1 -= (learning_rate * (dw1 / m))
self.w2 -= (learning_rate * (dw2 / m))
self.w3 -= (learning_rate * (dw3 / m))
self.w4 -= (learning_rate * (dw4 / m))
self.w5 -= (learning_rate * (dw5 / m))
self.w6 -= (learning_rate * (dw6 / m))
self.w7 -= (learning_rate * (dw7 / m))
self.w8 -= (learning_rate * (dw8 / m))
self.w9 -= (learning_rate * (dw9 / m))
self.w10 -= (learning_rate * (dw10 / m))
self.w11 -= (learning_rate * (dw11 / m))
self.w12 -= (learning_rate * (dw12 / m))
self.b1 -= (learning_rate * (db1 / m))
self.b2 -= (learning_rate * (db2 / m))
self.b3 -= (learning_rate * (db3 / m))
self.b4 -= (learning_rate * (db4 / m))
self.b5 -= (learning_rate * (db5 / m))
self.b6 -= (learning_rate * (db6 / m))
if display_loss:
Y_pred = self.predict(X)
loss[i] = log_loss(np.argmax(Y, axis=1), Y_pred)
if display_loss:
Wt1 = [[self.w1, self.w3], [self.w2, self.w4]]
Wt2 = [[self.w5, self.w6, self.w7, self.w8], [self.w9, self.w10, self.w11, self.w12]]
plt.plot(loss.values())
plt.xlabel('Epochs')
plt.ylabel('Log Loss')
plt.show()
def predict(self, X):
Y_pred = []
for x in X:
y_pred = self.forward_pass(x)
Y_pred.append(y_pred)
return np.array(Y_pred)
Explanation: Scalar Version
End of explanation
class FF_MultiClass_WeightVectorised:
def __init__(self, W1, W2):
self.W1 = W1.copy()
self.W2 = W2.copy()
self.B1 = np.zeros((1,2))
self.B2 = np.zeros((1,4))
def sigmoid(self, x):
return 1.0/(1.0 + np.exp(-x))
def softmax(self, x):
exps = np.exp(x)
return exps / np.sum(exps)
def forward_pass(self, x):
x = x.reshape(1, -1) # (1, 2)
self.A1 = np.matmul(x,self.W1) + self.B1 # (1, 2) * (2, 2) -> (1, 2)
self.H1 = self.sigmoid(self.A1) # (1, 2)
self.A2 = np.matmul(self.H1, self.W2) + self.B2 # (1, 2) * (2, 4) -> (1, 4)
self.H2 = self.softmax(self.A2) # (1, 4)
return self.H2
def grad_sigmoid(self, x):
return x*(1-x)
def grad(self, x, y):
self.forward_pass(x)
x = x.reshape(1, -1) # (1, 2)
y = y.reshape(1, -1) # (1, 4)
self.dA2 = self.H2 - y # (1, 4)
self.dW2 = np.matmul(self.H1.T, self.dA2) # (2, 1) * (1, 4) -> (2, 4)
self.dB2 = self.dA2 # (1, 4)
self.dH1 = np.matmul(self.dA2, self.W2.T) # (1, 4) * (4, 2) -> (1, 2)
self.dA1 = np.multiply(self.dH1, self.grad_sigmoid(self.H1)) # -> (1, 2)
self.dW1 = np.matmul(x.T, self.dA1) # (2, 1) * (1, 2) -> (2, 2)
self.dB1 = self.dA1 # (1, 2)
def fit(self, X, Y, epochs=1, learning_rate=1, display_loss=False):
if display_loss:
loss = {}
for i in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
dW1 = np.zeros((2,2))
dW2 = np.zeros((2,4))
dB1 = np.zeros((1,2))
dB2 = np.zeros((1,4))
for x, y in zip(X, Y):
self.grad(x, y)
dW1 += self.dW1
dW2 += self.dW2
dB1 += self.dB1
dB2 += self.dB2
m = X.shape[0]
self.W2 -= learning_rate * (dW2/m)
self.B2 -= learning_rate * (dB2/m)
self.W1 -= learning_rate * (dW1/m)
self.B1 -= learning_rate * (dB1/m)
if display_loss:
Y_pred = self.predict(X)
loss[i] = log_loss(np.argmax(Y, axis=1), Y_pred)
if display_loss:
plt.plot(loss.values())
plt.xlabel('Epochs')
plt.ylabel('Log Loss')
plt.show()
def predict(self, X):
Y_pred = []
for x in X:
y_pred = self.forward_pass(x)
Y_pred.append(y_pred)
return np.array(Y_pred).squeeze()
Explanation: Weight Vectorised Version
End of explanation
class FF_MultiClass_InputWeightVectorised:
def __init__(self, W1, W2):
self.W1 = W1.copy()
self.W2 = W2.copy()
self.B1 = np.zeros((1,2))
self.B2 = np.zeros((1,4))
def sigmoid(self, X):
return 1.0/(1.0 + np.exp(-X))
def softmax(self, X):
exps = np.exp(X)
return exps / np.sum(exps, axis=1).reshape(-1,1)
def forward_pass(self, X):
self.A1 = np.matmul(X,self.W1) + self.B1 # (N, 2) * (2, 2) -> (N, 2)
self.H1 = self.sigmoid(self.A1) # (N, 2)
self.A2 = np.matmul(self.H1, self.W2) + self.B2 # (N, 2) * (2, 4) -> (N, 4)
self.H2 = self.softmax(self.A2) # (N, 4)
return self.H2
def grad_sigmoid(self, X):
return X*(1-X)
def grad(self, X, Y):
self.forward_pass(X)
m = X.shape[0]
self.dA2 = self.H2 - Y # (N, 4) - (N, 4) -> (N, 4)
self.dW2 = np.matmul(self.H1.T, self.dA2) # (2, N) * (N, 4) -> (2, 4)
self.dB2 = np.sum(self.dA2, axis=0).reshape(1, -1) # (N, 4) -> (1, 4)
self.dH1 = np.matmul(self.dA2, self.W2.T) # (N, 4) * (4, 2) -> (N, 2)
self.dA1 = np.multiply(self.dH1, self.grad_sigmoid(self.H1)) # (N, 2) .* (N, 2) -> (N, 2)
self.dW1 = np.matmul(X.T, self.dA1) # (2, N) * (N, 2) -> (2, 2)
self.dB1 = np.sum(self.dA1, axis=0).reshape(1, -1) # (N, 2) -> (1, 2)
def fit(self, X, Y, epochs=1, learning_rate=1, display_loss=False):
if display_loss:
loss = {}
for i in tqdm_notebook(range(epochs), total=epochs, unit="epoch"):
self.grad(X, Y) # X -> (N, 2), Y -> (N, 4)
m = X.shape[0]
self.W2 -= learning_rate * (self.dW2/m)
self.B2 -= learning_rate * (self.dB2/m)
self.W1 -= learning_rate * (self.dW1/m)
self.B1 -= learning_rate * (self.dB1/m)
if display_loss:
Y_pred = self.predict(X)
loss[i] = log_loss(np.argmax(Y, axis=1), Y_pred)
if display_loss:
plt.plot(loss.values())
plt.xlabel('Epochs')
plt.ylabel('Log Loss')
plt.show()
def predict(self, X):
Y_pred = self.forward_pass(X)
return np.array(Y_pred).squeeze()
models_init = [FF_MultiClass_Scalar(W1, W2), FF_MultiClass_WeightVectorised(W1, W2),FF_MultiClass_InputWeightVectorised(W1, W2)]
models = []
for idx, model in enumerate(models_init, start=1):
tic = time.time()
ffsn_multi_specific = model
ffsn_multi_specific.fit(X_train,y_OH_train,epochs=2000,learning_rate=.5,display_loss=True)
models.append(ffsn_multi_specific)
toc = time.time()
print("Time taken by model {}: {}".format(idx, toc-tic))
for idx, model in enumerate(models, start=1):
Y_pred_train = model.predict(X_train)
Y_pred_train = np.argmax(Y_pred_train,1)
Y_pred_val = model.predict(X_val)
Y_pred_val = np.argmax(Y_pred_val,1)
accuracy_train = accuracy_score(Y_pred_train, Y_train)
accuracy_val = accuracy_score(Y_pred_val, Y_val)
print("Model {}".format(idx))
print("Training accuracy", round(accuracy_train, 2))
print("Validation accuracy", round(accuracy_val, 2))
plt.scatter(X_train[:,0], X_train[:,1], c=Y_pred_train, cmap=my_cmap, s=15*(np.abs(np.sign(Y_pred_train-Y_train))+.1))
plt.show()
Explanation: Input + Weight Vectorised Version
End of explanation |
13,979 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using LAMMPS with iPython and Jupyter
LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up.
Installation
Download the latest version of LAMMPS into a folder (we will calls this $LAMMPS_DIR from now on)
Compile LAMMPS as a shared library and enable PNG support
bash
cd $LAMMPS_DIR/src
make yes-molecule
python2 Make.py -m mpi -png -a file
make mode=shlib auto
Create a python virtualenv
bash
virtualenv testing
source testing/bin/activate
Inside the virtualenv install the lammps package
(testing) cd $LAMMPS_DIR/python
(testing) python install.py
(testing) cd # move to your working directory
Install jupyter and ipython in the virtualenv
bash
(testing) pip install ipython jupyter
Run jupyter notebook
bash
(testing) jupyter notebook
Example
Step1: Queries about LAMMPS simulation
Step2: Working with LAMMPS Variables
Step3: Accessing Atom data | Python Code:
from lammps import IPyLammps
L = IPyLammps()
# 2d circle of particles inside a box with LJ walls
import math
b = 0
x = 50
y = 20
d = 20
# careful not to slam into wall too hard
v = 0.3
w = 0.08
L.units("lj")
L.dimension(2)
L.atom_style("bond")
L.boundary("f f p")
L.lattice("hex", 0.85)
L.region("box", "block", 0, x, 0, y, -0.5, 0.5)
L.create_box(1, "box", "bond/types", 1, "extra/bond/per/atom", 6)
L.region("circle", "sphere", d/2.0+1.0, d/2.0/math.sqrt(3.0)+1, 0.0, d/2.0)
L.create_atoms(1, "region", "circle")
L.mass(1, 1.0)
L.velocity("all create 0.5 87287 loop geom")
L.velocity("all set", v, w, 0, "sum yes")
L.pair_style("lj/cut", 2.5)
L.pair_coeff(1, 1, 10.0, 1.0, 2.5)
L.bond_style("harmonic")
L.bond_coeff(1, 10.0, 1.2)
L.create_bonds("all", "all", 1, 1.0, 1.5)
L.neighbor(0.3, "bin")
L.neigh_modify("delay", 0, "every", 1, "check yes")
L.fix(1, "all", "nve")
L.fix(2, "all wall/lj93 xlo 0.0 1 1 2.5 xhi", x, "1 1 2.5")
L.fix(3, "all wall/lj93 ylo 0.0 1 1 2.5 yhi", y, "1 1 2.5")
L.image(zoom=1.8)
L.thermo_style("custom step temp epair press")
L.thermo(100)
output = L.run(40000)
L.image(zoom=1.8)
Explanation: Using LAMMPS with iPython and Jupyter
LAMMPS can be run interactively using iPython easily. This tutorial shows how to set this up.
Installation
Download the latest version of LAMMPS into a folder (we will calls this $LAMMPS_DIR from now on)
Compile LAMMPS as a shared library and enable PNG support
bash
cd $LAMMPS_DIR/src
make yes-molecule
python2 Make.py -m mpi -png -a file
make mode=shlib auto
Create a python virtualenv
bash
virtualenv testing
source testing/bin/activate
Inside the virtualenv install the lammps package
(testing) cd $LAMMPS_DIR/python
(testing) python install.py
(testing) cd # move to your working directory
Install jupyter and ipython in the virtualenv
bash
(testing) pip install ipython jupyter
Run jupyter notebook
bash
(testing) jupyter notebook
Example
End of explanation
L.system
L.system.natoms
L.system.nbonds
L.system.nbondtypes
L.communication
L.fixes
L.computes
L.dumps
L.groups
Explanation: Queries about LAMMPS simulation
End of explanation
L.variable("a index 2")
L.variables
L.variable("t equal temp")
L.variables
import sys
if sys.version_info < (3, 0):
# In Python 2 'print' is a restricted keyword, which is why you have to use the lmp_print function instead.
x = float(L.lmp_print('"${a}"'))
else:
# In Python 3 the print function can be redefined.
# x = float(L.print('"${a}"')")
# To avoid a syntax error in Python 2 executions of this notebook, this line is packed into an eval statement
x = float(eval("L.print('\"${a}\"')"))
x
L.variables['t'].value
L.eval("v_t/2.0")
L.variable("b index a b c")
L.variables['b'].value
L.eval("v_b")
L.variables['b'].definition
L.variable("i loop 10")
L.variables['i'].value
L.next("i")
L.variables['i'].value
L.eval("ke")
Explanation: Working with LAMMPS Variables
End of explanation
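As a small illustration combining only the calls shown above, an equal-style expression evaluated with eval() can be normalised by a system query, for example kinetic energy per atom:
ke_per_atom = L.eval("ke") / L.system.natoms
print(ke_per_atom)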
L.atoms[0]
[x for x in dir(L.atoms[0]) if not x.startswith('__')]
L.atoms[0].position
L.atoms[0].id
L.atoms[0].velocity
L.atoms[0].force
L.atoms[0].type
Explanation: Accessing Atom data
End of explanation |
13,980 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Relational database
A lot of data can be stored in tabels
Table Human
|ID|Name| Age |
|--|----|-----|
|1|Anton|34|
|2|Morten|37|
Table Course
|ID|Subject| Hours |
|--|----|-----|
|1|Python|30|
|2|Matlab|0|
Table Teacher (wrong)
|ID|HumanName|HumanAge|CourseSubject|CourseHours|
|--|----|-----|--|--|
|1|Anton|34|Python|30|
|2|Morten|37|Python|30|
Table Teacher (correct)
|ID|HumanID|CourseID|
|--|----|-----|
|1|1|1|
|2|2|1|
How to present structure of tables
Table Human
|Field|Type|
|--|----|
|ID|Integer|
|Name|String|
|Age|Integer|
Table Course
|Field|Type|
|--|----|
|ID|Integer|
|Subject|String|
|Hours|Integer|
Table Teacher
|Field|Type|
|--|----|
|ID|Integer|
|human|Integer|
|course|Integer|
How to define structure of tables in Python? Django Models!
Step1: How to use the Django Models?
1. Let's add data to database
Step2: 2. Now we can quit python and enter again
Step3: 3. Let's add more data and make a relation
Step4: Now the database contains
three tables
Step5: We can also search the database using a siple syntax | Python Code:
# models.py
from django.db import models
class Human(models.Model):
''' Description of any Human'''
name = models.CharField(max_length=200)
age = models.IntegerField()
objects = models.Manager()
def __str__(self):
''' Nicely print Human object '''
return u"I'm %s, %d years old" % (self.name, self.age)
class Course(models.Model):
''' Description of teaching course'''
subject = models.CharField(max_length=200)
hours = models.IntegerField()
objects = models.Manager()
def __str__(self):
''' Nicely print Course object '''
return 'Course in %s, %d hrs long' % (self.subject, self.hours)
class Teacher(models.Model):
''' Description of Teacher '''
human = models.ForeignKey(Human)
course = models.ForeignKey(Course)
objects = models.Manager()
def __str__(self):
''' Nicely print Teacher object '''
return '%s teaching %s' % (self.human.name, self.course.subject)
Explanation: Relational database
A lot of data can be stored in tabels
Table Human
|ID|Name| Age |
|--|----|-----|
|1|Anton|34|
|2|Morten|37|
Table Course
|ID|Subject| Hours |
|--|----|-----|
|1|Python|30|
|2|Matlab|0|
Table Teacher (wrong)
|ID|HumanName|HumanAge|CourseSubject|CourseHours|
|--|----|-----|--|--|
|1|Anton|34|Python|30|
|2|Morten|37|Python|30|
Table Teacher (correct)
|ID|HumanID|CourseID|
|--|----|-----|
|1|1|1|
|2|2|1|
How to present structure of tables
Table Human
|Field|Type|
|--|----|
|ID|Integer|
|Name|String|
|Age|Integer|
Table Course
|Field|Type|
|--|----|
|ID|Integer|
|Subject|String|
|Hours|Integer|
Table Teacher
|Field|Type|
|--|----|
|ID|Integer|
|human|Integer|
|course|Integer|
How to define structure of tables in Python? Django Models!
End of explanation
from our_project.models import Human
# create a Human and add data
h = Human()
h.name = 'Anton'
h.age = 34
# save data to database
h.save()
Explanation: How to use the Django Models?
1. Let's add data to database
End of explanation
from our_project.models import Human
# fetch all Humans from the database
humans = Human.objects.all()
# get the first one (the only one so far)
h0 = humans[0]
Explanation: 2. Now we can quit python and enter again
End of explanation
from our_project.models import Course, Teacher
# we create and save a course
c = Course()
c.subject = 'Python'
c.hours = 30
c.save()
# we create a teacher
t = Teacher()
# we create relations
t.human = h0
t.course = c
t.save()
Explanation: 3. Let's add more data and make a relation
End of explanation
teachers = Teacher.objects.all()
t0 = teachers[0]
print t0.human.name
print t0.human.age
print t0.course.subject
print t0.course.hours
Explanation: Now the database contains
three tables: Human, Course and Teacher
two relations: Teacher --> Human and Teacher --> Course
Let's fetch the data from the database.
End of explanation
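Related objects can also be queried from either side of the relation; a minimal sketch using the standard queryset lookups get() and filter():
python_course = Course.objects.get(subject='Python')
for t in Teacher.objects.filter(course=python_course):
    print t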
theTeacher = Teacher.objects.filter(human__name__contains='An')
theTeacher[0]
Explanation: We can also search the database using a simple syntax
End of explanation |
13,981 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A simple DNN model built in Keras.
In this notebook, we will use the ML datasets we read in with our Keras pipeline earlier and build our Keras DNN to predict the fare amount for NYC taxi cab rides.
Learning objectives
Review how to read in CSV file data using tf.data
Specify input, hidden, and output layers in the DNN architecture
Review and visualize the final DNN shape
Train the model locally and visualize the loss curves
Deploy and predict with the model using Cloud AI Platform
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
Step1: Locating the CSV files
We will start with the CSV files that we wrote out in the first notebook of this sequence. Just so you don't have to run the notebook, we saved a copy in ../../data
Step2: Use tf.data to read the CSV files
We wrote these cells in the third notebook of this sequence where we created a data pipeline with Keras.
First let's define our columns of data, which column we're predicting for, and the default values.
Step3: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
Step4: Build a DNN with Keras
Now let's build the Deep Neural Network (DNN) model in Keras and specify the input and hidden layers. We will print out the DNN architecture and then visualize it later on.
Step5: Visualize the DNN
We can visualize the DNN using the Keras plot_model utility.
Step6: Train the model
To train the model, simply call model.fit().
Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
Step7: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
Step8: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for.
Step9: Of course, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
Export the model for serving
Let's export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
Step10: Deploy the model to AI Platform
Next, we will use the gcloud ai-platform command to create a new version for our taxifare model and give it the version name of dnn.
Deploying the model will take 5 - 10 minutes.
Step11: Monitor the model creation at GCP Console > AI Platform and once the model version dnn is created, proceed to the next cell.
Predict with model using gcloud ai-platform predict
To predict with the model, we first need to create some data that the model hasn't seen before. Let's predict for a new taxi cab ride for you and two friends going from from Kips Bay and heading to Midtown Manhattan for a total distance of 1.3 miles. How much would that cost? | Python Code:
%%bash
export PROJECT=$(gcloud config list project --format "value(core.project)")
echo "Your current GCP Project Name is: "$PROJECT
import os, json, math
import numpy as np
import shutil
import tensorflow as tf
print("TensorFlow version: ",tf.version.VERSION)
PROJECT = "your-gcp-project-here" # REPLACE WITH YOUR PROJECT NAME
REGION = "us-central1" # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# Do not change these
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["BUCKET"] = PROJECT # DEFAULT BUCKET WILL BE PROJECT ID
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # SET TF ERROR LOG VERBOSITY
if PROJECT == "your-gcp-project-here":
print("Don't forget to update your PROJECT name! Currently:", PROJECT)
%%bash
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${PROJECT}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not re-create it. \n\nHere are your buckets:"
gsutil ls
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${PROJECT}
echo "\nHere are your current buckets:"
gsutil ls
fi
Explanation: A simple DNN model built in Keras.
In this notebook, we will use the ML datasets we read in with our Keras pipeline earlier and build our Keras DNN to predict the fare amount for NYC taxi cab rides.
Learning objectives
Review how to read in CSV file data using tf.data
Specify input, hidden, and output layers in the DNN architecture
Review and visualize the final DNN shape
Train the model locally and visualize the loss curves
Deploy and predict with the model using Cloud AI Platform
Each learning objective will correspond to a #TODO in this student lab notebook -- try to complete this notebook first and then review the solution notebook.
End of explanation
!ls -l ../../data/*.csv
Explanation: Locating the CSV files
We will start with the CSV files that we wrote out in the first notebook of this sequence. Just so you don't have to run the notebook, we saved a copy in ../../data
End of explanation
CSV_COLUMNS = ['fare_amount', 'pickup_datetime',
'pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'key']
# TODO 1: Specify the LABEL_COLUMN name you are predicting for below:
LABEL_COLUMN = ''
DEFAULTS = [[0.0],['na'],[0.0],[0.0],[0.0],[0.0],[0.0],['na']]
Explanation: Use tf.data to read the CSV files
We wrote these cells in the third notebook of this sequence where we created a data pipeline with Keras.
First let's define our columns of data, which column we're predicting for, and the default values.
End of explanation
def features_and_labels(row_data):
for unwanted_col in ['pickup_datetime', 'key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (
# TODO 1: Complete the four tf.data.experimental.make_csv_dataset options
# Choose from and correctly order: batch_size, CSV_COLUMNS, DEFAULTS, pattern
tf.data.experimental.make_csv_dataset() # <--- fill-in options
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
Explanation: Next, let's define our features we want to use and our label(s) and then load in the dataset for training.
End of explanation
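One possible completion of TODO 1 (a sketch, not necessarily the official solution): inside load_dataset, the four options are the file pattern, the batch size, the column names and the default values, in that order:
dataset = tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS).map(features_and_labels)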
## Build a simple Keras DNN using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model():
# TODO 2: Specify the five input columns
INPUT_COLS = []
# input layer
inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in INPUT_COLS
}
feature_columns = {
colname : tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
dnn_inputs = tf.keras.layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of [32, 8] just in like the BQML DNN
# TODO 2: Create two hidden layers [32,8] with relu activation. Name them h1 and h2
# Tip: Start with h1 = tf.keras.layers.dense
h1 = # complete
h2 = # complete
# final output is a linear activation because this is regression
# TODO 2: Create an output layer with linear activation and name it 'fare'
output =
# TODO 2: Use tf.keras.models.Model and create your model with inputs and output
model =
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our DNN architecture so far:\n")
model = build_dnn_model()
print(model.summary())
Explanation: Build a DNN with Keras
Now let's build the Deep Neural Network (DNN) model in Keras and specify the input and hidden layers. We will print out the DNN architecture and then visualize it later on.
End of explanation
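One possible completion of TODO 2 (a sketch, not necessarily the official solution), placed inside build_dnn_model: the five input columns are the pickup/dropoff coordinates plus passenger_count, and the layers follow the Functional API pattern LayerConstructor()(inputs):
INPUT_COLS = ['pickup_longitude', 'pickup_latitude', 'dropoff_longitude', 'dropoff_latitude', 'passenger_count']
h1 = tf.keras.layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = tf.keras.layers.Dense(8, activation='relu', name='h2')(h1)
output = tf.keras.layers.Dense(1, activation='linear', name='fare')(h2)
model = tf.keras.models.Model(inputs, output)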
# TODO 3: Use tf.keras.utils.plot_model() to create a dnn_model.png of your architecture
# Tip: For rank direction, choose Left Right (rankdir='LR')
Explanation: Visualize the DNN
We can visualize the DNN using the Keras plot_model utility.
End of explanation
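A possible completion of TODO 3 (sketch), writing the architecture diagram to dnn_model.png with a left-to-right layout:
tf.keras.utils.plot_model(model=model, to_file='dnn_model.png', show_shapes=False, rankdir='LR')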
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('../../data/taxi-train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('../../data/taxi-valid*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
# TODO 4: Pass in the correct parameters to train your model
history = model.fit(
)
Explanation: Train the model
To train the model, simply call model.fit().
Note that we should really use many more NUM_TRAIN_EXAMPLES (i.e. a larger dataset). We shouldn't make assumptions about the quality of the model based on training/evaluating it on a small sample of the full data.
End of explanation
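A possible completion of TODO 4 (sketch), passing the training dataset, the validation dataset and the epoch/step settings defined above:
history = model.fit(trainds,
                    validation_data=evalds,
                    epochs=NUM_EVALS,
                    steps_per_epoch=steps_per_epoch)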
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
Explanation: Visualize the model loss curve
Next, we will use matplotlib to draw the model's loss curves for training and validation.
End of explanation
model.predict({
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
}, steps=1)
Explanation: Predict with the model locally
To predict with Keras, you simply call model.predict() and pass in the cab ride you want to predict the fare amount for.
End of explanation
import shutil, os, datetime
OUTPUT_DIR = './export/savedmodel'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
!saved_model_cli show --tag_set serve --signature_def serving_default --dir {EXPORT_PATH}
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
Explanation: Of course, this is not realistic, because we can't expect client code to have a model object in memory. We'll have to export our model to a file, and expect client code to instantiate the model from that exported file.
Export the model for serving
Let's export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc.
End of explanation
%%bash
PROJECT=${PROJECT}
BUCKET=${BUCKET}
REGION=${REGION}
MODEL_NAME=taxifare
VERSION_NAME=dnn
if [[ $(gcloud ai-platform models list --format='value(name)' | grep $MODEL_NAME) ]]; then
echo "The model named $MODEL_NAME already exists."
else
# create model
echo "Creating $MODEL_NAME model now."
gcloud ai-platform models create --regions=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' | grep $VERSION_NAME) ]]; then
echo "Deleting already the existing model $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
# TODO 5: Create the model using gcloud ai-platform predict
# Refer to: https://cloud.google.com/sdk/gcloud/reference/ai-platform/predict
gcloud ai-platform versions create # complete the missing parameters
Explanation: Deploy the model to AI Platform
Next, we will use the gcloud ai-platform command to create a new version for our taxifare model and give it the version name of dnn.
Deploying the model will take 5 - 10 minutes.
End of explanation
%%writefile input.json
{"pickup_longitude": -73.982683, "pickup_latitude": 40.742104,"dropoff_longitude": -73.983766,"dropoff_latitude": 40.755174,"passenger_count": 3.0}
!gcloud ai-platform predict --model taxifare --json-instances input.json --version dnn
Explanation: Monitor the model creation at GCP Console > AI Platform and once the model version dnn is created, proceed to the next cell.
Predict with model using gcloud ai-platform predict
To predict with the model, we first need to create some data that the model hasn't seen before. Let's predict for a new taxi cab ride for you and two friends going from from Kips Bay and heading to Midtown Manhattan for a total distance of 1.3 miles. How much would that cost?
End of explanation |
13,982 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Writing to files
Step1: Unicode in brief
always put a u before string literals (u"")
see the Unicode HOWTO in the references below
Step2: Reading and writing files with Unicode content
Step3: Classes and instances
Step4: DB-API
Step5: DB-API exercise
Make sure your management application
Step6: References python 2.7
The Tutorial
Coding Style
The builtin functions (in ipython >>> help(__builtins__))
The HOWTOs (including "Unicode HOWTO" and "Idioms and Anti-Idioms in Python")
The absolute minimum every developer must know about unicode and character sets no excuses!
Pragmatic Unicode
Types, operators and comparisons
String format mini-language
DB-API 2.0 (PEP 249)
Variable scope (blog)
Testing with PyTest
Feedback is welcome
privati via [email protected] , cell | Python Code:
with open(fname, "wb") as f:
f.write(data)
with open(fname, "rb") as f:
    rows = f.readlines()
Explanation: Writing to files
End of explanation
u"Papa" + u"aè"
Explanation: Unicode in brief
always put a u before string literals (u"")
see the Unicode HOWTO in the references below
End of explanation
import codecs
with codecs.open('unicode.rst', encoding='utf-8', mode='w+') as f:
f.write(u'\u4500 blah blah blah\n')
f.seek(0)
print repr(f.readline()[:1])
with codecs.open('unicode.rst', encoding='utf-8') as f:
for line in f:
print ("AAA", line)
The most important tip is:
Software should only work with Unicode strings internally, converting to a particular encoding on output.
Explanation: Reading and writing files with Unicode content
End of explanation
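A tiny round-trip sketch of the "unicode internally, encode on output" tip (Python 2 syntax, UTF-8 assumed): decode bytes on input, work with unicode, encode only when writing out:
raw = "caff\xc3\xa8"           # UTF-8 encoded bytes coming from outside
text = raw.decode("utf-8")     # unicode object used internally
out = text.encode("utf-8")     # bytes again, only at output time
print repr(text), repr(out)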
class Greeter(object):
def __init__(self, name, s="Ferroni"):
self.name = name
self._surname = s
def hello(self):
print("Hello {0.name}".format(self))
class SayGoodBye(object):
def __init__(self, name, s="Ferroni"):
self.name = name
self._surname = s
def bye(self):
print("Bye {0.name}".format(self))
def hello(self):
print("Baaaaaaye {0.name}".format(self))
class HelloBye(SayGoodBye, Greeter):
def __init__(self, *args, **kwargs):
Greeter.__init__(self, *args, **kwargs)
def hello(self):
return Greeter.hello(self)
hb = HelloBye("Lucaaaa")
hb.bye()
hb.hello()
class Greeter(object):
def __init__(self, name, s="Ferroni"):
self.name = name
self._surname = s
def hello(self):
print("Hello {0.name}".format(self))
class SayGoodByeFabriano(Greeter):
def __init__(self, name="Luca"):
super(SayGoodByeFabriano, self).__init__(name)
self.city = "Fabriano"
def bye(self):
print("Bye from {0.city} {0.name}".format(self))
def hello(self):
super(SayGoodByeFabriano, self).hello()
print("Hello from {0.city} {0.name}".format(self))
hb = SayGoodByeFabriano()
hb.bye()
hb.hello()
import json
import csv
people_fname = r"c:\Users\gigi\lessons-python4beginners\src\gestionale\people_fixture.json"
with open(people_fname, "rb") as fpeople:
PEOPLE = json.load(fpeople)
outfname = r"c:\Users\gigi\a.txt"
class DebugExporter(object):
def do_export(self, f, rows):
for row in rows:
print("{}\n".format(row))
class Exporter(object):
def do_export(self, f, rows):
for row in rows:
f.write("{}\n".format(row))
class JsonExporter(object):
def do_export(self, f, rows):
json.dump(rows, f, indent=2)
class CsvExporter(object):
def do_export(self, f, rows):
fieldnames = rows[0].keys()
writer = csv.DictWriter(
f, fieldnames = fieldnames, delimiter = ";")
writer.writeheader()
for row in rows:
writer.writerow(row)
def apply_exportation(xp, fname, data):
with open(fname, "wb") as f:
xp.do_export(f, rows=data)
xp = JsonExporter()
apply_exportation(xp, outfname, data=PEOPLE)
xpcsv = CsvExporter()
csvfname = outfname.replace(".txt",".csv")
apply_exportation(xpcsv, csvfname, data=PEOPLE)
Explanation: Classes and instances
End of explanation
# [PEP 249](https://www.python.org/dev/peps/pep-0249/)
# v. gestionale/managers02/db.py
class SqliteDBManager(object):
def _do_export(self, rows):
cu = self.conn.cursor()
# KO: Never do this -- insecure!
# KO: for row in rows:
# KO: c.execute("INSERT INTO people VALUES ('{name}','{city}','{salary}')".format(**row))
# Do this instead
for row in rows:
t = (row["name"], row["city"], row["salary"])
cu.execute('INSERT INTO people VALUES (?,?,?)', t)
self.conn.commit()
Explanation: DB-API
End of explanation
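The same placeholder style works for reads too; a minimal sketch (assuming an open connection and cursor cu, as in the class above) using PEP 249 qmark parameters:
cu.execute("SELECT name, salary FROM people WHERE city = ?", ("Fabriano",))
for name, salary in cu.fetchall():
    print("{0}: {1}".format(name, salary))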
import sqlite3 as db
def create_db(dsn):
conn = db.connect(dsn)
cu = conn.cursor()
cu.execute("CREATE TABLE PEOPLE (name VARCHAR(32), city VARCHAR(32), salary INTEGER)")
conn.close()
def save_db(fname, rows):
conn = db.connect(fname)
conn.row_factory = db.Row # cursore indicizzato anche per chiave
# è possibile definire una propria classe factory
# https://docs.python.org/2.7/library/sqlite3.html?highlight=sqlite3#sqlite3.Connection.row_factory
cu = conn.cursor()
for row in rows:
t = list(row[k] for k in ("name", "city", "salary"))
cu.execute("INSERT INTO PEOPLE VALUES (?,?,?)", t)
conn.commit()
conn.close()
# create_db("mydatabase.db")
Explanation: DB-API exercise
Make sure your management application:
a. initializes a PEOPLE table on an sqlite3 database
b. exports the PEOPLE data into it
c. can also record, in a separate table, the "name" and "age" details of the rabbits owned
BONUS: Import a json file (or another format) at program startup so that a data set is preloaded into PEOPLE
End of explanation
phrase = "happy-python-hacking!"
the_end = u" ".join([s.capitalize() for s in phrase.split("-")])
Explanation: References for Python 2.7
The Tutorial
Coding Style
The builtin functions (in ipython >>> help(__builtins__))
The HOWTOs (including "Unicode HOWTO" and "Idioms and Anti-Idioms in Python")
The absolute minimum every developer must know about unicode and character sets no excuses!
Pragmatic Unicode
Types, operators and comparisons
String format mini-language
DB-API 2.0 (PEP 249)
Variable scope (blog)
Testing with PyTest
Feedback is welcome
privately via [email protected] , mobile: 3289639660
via Telegram (again with the mobile number)
or, if you want to make it public, at www.befair.it
End of explanation |
13,983 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 09b
Step1: In this lab, we're going to cluster documents by the similarity of their text content. For this, we'll need to download some documents to cluster. The following dictionary maps the names of various texts to their corresponding URLs at Project Gutenberg.
Step2: Next, we need to download the texts located at the URLs. We can do this using Python's urllib2 package, which is part of the standard Python library. The following code will download the content of each URL and store it in the documents dictionary
Step3: Finally, we can create a pandas data frame to represent our document data
Step4: Data modelling
Let's build an agglomerative clustering model of the document data. As with $K$-means clustering, scikit-learn supports agglomerative clustering functionality via the cluster subpackage. We can use the AgglomerativeClustering class to build our model.
As with other scikit-learn estimators, AgglomerativeClustering accepts a number of different hyperparameters. We can get a list of these modelling parameters using the get_params method of the estimator (this works on any scikit-learn estimator), like this
Step5: You can find a more detailed description of each parameter in the scikit-learn documentation.
As our data is in text format, we'll need to convert it into a numerical representation so that it can be understood by the clustering algorithm. One way to do this is by converting the document texts into vectors of TF-IDF scores, just as we did when building the spam classifier. This way, the clustering algorithm will identify documents with similar TF-IDF score vectors. This should result in clusters containing documents with similar text content, because if two documents have similar TF-IDF vectors, then they must contain the same words, occurring with the same frequencies.
Note
Step6: Once we've fitted the data to the pipeline, we can extract the fitted agglomerative clustering model to see what clusters were formed. To extract the model, we can use the named_steps attribute of the pipeline, which is a dictionary mapping the names (in lowercase) of each stage in the pipeline to the corresponding models.
Step7: As can be seen, our clustering model is stored under the key 'agglomerativeclustering', and so we can extract it as follows
Step8: Currently, scikit-learn does not support plotting dendrograms out of the box. However, the authors have provided the following code snippet for anyone who wants to do so
Step9: Finally, we can call the plot_dendrogram function to plot a dendrogram of our model, as follows | Python Code:
%matplotlib inline
import pandas as pd
import urllib2
from sklearn.cluster import AgglomerativeClustering
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
Explanation: Lab 09b: Agglomerative clustering
Introduction
This lab focuses on agglomerative clustering for determining the similarity of documents. At the end of the lab, you should be able to:
Create an agglomerative clustering model.
Plot a dendrogram of the cluster levels.
Getting started
Let's start by importing the packages we'll need. As usual, we'll import pandas for exploratory analysis, but this week we're also going to use the cluster subpackage from scikit-learn to create agglomerative clustering models and the standard Python package urllib2 to download documents from Project Gutenberg.
End of explanation
urls = {
'The Iliad - Homer': 'https://www.gutenberg.org/cache/epub/1727/pg1727.txt',
'The Odyssey - Homer': 'https://www.gutenberg.org/cache/epub/1727/pg1727.txt',
'Romeo and Juliet - William Shakespeare': 'https://www.gutenberg.org/cache/epub/1112/pg1112.txt',
'Hamlet - William Shakespeare': 'https://www.gutenberg.org/files/1524/1524-0.txt',
'Adventures of Huckleberry Finn - Mark Twain': 'https://www.gutenberg.org/files/76/76-0.txt',
'The Adventures of Tom Sawyer - Mark Twain': 'https://www.gutenberg.org/files/74/74-0.txt',
'A Tale of Two Cities - Charles Dickens': 'https://www.gutenberg.org/files/98/98-0.txt',
'Great Expectations - Charles Dickens': 'https://www.gutenberg.org/files/1400/1400-0.txt',
'Oliver Twist - Charles Dickens': 'https://www.gutenberg.org/cache/epub/730/pg730.txt',
'The Adventures of Sherlock Holmes - Arthur Conan Doyle': 'https://www.gutenberg.org/cache/epub/1661/pg1661.txt'
}
Explanation: In this lab, we're going to cluster documents by the similarity of their text content. For this, we'll need to download some documents to cluster. The following dictionary maps the names of various texts to their corresponding URLs at Project Gutenberg.
End of explanation
documents = {}
for name, url in urls.items():
response = urllib2.urlopen(url)
document = response.read()
documents[name] = document
Explanation: Next, we need to download the texts located at the URLs. We can do this using Python's urllib2 package, which is part of the standard Python library. The following code will download the content of each URL and store it in the documents dictionary:
End of explanation
df = pd.DataFrame([documents[name] for name in sorted(documents)], index=sorted(documents), columns=['text'])
df.head(10)
Explanation: Finally, we can create a pandas data frame to represent our document data:
End of explanation
AgglomerativeClustering().get_params()
Explanation: Data modelling
Let's build an agglomerative clustering model of the document data. As with $K$-means clustering, scikit-learn supports agglomerative clustering functionality via the cluster subpackage. We can use the AgglomerativeClustering class to build our model.
As with other scikit-learn estimators, AgglomerativeClustering accepts a number of different hyperparameters. We can get a list of these modelling parameters using the get_params method of the estimator (this works on any scikit-learn estimator), like this:
End of explanation
X = df['text']
# Construct a pipeline: TF-IDF -> Sparse to Dense -> Clustering
pipeline = make_pipeline(
TfidfVectorizer(stop_words='english'),
FunctionTransformer(lambda x: x.todense(), accept_sparse=True),
AgglomerativeClustering(linkage='average') # Use average linkage
)
pipeline = pipeline.fit(X)
Explanation: You can find a more detailed description of each parameter in the scikit-learn documentation.
As our data is in text format, we'll need to convert it into a numerical representation so that it can be understood by the clustering algorithm. One way to do this is by converting the document texts into vectors of TF-IDF scores, just as we did when building the spam classifier. This way, the clustering algorithm will identify documents with similar TF-IDF score vectors. This should result in clusters containing documents with similar text content, because if two documents have similar TF-IDF vectors, then they must contain the same words, occurring with the same frequencies.
Note: Comparing TF-IDF score vectors is one - but not the only - way to determine whether documents have similar content.
As with the spam classification example, we can use a pipeline to connect the TfidfVectorizer to the AgglomerativeClustering algorithm. Because of a snag in the way scikit-learn is coded, the AgglomerativeClustering class only accepts dense matrices as inputs and, unfortunately, TfidfVectorizer produces sparse matrix output. However, this is easily rectified by inserting a FunctionTransformer (essentially, a custom function) between the two that converts the sparse input to dense input.
The code specifying the pipeline and fitting the data is shown below. Note that, as with $K$-means clustering, agglomerative clustering is an unsupervised learning algorithm, and so we don't need to specify a target variable ($y$) when fitting the model.
End of explanation
pipeline.named_steps
Explanation: Once we've fitted the data to the pipeline, we can extract the fitted agglomerative clustering model to see what clusters were formed. To extract the model, we can use the named_steps attribute of the pipeline, which is a dictionary mapping the names (in lowercase) of each stage in the pipeline to the corresponding models.
End of explanation
model = pipeline.named_steps['agglomerativeclustering']
Explanation: As can be seen, our clustering model is stored under the key 'agglomerativeclustering', and so we can extract it as follows:
End of explanation
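Because the estimator was fitted through the pipeline, its flat cluster assignments are already available in labels_ (with the default n_clusters=2); a quick sketch listing which documents fall in which cluster:
for title, label in zip(X.index, model.labels_):
    print('{0} -> cluster {1}'.format(title, label))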
# Original source: https://github.com/scikit-learn/scikit-learn/blob/70cf4a676caa2d2dad2e3f6e4478d64bcb0506f7/examples/cluster/plot_hierarchical_clustering_dendrogram.py
import numpy as np
from scipy.cluster.hierarchy import dendrogram
def plot_dendrogram(model, **kwargs):
# Children of hierarchical clustering
children = model.children_
# Distances between each pair of children
# Since we don't have this information, we can use a uniform one for plotting
distance = np.arange(children.shape[0])
# The number of observations contained in each cluster level
no_of_observations = np.arange(2, children.shape[0] + 2)
# Create linkage matrix and then plot the dendrogram
linkage_matrix = np.column_stack([children, distance, no_of_observations]).astype(float)
# Plot the corresponding dendrogram
dendrogram(linkage_matrix, **kwargs)
Explanation: Currently, scikit-learn does not support plotting dendrograms out of the box. However, the authors have provided the following code snippet for anyone who wants to do so:
End of explanation
plot_dendrogram(model, labels=X.index, orientation='right')
Explanation: Finally, we can call the plot_dendrogram function to plot a dendrogram of our model, as follows:
End of explanation |
13,984 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div style='background-image
Step1: 1. Initialization of setup
Step2: 2. The Mass Matrix
Now we initialize the mass and stiffness matrices. In general, the mass matrix at the elemental level is given
\begin{equation}
M_{ji}^e \ = \ w_j \ \rho (\xi) \ \frac{\mathrm{d}x}{\mathrm{d}\xi} \delta_{ij} \vert_ {\xi = \xi_j}
\end{equation}
Exercise 1
Implement the mass matrix using the integration weights at GLL locations $w$, the jacobian $J$, and density $\rho$. Then, perform the global assembly of the mass matrix, compute its inverse, and display the inverse mass matrix to visually inspect how it looks like.
Step3: 3. The Stiffness matrix
On the other hand, the general form of the stiffness matrix at the elemtal level is
\begin{equation}
K_{ji}^e \ = \ \sum_{k = 1}^{N+1} w_k \mu (\xi) \partial_\xi \ell_j (\xi) \partial_\xi \ell_i (\xi) \left(\frac{\mathrm{d}\xi}{\mathrm{d}x} \right)^2 \frac{\mathrm{d}x}{\mathrm{d}\xi} \vert_{\xi = \xi_k}
\end{equation}
Exercise 2
Implement the stiffness matrix using the integration weights at GLL locations $w$, the jacobian $J$, and shear stress $\mu$. Then, perform the global assembly of the mass matrix and display the matrix to visually inspect how it looks like.
Step4: 4. Finite element solution
Finally we implement the spectral element solution using the computed mass $M$ and stiffness $K$ matrices together with a finite differences extrapolation scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation} | Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib
# Show Plot in The Notebook
matplotlib.use("nbagg")
import matplotlib.pyplot as plt
from gll import gll
from lagrange1st import lagrange1st
from ricker import ricker
Explanation: <div style='background-image: url("../../share/images/header.svg") ; padding: 0px ; background-size: cover ; border-radius: 5px ; height: 250px'>
<div style="float: right ; margin: 50px ; padding: 20px ; background: rgba(255 , 255 , 255 , 0.7) ; width: 50% ; height: 150px">
<div style="position: relative ; top: 50% ; transform: translatey(-50%)">
<div style="font-size: xx-large ; font-weight: 900 ; color: rgba(0 , 0 , 0 , 0.8) ; line-height: 100%">Computational Seismology</div>
<div style="font-size: large ; padding-top: 20px ; color: rgba(0 , 0 , 0 , 0.5)">Spectral Element Method - 1D Elastic Wave Equation</div>
</div>
</div>
</div>
<p style="width:20%;float:right;padding-left:50px">
<img src=../../share/images/book.jpg>
<span style="font-size:smaller">
</span>
</p>
This notebook is part of the supplementary material
to Computational Seismology: A Practical Introduction,
Oxford University Press, 2016.
Authors:
David Vargas (@dvargas)
Heiner Igel (@heinerigel)
Basic Equations
This notebook presents the numerical solution for the 1D elastic wave equation
\begin{equation}
\rho(x) \partial_t^2 u(x,t) = \partial_x (\mu(x) \partial_x u(x,t)) + f(x,t),
\end{equation}
using the spectral element method. This is done after a series of steps summarized as follows:
1) The wave equation is written into its Weak form
2) Apply stress Free Boundary Condition after integration by parts
3) Approximate the wave field as a linear combination of some basis
\begin{equation}
u(x,t) \ \approx \ \overline{u}(x,t) \ = \ \sum_{i=1}^{n} u_i(t) \ \varphi_i(x)
\end{equation}
4) Use the same basis functions in $u(x, t)$ as test functions in the weak form, the so-called Galerkin principle.
6) The continuous weak form is written as a system of linear equations by considering the approximated displacement field.
\begin{equation}
\mathbf{M}^T\partial_t^2 \mathbf{u} + \mathbf{K}^T\mathbf{u} = \mathbf{f}
\end{equation}
7) Time extrapolation with centered finite differences scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation}
where $\mathbf{M}$ is known as the mass matrix, and $\mathbf{K}$ the stiffness matrix.
The above solution is exactly the same as the one presented for the classic finite-element method. Now we introduce appropriate basis functions and an integration scheme to solve the resulting system of equations efficiently.
Interpolation with Lagrange Polynomials
At the elemental level (see section 7.4), we introduce as interpolating functions the Lagrange polynomials and use $\xi$ as the space variable representing our elemental domain:
\begin{equation}
\varphi_i \ \rightarrow \ \ell_i^{(N)} (\xi) \ := \ \prod_{j \neq i}^{N+1} \frac{\xi - \xi_j}{\xi_i-\xi_j}, \qquad i,j = 1, 2, \dotsc , N + 1
\end{equation}
Numerical Integration
The integral of a continuous function $f(x)$ can be calculated after replacing $f(x)$ by a polynomial approximation that can be integrated analytically. As interpolating functions we use again the Lagrange polynomials and
obtain Gauss-Lobatto-Legendre quadrature. Here, the GLL points are used to perform the integral.
\begin{equation}
\int_{-1}^1 f(x) \, dx \approx \int_{-1}^1 P_N(x) \, dx = \sum_{i=1}^{N+1} w_i f(x_i)
\end{equation}
End of explanation
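As a quick aside, the GLL rule returned by the local gll helper can be sanity-checked before it is used to build matrices. The short snippet below is an optional check that is not part of the original exercises; it assumes, as the initialization cell below indicates, that gll(N) returns the N+1 GLL points in [-1, 1] together with their integration weights.
# Optional sanity check of the GLL quadrature (a sketch, not part of the exercises).
# GLL quadrature with N+1 points integrates polynomials up to degree 2N-1 exactly,
# so integrating x**2 over [-1, 1] with N = 4 must return 2/3.
xi_test, w_test = gll(4)
approx = sum(wi * x**2 for x, wi in zip(xi_test, w_test))
print(approx, 2.0 / 3.0)
# (end of optional sketch)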
# Initialization of setup
# ---------------------------------------------------------------
nt = 10000 # number of time steps
xmax = 10000. # Length of domain [m]
vs = 2500. # S velocity [m/s]
rho = 2000 # Density [kg/m^3]
mu = rho * vs**2 # Shear modulus mu
N = 3 # Order of Lagrange polynomials
ne = 250 # Number of elements
Tdom = .2 # Dominant period of Ricker source wavelet
iplot = 20 # Plotting each iplot snapshot
# variables for elemental matrices
Me = np.zeros(N+1, dtype = float)
Ke = np.zeros((N+1, N+1), dtype = float)
# ----------------------------------------------------------------
# Initialization of GLL points integration weights
[xi, w] = gll(N) # xi, N+1 coordinates [-1 1] of GLL points
# w Integration weights at GLL locations
# Space domain
le = xmax/ne # Length of elements
# Vector with GLL points
k = 0
xg = np.zeros((N*ne)+1)
xg[k] = 0
for i in range(1,ne+1):
for j in range(0,N):
k = k+1
xg[k] = (i-1)*le + .5*(xi[j+1]+1)*le
# ---------------------------------------------------------------
dxmin = min(np.diff(xg))
eps = 0.1 # Courant value
dt = eps*dxmin/vs # Global time step
# Mapping - Jacobian
J = le/2
Ji = 1/J # Inverse Jacobian
# 1st derivative of Lagrange polynomials
l1d = lagrange1st(N) # Array with GLL as columns for each N+1 polynomial
Explanation: 1. Initialization of setup
End of explanation
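Before moving on, it can be instructive to look at the spacing of the global GLL grid built above: the GLL points cluster towards the element edges, which is exactly why the global time step dt is taken from the minimum spacing dxmin. This small diagnostic plot is optional and uses only quantities defined in the initialization cell.
# Optional diagnostic: spacing of the global GLL points (a sketch, not required for the exercises)
plt.figure(figsize=(8, 3))
plt.plot(np.diff(xg), '.')
plt.xlabel('Global point index')
plt.ylabel('Grid spacing (m)')
plt.title('GLL grid spacing, minimum = %.2f m' % dxmin)
plt.show()
# (end of optional sketch)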
#################################################################
# IMPLEMENT THE MASS MATRIX HERE!
#################################################################
#################################################################
# PERFORM THE GLOBAL ASSEMBLY OF M HERE!
#################################################################
#################################################################
# COMPUTE THE INVERSE MASS MATRIX HERE!
#################################################################
#################################################################
# DISPLAY THE INVERSE MASS MATRIX HERE!
#################################################################
Explanation: 2. The Mass Matrix
Now we initialize the mass and stiffness matrices. In general, the mass matrix at the elemental level is given by
\begin{equation}
M_{ji}^e \ = \ w_j \ \rho (\xi) \ \frac{\mathrm{d}x}{\mathrm{d}\xi} \delta_{ij} \vert_ {\xi = \xi_j}
\end{equation}
Exercise 1
Implement the mass matrix using the integration weights at GLL locations $w$, the jacobian $J$, and density $\rho$. Then, perform the global assembly of the mass matrix, compute its inverse, and display the inverse mass matrix to visually inspect what it looks like.
End of explanation
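Since the implementation is left open in the exercise, here is one possible solution sketched for reference. It follows the elemental formula above (a diagonal matrix with entries $w_j \rho J$), assembles the global matrix by summing contributions at shared element boundaries, and inverts it elementwise; the names ng, M and Minv introduced here are the ones the time-extrapolation cell further below expects.
# One possible solution to Exercise 1 (a sketch; other implementations are equally valid).
# Elemental mass matrix: diagonal, M_jj^e = w_j * rho * J (constant properties per element here)
for i in range(0, N + 1):
    Me[i] = rho * w[i] * J
# Global assembly: contributions at shared element boundaries are summed
ng = N * ne + 1                      # total number of global points (same size as xg)
M = np.zeros(ng)
for e in range(1, ne + 1):
    i0 = (e - 1) * N
    for i in range(0, N + 1):
        M[i0 + i] += Me[i]
# The global mass matrix is diagonal, so its inverse is elementwise
Minv = np.identity(ng)
for i in range(ng):
    Minv[i, i] = 1. / M[i]
# Display the inverse mass matrix
plt.figure()
plt.imshow(Minv)
plt.title('Inverse mass matrix $M^{-1}$')
plt.show()
# (end of sketch)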
#################################################################
# IMPLEMENT THE STIFFNESS MATRIX HERE!
#################################################################
#################################################################
# PERFORM THE GLOBAL ASSEMBLY OF K HERE!
#################################################################
#################################################################
# DISPLAY THE STIFFNESS MATRIX HERE!
#################################################################
Explanation: 3. The Stiffness matrix
On the other hand, the general form of the stiffness matrix at the elemental level is
\begin{equation}
K_{ji}^e \ = \ \sum_{k = 1}^{N+1} w_k \mu (\xi) \partial_\xi \ell_j (\xi) \partial_\xi \ell_i (\xi) \left(\frac{\mathrm{d}\xi}{\mathrm{d}x} \right)^2 \frac{\mathrm{d}x}{\mathrm{d}\xi} \vert_{\xi = \xi_k}
\end{equation}
Exercise 2
Implement the stiffness matrix using the integration weights at GLL locations $w$, the jacobian $J$, and the shear modulus $\mu$. Then, perform the global assembly of the stiffness matrix and display the matrix to visually inspect what it looks like.
End of explanation
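Again, since the exercise leaves the implementation open, the following is one possible solution sketch. It assumes, as the comment in the initialization cell suggests, that l1d[i, k] contains the derivative of the i-th Lagrange polynomial evaluated at the k-th GLL point; the assembled matrix K is the one used by the time-extrapolation cell below.
# One possible solution to Exercise 2 (a sketch; assumes l1d[i, k] = d(l_i)/dxi at GLL point xi_k).
# Elemental stiffness matrix
for i in range(0, N + 1):
    for j in range(0, N + 1):
        Ke[i, j] = 0.
        for k in range(0, N + 1):
            Ke[i, j] += mu * w[k] * (Ji ** 2) * J * l1d[i, k] * l1d[j, k]
# Global assembly of the stiffness matrix
ng = N * ne + 1                      # total number of global points
K = np.zeros((ng, ng))
for e in range(1, ne + 1):
    i0 = (e - 1) * N
    for i in range(0, N + 1):
        for j in range(0, N + 1):
            K[i0 + i, i0 + j] += Ke[i, j]
# Display the stiffness matrix
plt.figure()
plt.imshow(K)
plt.title('Stiffness matrix $K$')
plt.show()
# (end of sketch)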
# SE Solution, Time extrapolation
# ---------------------------------------------------------------
# initialize source time function and force vector f
src = ricker(dt,Tdom)
isrc = int(np.floor(ng/2)) # Source location (ng = number of global points, from the global assembly above)
# Initialization of solution vectors
u = np.zeros(ng)
uold = u.copy()
unew = u.copy()
f = np.zeros(ng)
# Initialize animated plot
# ---------------------------------------------------------------
plt.figure(figsize=(10,6))
lines = plt.plot(xg, u, lw=1.5)
plt.title('SEM 1D Animation', size=16)
plt.xlabel(' x (m)')
plt.ylabel(' Amplitude ')
plt.ion() # set interactive mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for it in range(nt):
# Source initialization
f= np.zeros(ng)
if it < len(src):
f[isrc-1] = src[it-1]
# Time extrapolation
unew = dt**2 * Minv @ (f - K @ u) + 2 * u - uold
uold, u = u, unew
# --------------------------------------
# Animation plot. Display solution
if not it % iplot:
for l in lines:
l.remove()
del l
# --------------------------------------
# Display lines
lines = plt.plot(xg, u, color="black", lw = 1.5)
plt.gcf().canvas.draw()
Explanation: 4. Finite element solution
Finally we implement the spectral element solution using the computed mass $M$ and stiffness $K$ matrices together with a finite differences extrapolation scheme
\begin{equation}
\mathbf{u}(t + dt) = dt^2 (\mathbf{M}^T)^{-1}[\mathbf{f} - \mathbf{K}^T\mathbf{u}] + 2\mathbf{u} - \mathbf{u}(t-dt).
\end{equation}
End of explanation |
13,985 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Response Files With The Sherpa API
We're going to see if we can get the Sherpa API to allow us to apply ARFs and RMFs to arbitrary models.
Note
Step1: Playing with the Convenience Functions
First, we're going to see how we can access ARF and RMF from the convenience functions.
Let's set up a data set
Step2: Load in the data with the convenience function
Step3: If there is a grouping, get rid of it, because we don't like groupings (except for Mike Nowak).
Step4: This method gets the data and stores it in an object
Step5: In case we need them for something, this is how we get ARF and RMF objects
Step6: Next, we'd like to play around with a model.
Let's set this up based on the XSPEC model I got from Jack
Step7: We can get the fully specified model and store it in an object like this
Step8: Here's how you can set parameters. Note that this changes the state of the object (boo!)
Step9: Actually, we'd like to change the state of the object directly rather than using the convenience function, which works like this
Step10: Now we're ready to evaluate the model and apply RMF/ARF to it. This is actually a method on the data object, not the model object. It returns an array
Step11: Let's plot the results
Step12: Let's set the model parameters to the fit results from XSPEC
Step13: MCMC by hand
Just for fun, we're going to use emcee directly to sample from this model.
Let's first define a posterior object
Step14: Now we can define a posterior object with the data and model objects
Step15: Can we compute the posterior probability of some parameters?
Step16: The one below should fail, because it's outside the prior
Step17: Okay, cool! This works.
Now we can run MCMC! | Python Code:
%matplotlib notebook
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
Explanation: Response Files With The Sherpa API
We're going to see if we can get the Sherpa API to allow us to apply ARFs and RMFs to arbitrary models.
Note: I needed to run heainit (the heasoft initialization file) to get the XSPEC models to run!
Random bits and pieces:
There is a convenience function set_full_model in sherpa (for 1D or 2D). In this case, one can put the RMF or ARF directly into the model expression.
End of explanation
import sherpa.astro.ui
Explanation: Playing with the Convenience Functions
First, we're going to see how we can access ARF and RMF from the convenience functions.
Let's set up a data set:
End of explanation
sherpa.astro.ui.load_data("../data/Chandra/js_spec_HI1_IC10X1_5asB1_jsgrp.pi")
Explanation: Load in the data with the convenience function:
End of explanation
sherpa.astro.ui.ungroup()
Explanation: If there is a grouping, get rid of it, because we don't like groupings (except for Mike Nowak).
End of explanation
d = sherpa.astro.ui.get_data()
Explanation: This method gets the data and stores it in an object:
End of explanation
arf = d.get_arf()
rmf = d.get_rmf()
Explanation: In case we need them for something, this is how we get ARF and RMF objects:
End of explanation
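If you are curious about what these objects hold, you can peek at a few of their attributes; the names below are the ones exposed by Sherpa's DataRMF/DataARF classes (e_min/e_max are the channel energy bounds we also plot against later, and specresp is the effective area curve).
# Optional peek at the response objects (attribute names as in Sherpa's DataRMF/DataARF)
print(rmf.e_min[:5], rmf.e_max[:5])   # channel energy bounds (keV)
print(arf.specresp[:5])               # effective area (cm^2)
# (end of optional peek)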
sherpa.astro.ui.set_xsabund("angr")
sherpa.astro.ui.set_xsxsect("bcmc")
sherpa.astro.ui.set_xscosmo(70,0,0.73)
sherpa.astro.ui.set_xsxset("delta", "0.01")
sherpa.astro.ui.set_model("xstbabs.a1*xsdiskbb.a2")
print(sherpa.astro.ui.get_model())
Explanation: Next, we'd like to play around with a model.
Let's set this up based on the XSPEC model I got from Jack:
End of explanation
m = sherpa.astro.ui.get_model()
Explanation: We can get the fully specified model and store it in an object like this:
End of explanation
sherpa.astro.ui.set_par(a1.nH,0.01)
Explanation: Here's how you can set parameters. Note that this changes the state of the object (boo!)
End of explanation
m._set_thawed_pars([0.01, 2, 0.01])
Explanation: Actually, we'd like to change the state of the object directly rather than using the convenience function, which works like this:
End of explanation
model_counts = d.eval_model(m)
Explanation: Now we're ready to evaluate the model and apply RMF/ARF to it. This is actually a method on the data object, not the model object. It returns an array:
End of explanation
plt.figure()
plt.plot(rmf.e_min, d.counts)
plt.plot(rmf.e_min, model_counts)
Explanation: Let's plot the results:
End of explanation
m._set_thawed_pars([0.313999, 1.14635, 0.0780871])
model_counts = d.eval_model(m)
plt.figure()
plt.plot(rmf.e_min, d.counts)
plt.plot(rmf.e_min, model_counts, lw=3)
Explanation: Let's set the model parameters to the fit results from XSPEC:
End of explanation
from scipy.special import gamma as scipy_gamma
from scipy.special import gammaln as scipy_gammaln
logmin = -100000000.0
class PoissonPosterior(object):
def __init__(self, d, m):
self.data = d
self.model = m
return
def loglikelihood(self, pars, neg=False):
self.model._set_thawed_pars(pars)
mean_model = self.data.eval_model(self.model)
#stupid hack to make it not go -infinity
mean_model += np.exp(-20.)
res = np.nansum(-mean_model + self.data.counts*np.log(mean_model) \
- scipy_gammaln(self.data.counts + 1.))
if not np.isfinite(res):
res = logmin
if neg:
return -res
else:
return res
def logprior(self, pars):
nh = pars[0]
p_nh = ((nh > 0.0) & (nh < 10.0))
tin = pars[1]
p_tin = ((tin > 0.0) & (tin < 5.0))
lognorm = np.log(pars[2])
p_norm = ((lognorm > -10.0) & (lognorm < 10.0))
logp = np.log(p_nh*p_tin*p_norm)
if not np.isfinite(logp):
return logmin
else:
return logp
def logposterior(self, pars, neg=False):
lpost = self.loglikelihood(pars) + self.logprior(pars)
if neg is True:
return -lpost
else:
return lpost
def __call__(self, pars, neg=False):
return self.logposterior(pars, neg)
Explanation: MCMC by hand
Just for fun, we're going to use emcee directly to sample from this model.
Let's first define a posterior object:
End of explanation
lpost = PoissonPosterior(d, m)
Explanation: Now we can define a posterior object with the data and model objects:
End of explanation
print(lpost([0.1, 0.1, 0.1]))
print(lpost([0.313999, 1.14635, 0.0780871]))
Explanation: Can we compute the posterior probability of some parameters?
End of explanation
print(lpost([-0.1, 0.1, 0.1]))
print(lpost([0.1, -0.1, 0.1]))
print(lpost([0.1, 0.1, -0.1]))
Explanation: The one below should fail, because it's outside the prior:
End of explanation
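Before handing the posterior to a sampler, it can be convenient to locate the maximum a posteriori (MAP) point and use it as a starting guess. The snippet below is an optional sketch (not part of the original analysis) that simply minimizes the negative log-posterior defined above with scipy.
# Optional: a MAP estimate as a starting point for the sampler (a sketch)
from scipy.optimize import minimize
res = minimize(lpost, x0=[0.313999, 1.14635, 0.0780871],
               args=(True,), method="Nelder-Mead")
print(res.x, lpost(res.x))
# (end of optional sketch)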
import emcee
start_pars = np.array([0.313999, 1.14635, 0.0780871])
start_cov = np.diag(start_pars/100.0)
nwalkers = 100
niter = 200
ndim = len(start_pars)
burnin = 50
p0 = np.array([np.random.multivariate_normal(start_pars, start_cov) for
i in range(nwalkers)])
# initialize the sampler
sampler = emcee.EnsembleSampler(nwalkers, ndim, lpost, args=[False], threads=4)
pos, prob, state = sampler.run_mcmc(p0, burnin)
_, _, _ = sampler.run_mcmc(pos, niter, rstate0=state)
plt.figure()
plt.plot(sampler.flatchain[:,0])
plt.figure()
plt.plot(sampler.flatchain[:,1])
plt.figure()
plt.plot(sampler.flatchain[:,2])
import corner
%matplotlib inline
corner.corner(sampler.flatchain,
quantiles=[0.16, 0.5, 0.84],
show_titles=True, title_args={"fontsize": 12});
Explanation: Okay, cool! This works.
Now we can run MCMC!
End of explanation |
13,986 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Clustering
Step1: Introducing K-Means
K Means is an algorithm for unsupervised clustering
Step2: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
Step3: The algorithm identifies the four clusters of points in a manner very similar to what we would do by eye!
The K-Means Algorithm
Step4: This algorithm will (often) converge to the optimal cluster centers.
KMeans Caveats
The convergence of this algorithm is not guaranteed; for that reason, scikit-learn by default uses a large number of random initializations and finds the best results.
Also, the number of clusters must be set beforehand... there are other clustering algorithms for which this requirement may be lifted.
Application of KMeans to Digits
For a closer-to-real-world example, let's again take a look at the digits data. Here we'll use KMeans to automatically cluster the data in 64 dimensions, and then look at the cluster centers to see what the algorithm has found.
Step5: We see ten clusters in 64 dimensions. Let's visualize each of these cluster centers to see what they represent
Step6: We see that even without the labels, KMeans is able to find clusters whose means are recognizable digits (with apologies to the number 8)!
The cluster labels are permuted; let's fix this
Step7: For good measure, let's use our PCA visualization and look at the true cluster labels and K-means cluster labels
Step8: Just for kicks, let's see how accurate our K-Means classifier is with no label information
Step9: 80% – not bad! Let's check out the confusion matrix for this
Step10: Again, this is an 80% classification accuracy for an entirely unsupervised estimator which knew nothing about the labels.
Example
Step11: The image itself is stored in a 3-dimensional array, of size (height, width, RGB)
Step12: We can envision this image as a cloud of points in a 3-dimensional color space. We'll rescale the colors so they lie between 0 and 1, then reshape the array to be a typical scikit-learn input
Step13: We now have 273,280 points in 3 dimensions.
Our task is to use KMeans to compress the $256^3$ colors into a smaller number (say, 64 colors). Basically, we want to find $N_{color}$ clusters in the data, and create a new image where the true input color is replaced by the color of the closest cluster.
Here we'll use MiniBatchKMeans, a more sophisticated estimator that performs better for larger datasets | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
plt.style.use('seaborn')
Explanation: <small><i>This notebook was put together by Jake Vanderplas. Source and license info is on GitHub.</i></small>
Clustering: K-Means In-Depth
Here we'll explore K Means Clustering, which is an unsupervised clustering technique.
We'll start with our standard set of initial imports
End of explanation
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=300, centers=4,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], s=50);
Explanation: Introducing K-Means
K Means is an algorithm for unsupervised clustering: that is, finding clusters in data based on the data attributes alone (not the labels).
K Means is a relatively easy-to-understand algorithm. It searches for cluster centers which are the mean of the points within them, such that every point is closest to the cluster center it is assigned to.
Let's look at how KMeans operates on the simple clusters we looked at previously. To emphasize that this is unsupervised, we'll not plot the colors of the clusters:
End of explanation
from sklearn.cluster import KMeans
est = KMeans(4) # 4 clusters
est.fit(X)
y_kmeans = est.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=y_kmeans, s=50, cmap='rainbow');
Explanation: By eye, it is relatively easy to pick out the four clusters. If you were to perform an exhaustive search for the different segmentations of the data, however, the search space would be exponential in the number of points. Fortunately, there is a well-known Expectation Maximization (EM) procedure which scikit-learn implements, so that KMeans can be solved relatively quickly.
End of explanation
from fig_code import plot_kmeans_interactive
plot_kmeans_interactive();
Explanation: The algorithm identifies the four clusters of points in a manner very similar to what we would do by eye!
The K-Means Algorithm: Expectation Maximization
K-Means is an example of an algorithm which uses an Expectation-Maximization approach to arrive at the solution.
Expectation-Maximization is a two-step approach which works as follows:
Guess some cluster centers
Repeat until converged
A. Assign points to the nearest cluster center
B. Set the cluster centers to the mean
Let's quickly visualize this process:
End of explanation
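To make the two steps concrete, the same E-step/M-step loop can be written directly in a few lines of NumPy. This bare-bones sketch (no convergence check, no smart initialization, no handling of empty clusters) is only for intuition and is not a substitute for scikit-learn's KMeans.
# A minimal, hand-rolled version of the k-means EM loop (illustrative sketch only)
def naive_kmeans(X, n_clusters, n_iter=10, rseed=2):
    rng = np.random.RandomState(rseed)
    # 1. Guess some cluster centers (random points from the data)
    centers = X[rng.permutation(X.shape[0])[:n_clusters]]
    for _ in range(n_iter):
        # 2A. Assign points to the nearest cluster center
        labels = ((X[:, None, :] - centers) ** 2).sum(-1).argmin(axis=1)
        # 2B. Set the cluster centers to the mean of their assigned points
        centers = np.array([X[labels == k].mean(0) for k in range(n_clusters)])
    return centers, labels
centers, naive_labels = naive_kmeans(X, 4)
plt.scatter(X[:, 0], X[:, 1], c=naive_labels, s=50, cmap='rainbow');
# (end of illustrative sketch)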
from sklearn.datasets import load_digits
digits = load_digits()
est = KMeans(n_clusters=10)
clusters = est.fit_predict(digits.data)
est.cluster_centers_.shape
Explanation: This algorithm will (often) converge to the optimal cluster centers.
KMeans Caveats
The convergence of this algorithm is not guaranteed; for that reason, scikit-learn by default uses a large number of random initializations and finds the best results.
Also, the number of clusters must be set beforehand... there are other clustering algorithms for which this requirement may be lifted.
Application of KMeans to Digits
For a closer-to-real-world example, let's again take a look at the digits data. Here we'll use KMeans to automatically cluster the data in 64 dimensions, and then look at the cluster centers to see what the algorithm has found.
End of explanation
fig = plt.figure(figsize=(8, 3))
for i in range(10):
ax = fig.add_subplot(2, 5, 1 + i, xticks=[], yticks=[])
ax.imshow(est.cluster_centers_[i].reshape((8, 8)), cmap=plt.cm.binary)
Explanation: We see ten clusters in 64 dimensions. Let's visualize each of these cluster centers to see what they represent:
End of explanation
from scipy.stats import mode
labels = np.zeros_like(clusters)
for i in range(10):
mask = (clusters == i)
labels[mask] = mode(digits.target[mask])[0]
Explanation: We see that even without the labels, KMeans is able to find clusters whose means are recognizable digits (with apologies to the number 8)!
The cluster labels are permuted; let's fix this:
End of explanation
from sklearn.decomposition import PCA
X = PCA(2).fit_transform(digits.data)
kwargs = dict(cmap = plt.cm.get_cmap('rainbow', 10),
edgecolor='none', alpha=0.6)
fig, ax = plt.subplots(1, 2, figsize=(8, 4))
ax[0].scatter(X[:, 0], X[:, 1], c=labels, **kwargs)
ax[0].set_title('learned cluster labels')
ax[1].scatter(X[:, 0], X[:, 1], c=digits.target, **kwargs)
ax[1].set_title('true labels');
Explanation: For good measure, let's use our PCA visualization and look at the true cluster labels and K-means cluster labels:
End of explanation
from sklearn.metrics import accuracy_score
accuracy_score(digits.target, labels)
Explanation: Just for kicks, let's see how accurate our K-Means classifier is with no label information:
End of explanation
from sklearn.metrics import confusion_matrix
print(confusion_matrix(digits.target, labels))
plt.imshow(confusion_matrix(digits.target, labels),
cmap='Blues', interpolation='nearest')
plt.colorbar()
plt.grid(False)
plt.ylabel('true')
plt.xlabel('predicted');
Explanation: 80% – not bad! Let's check out the confusion matrix for this:
End of explanation
from sklearn.datasets import load_sample_image
china = load_sample_image("china.jpg")
plt.imshow(china)
plt.grid(False);
Explanation: Again, this is an 80% classification accuracy for an entirely unsupervised estimator which knew nothing about the labels.
Example: KMeans for Color Compression
One interesting application of clustering is in color image compression. For example, imagine you have an image with millions of colors. In most images, a large number of the colors will be unused, and conversely a large number of pixels will have similar or identical colors.
Scikit-learn has a number of images that you can play with, accessed through the datasets module. For example:
End of explanation
china.shape
Explanation: The image itself is stored in a 3-dimensional array, of size (height, width, RGB):
End of explanation
X = (china / 255.0).reshape(-1, 3)
print(X.shape)
Explanation: We can envision this image as a cloud of points in a 3-dimensional color space. We'll rescale the colors so they lie between 0 and 1, then reshape the array to be a typical scikit-learn input:
End of explanation
from sklearn.cluster import MiniBatchKMeans
# number of colors to keep (the size of the k-means codebook)
n_colors = 64
X = (china / 255.0).reshape(-1, 3)
model = MiniBatchKMeans(n_colors)
labels = model.fit_predict(X)
colors = model.cluster_centers_
new_image = colors[labels].reshape(china.shape)
new_image = (255 * new_image).astype(np.uint8)
# create and plot the new image
with plt.style.context('seaborn-white'):
plt.figure()
plt.imshow(china)
plt.title('input: 16 million colors')
plt.figure()
plt.imshow(new_image)
plt.title('{0} colors'.format(n_colors))
Explanation: We now have 273,280 points in 3 dimensions.
Our task is to use KMeans to compress the $256^3$ colors into a smaller number (say, 64 colors). Basically, we want to find $N_{color}$ clusters in the data, and create a new image where the true input color is replaced by the color of the closest cluster.
Here we'll use MiniBatchKMeans, a more sophisticated estimator that performs better for larger datasets:
End of explanation |
13,987 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Tile" data-toc-modified-id="Tile-1"><span class="toc-item-num">1 </span>Tile</a></div><div class="lev2 toc-item"><a href="#Exemplo-unidimensional---replicando-as-colunas" data-toc-modified-id="Exemplo-unidimensional---replicando-as-colunas-11"><span class="toc-item-num">1.1 </span>One-dimensional example - replicating the columns</a></div><div class="lev2 toc-item"><a href="#Exemplo-unidimensional---replicando-as-linhas" data-toc-modified-id="Exemplo-unidimensional---replicando-as-linhas-12"><span class="toc-item-num">1.2 </span>One-dimensional example - replicating the rows</a></div><div class="lev2 toc-item"><a href="#Exemplo-bidimensional---replicando-as-colunas" data-toc-modified-id="Exemplo-bidimensional---replicando-as-colunas-13"><span class="toc-item-num">1.3 </span>Two-dimensional example - replicating the columns</a></div><div class="lev2 toc-item"><a href="#Exemplo-bidimensional---replicando-as-linhas" data-toc-modified-id="Exemplo-bidimensional---replicando-as-linhas-14"><span class="toc-item-num">1.4 </span>Two-dimensional example - replicating the rows</a></div><div class="lev2 toc-item"><a href="#Exemplo-bidimensional---replicando-as-linhas-e-colunas-simultaneamente" data-toc-modified-id="Exemplo-bidimensional---replicando-as-linhas-e-colunas-simultaneamente-15"><span class="toc-item-num">1.5 </span>Two-dimensional example - replicating rows and columns simultaneously</a></div><div class="lev1 toc-item"><a href="#Documentação-Oficial-Numpy" data-toc-modified-id="Documentação-Oficial-Numpy-2"><span class="toc-item-num">2 </span>Official Numpy Documentation</a></div>
# Tile
An important function in the numpy library is tile, which generates repetitions of the array passed as a parameter. The number of repetitions is given by the reps parameter
## One-dimensional example - replicating the columns
Step1: One-dimensional example - replicating the rows
To change the dimensions along which the replication is performed, modify the reps parameter, passing a tuple with the dimensions you want to change instead of an int
Step2: Two-dimensional example - replicating the columns
Step3: Two-dimensional example - replicating the rows
Step4: Two-dimensional example - replicating rows and columns simultaneously | Python Code:
import numpy as np
a = np.array([0, 1, 2])
print('a = \n', a)
print()
print('Resultado da operação np.tile(a,2): \n',np.tile(a,2))
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Tile" data-toc-modified-id="Tile-1"><span class="toc-item-num">1 </span>Tile</a></div><div class="lev2 toc-item"><a href="#Exemplo-unidimensional---replicando-as-colunas" data-toc-modified-id="Exemplo-unidimensional---replicando-as-colunas-11"><span class="toc-item-num">1.1 </span>One-dimensional example - replicating the columns</a></div><div class="lev2 toc-item"><a href="#Exemplo-unidimensional---replicando-as-linhas" data-toc-modified-id="Exemplo-unidimensional---replicando-as-linhas-12"><span class="toc-item-num">1.2 </span>One-dimensional example - replicating the rows</a></div><div class="lev2 toc-item"><a href="#Exemplo-bidimensional---replicando-as-colunas" data-toc-modified-id="Exemplo-bidimensional---replicando-as-colunas-13"><span class="toc-item-num">1.3 </span>Two-dimensional example - replicating the columns</a></div><div class="lev2 toc-item"><a href="#Exemplo-bidimensional---replicando-as-linhas" data-toc-modified-id="Exemplo-bidimensional---replicando-as-linhas-14"><span class="toc-item-num">1.4 </span>Two-dimensional example - replicating the rows</a></div><div class="lev2 toc-item"><a href="#Exemplo-bidimensional---replicando-as-linhas-e-colunas-simultaneamente" data-toc-modified-id="Exemplo-bidimensional---replicando-as-linhas-e-colunas-simultaneamente-15"><span class="toc-item-num">1.5 </span>Two-dimensional example - replicating rows and columns simultaneously</a></div><div class="lev1 toc-item"><a href="#Documentação-Oficial-Numpy" data-toc-modified-id="Documentação-Oficial-Numpy-2"><span class="toc-item-num">2 </span>Official Numpy Documentation</a></div>
# Tile
An important function in the numpy library is tile, which generates repetitions of the array passed as a parameter. The number of repetitions is given by the reps parameter
## One-dimensional example - replicating the columns
End of explanation
print('a = \n', a)
print()
print("Resultado da operação np.tile(a,(2,1)):\n" , np.tile(a,(2,1)))
Explanation: One-dimensional example - replicating the rows
To change the dimensions along which the replication is performed, modify the reps parameter, passing a tuple with the dimensions you want to change instead of an int
End of explanation
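One detail worth noting (not covered in the original notebook): if reps has more entries than the array has dimensions, numpy first promotes the array by prepending axes of length 1 and then tiles it, so a 1-D array tiled with a 3-tuple becomes 3-D.
# Illustrative extra example: reps with more entries than a.ndim promotes the array
b = np.array([0, 1, 2])
print('np.tile(b, (2, 1, 2)) has shape', np.tile(b, (2, 1, 2)).shape)  # (2, 1, 6)
# (end of extra example)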
a = np.array([[0, 1], [2, 3]])
print('a = \n', a)
print()
print("Resultado da operação np.tile(a,2):\n", np.tile(a,2))
Explanation: Two-dimensional example - replicating the columns
End of explanation
a = np.array([[0, 1], [2, 3]])
print('a = \n', a)
print()
print("Resultado da operação np.tile(a,(3,1)):\n", np.tile(a,(3,1)))
Explanation: Two-dimensional example - replicating the rows
End of explanation
a = np.array([[0, 1], [2, 3]])
print('a = \n', a)
print()
print("Resultado da operação np.tile(a,(2,2)):\n", np.tile(a,(2,2)))
Explanation: Two-dimensional example - replicating rows and columns simultaneously
End of explanation |
13,988 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute DICS beamformer on evoked data
Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ beamformer from
single-trial activity in a time-frequency window to estimate source time
courses based on evoked data.
References
.. [1] Gross et al. Dynamic imaging of coherent sources
Step1: Read raw data | Python Code:
# Author: Roman Goj <[email protected]>
#
# License: BSD (3-clause)
import mne
import matplotlib.pyplot as plt
import numpy as np
from mne.datasets import sample
from mne.time_frequency import csd_epochs
from mne.beamformer import dics
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'
Explanation: Compute DICS beamformer on evoked data
Compute a Dynamic Imaging of Coherent Sources (DICS) [1]_ beamformer from
single-trial activity in a time-frequency window to estimate source time
courses based on evoked data.
References
.. [1] Gross et al. Dynamic imaging of coherent sources: Studying neural
interactions in the human brain. PNAS (2001) vol. 98 (2) pp. 694-699
End of explanation
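Incidentally, fname_label defined above is not used anywhere else in this example; if you want to restrict later analysis to the auditory label, it can be read like this (optional, shown only as a pointer).
# Optional: load the auditory label defined above (not used further in this example)
label = mne.read_label(fname_label)
print(label)
# (end of optional snippet)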
raw = mne.io.read_raw_fif(raw_fname)
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bads channels
# Set picks
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=False,
stim=False, exclude='bads')
# Read epochs
event_id, tmin, tmax = 1, -0.2, 0.5
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, mag=4e-12))
evoked = epochs.average()
# Read forward operator
forward = mne.read_forward_solution(fname_fwd)
# Computing the data and noise cross-spectral density matrices
# The time-frequency window was chosen on the basis of spectrograms from
# example time_frequency/plot_time_frequency.py
data_csd = csd_epochs(epochs, mode='multitaper', tmin=0.04, tmax=0.15,
fmin=6, fmax=10)
noise_csd = csd_epochs(epochs, mode='multitaper', tmin=-0.11, tmax=0.0,
fmin=6, fmax=10)
evoked = epochs.average()
# Compute DICS spatial filter and estimate source time courses on evoked data
stc = dics(evoked, forward, noise_csd, data_csd, reg=0.05)
plt.figure()
ts_show = -30 # show the 30 largest responses
plt.plot(1e3 * stc.times,
stc.data[np.argsort(stc.data.max(axis=1))[ts_show:]].T)
plt.xlabel('Time (ms)')
plt.ylabel('DICS value')
plt.title('DICS time course of the 30 largest sources.')
plt.show()
# Plot brain in 3D with PySurfer if available
brain = stc.plot(hemi='rh', subjects_dir=subjects_dir,
initial_time=0.1, time_unit='s')
brain.show_view('lateral')
# Uncomment to save image
# brain.save_image('DICS_map.png')
Explanation: Read raw data
End of explanation |
13,989 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Handling Select2 Controls in Selenium WebDriver
Select2 is a jQuery based replacement for select boxes. This article will demonstrate how Selenium webdriver can handle Select2 by manipulating the first such selection box in the Examples page of Select2.
Creating an instance of Selenium webdriver equipped with Firefox Extensions
Firebug and FirePath are very helpful Firefox extensions that I want to use in this demonstration, so I will make Selenium launch a Firefox browser equipped with these extensions.
Step1: Note that in order for the extensions to be installed in the browser, you need to either specify an extension-enabled Firefox profile to Selenium or specify the location and names of the Firefox extensions you want to install. In the above example, I have the Firebug and FirePath files stored in the 'tools\firefox' folder, so I can just specify the location and filenames of the extensions.
Navigate to Select2 Examples page
Step2: Identify the locator for the Selection Box
Right click on the first Select2 box and select 'Inspect Element with Firebug'
Firebug will then display and highlight the HTML source of the Selection Box, as well as highlight the control itself when you hover your mouse over the HTML source.
We now have the task of figuring out what locator we can use to locate this Selection Box. The Selection Box is a 'span' element with an id="select2-jnw9-container", so we can surely make use of this id attribute. However, it appears that this id is randomly generated, so I made a slight modification to make sure my locator will still work even if the page is refreshed.
Verify the adopted locator works
In the Firebug window, click on 'FirePath' tab. Click on the dropdown before the input box and select 'CSS
Step3: If the Selection Dropdown appears upon executing the above command, then we are on the right track. You can run the above command several times to confirm the closing and opening of the selection dropdown.
Identify the locator for the Selection Dropdown
We now need to identify the locator for the Selection Dropdown. We do this by clicking back on the 'HTML' tab in the Firebug window and observing that, when you manually click on the Selection Box, another 'span' element is dynamically added at the bottom of the HTML source.
We can use the previous technique of locating the Selection Box above to arrive at the conclusion that the locator for the Selection Dropdown could be 'css=span.select2-dropdown > span > ul'. Note that in this case we specifically located down to the 'ul' tag element. This is because the options for Select2 are not 'option' tag elements; instead they are 'li' elements of a 'ul' tag.
Verify that both Selection Box and Dropdown work
After all this hard work of figuring out the best locators for the Selection Box and the Selection Dropdown, we then test them to see if we can now properly handle Select2. Marigoso offers two syntaxes for performing the same action.
select_text
We can use the usual select_text function by just appending the Select Dropdown locator at the end.
Step4: select2
We can also use the select2 function of Marigoso by swapping the order of the Selection Dropdown locator and the value of the text you want to select.
Step5: Final Solution
Finally, here again is the summary of the necessary commands used in this demonstration. | Python Code:
import os
from marigoso import Test
request = {
'firefox': {
'capabilities': {
'marionette': False,
},
'extensions_path': os.path.join(os.getcwd(), 'tools', 'firefox'),
'extensions': ['[email protected]', '[email protected]'],
}
}
browser = Test(request).launch_browser('Firefox')
Explanation: Handling Select2 Controls in Selenium WebDriver
Select2 is a jQuery based replacement for select boxes. This article will demonstrate how Selenium webdriver can handle Select2 by manipulating the first such selection box in the Examples page of Select2.
Creating an instance of Selenium webdriver equipped with Firefox Extensions
Firebug and FirePath are very helpful Firefox extensions that I want to use in this demonstration, so I will make Selenium launch a Firefox browser equipped with these extensions.
End of explanation
browser.get_url('https://select2.github.io/')
browser.press("Examples")
Explanation: Note that in order for the extensions to be installed in the browser, you need to either specify an extension-enabled Firefox profile to Selenium or specify the location and names of the Firefox extensions you want to install. In the above example, I have the Firebug and FirePath files stored in the 'tools\firefox' folder, so I can just specify the location and filenames of the extensions.
Navigate to Select2 Examples page
End of explanation
browser.press("css=[id^='select2']" )
Explanation: Identify the locator for the Selection Box
Right click on the first Select2 box and select 'Inspect Element with Firebug'
Firebug will then display and highlight the HTML source of the Selection Box, as well as highlight the control itself when you hover your mouse over the HTML source.
We now have the task of figuring out what locator we can use to locate this Selection Box. The Selection Box is a 'span' element with an id="select2-jnw9-container", so we can surely make use of this id attribute. However, it appears that this id is randomly generated, so I made a slight modification to make sure my locator will still work even if the page is refreshed.
Verify the adopted locator works
In the Firebug window, click on 'FirePath' tab. Click on the dropdown before the input box and select 'CSS:'. Then enter "[id^='select2']" in the input box and press Enter key.
Firebug will now display the same thing as before, but notice that at the lower left of the Firebug window it says '17 matching nodes'. This means we have 17 such Selection Boxes that can be located using my chosen selector. However, this time we are only interested in the first Selection Box, so I think my chosen selector is still useful.
The ultimate way to verify that the locator works is to feed it to Selenium and run it. So we execute the following command.
End of explanation
browser.select_text("css=*[id^='select2']", "Nevada", 'css=span.select2-dropdown > span > ul')
Explanation: If the Selection Dropdown appears upon executing the above command, then we are on the right track. You can run the above command several times to confirm the closing and opening of the selection dropdown.
Identify the locator for the Selection Dropdown
We now need to identify the locator for the Selection Dropdown. We do this by clicking back on the 'HTML' tab in the Firebug window and observing that, when you manually click on the Selection Box, another 'span' element is dynamically added at the bottom of the HTML source.
We can use the previous technique of locating the Selection Box above to arrive at the conclusion that the locator for the Selection Dropdown could be 'css=span.select2-dropdown > span > ul'. Note that in this case we specifically located down to the 'ul' tag element. This is because the options for Select2 are not 'option' tag elements; instead they are 'li' elements of a 'ul' tag.
Verify that both Selection Box and Dropdown work
After all this hard work of figuring out the best locators for the Selection Box and the Selection Dropdown, we then test them to see if we can now properly handle Select2. Marigoso offers two syntaxes for performing the same action.
select_text
We can use the usual select_text function by just appending the Select Dropdown locator at the end.
End of explanation
browser.select2("css=*[id^='select2']", 'css=span.select2-dropdown > span > ul', "Hawaii")
Explanation: select2
We can also use the select2 function of Marigoso by swapping the order of the Selection Dropdown locator and the value of the text you want to select.
End of explanation
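For completeness, readers not using Marigoso can script roughly the same interaction with plain Selenium plus an explicit wait for the dynamically added dropdown. The sketch below is illustrative only; it reuses the CSS selectors worked out above and standard Selenium APIs, and drives its own Firefox instance.
# Illustrative plain-Selenium alternative (a sketch, independent of Marigoso)
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
driver = webdriver.Firefox()
driver.get('https://select2.github.io/')
driver.find_element(By.LINK_TEXT, 'Examples').click()
# Open the first Select2 box, then wait for the dropdown 'ul' to appear
driver.find_element(By.CSS_SELECTOR, "[id^='select2']").click()
dropdown = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.CSS_SELECTOR, 'span.select2-dropdown > span > ul')))
# Pick an option by its visible text ('li' elements, as noted above)
for li in dropdown.find_elements(By.TAG_NAME, 'li'):
    if li.text == 'Nevada':
        li.click()
        break
driver.quit()
# (end of illustrative sketch)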
import os
from marigoso import Test
request = {
'firefox': {
'extensions_path': os.path.join(os.getcwd(), 'tools', 'firefox'),
'extensions': ['[email protected]', '[email protected]'],
}
}
browser = Test(request).launch_browser('Firefox')
browser.get_url('https://select2.github.io/')
browser.press("Examples")
browser.select_text("css=*[id^='select2']", "Nevada", 'css=span.select2-dropdown > span > ul')
browser.quit()
Explanation: Final Solution
Finally, here again is the summary of the necessary commands used in this demonstration.
End of explanation |
13,990 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Aod Plus Ccn
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absorption properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 13.3. External Mixture
Is Required
Step59: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step60: 14.2. Shortwave Bands
Is Required
Step61: 14.3. Longwave Bands
Is Required
Step62: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step63: 15.2. Twomey
Is Required
Step64: 15.3. Twomey Minimum Ccn
Is Required
Step65: 15.4. Drizzle
Is Required
Step66: 15.5. Cloud Lifetime
Is Required
Step67: 15.6. Longwave Bands
Is Required
Step68: 16. Model
Aerosol model
16.1. Overview
Is Required
Step69: 16.2. Processes
Is Required
Step70: 16.3. Coupling
Is Required
Step71: 16.4. Gas Phase Precursors
Is Required
Step72: 16.5. Scheme Type
Is Required
Step73: 16.6. Bulk Scheme Species
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-am4', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-AM4
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 70 (38 required)
Model descriptions: Model description details
Initialized From: CMIP5:GFDL-CM3
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
DOC.set_value("whole atmosphere")
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("Bulk aerosol model")
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
DOC.set_value("Other: 3d mass/volume mixing ratio for aerosols")
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
DOC.set_value(16)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
DOC.set_value(True)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
DOC.set_value("Other: uses atmosphericchemistry time stepping")
Explanation: 3. Key Properties --> Timestep Framework
Timestep framework of the aerosol model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three-dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two-dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteorological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aerosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("N/A")
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
DOC.set_value("N/A")
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_aod_plus_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Aod Plus Ccn
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absorption properties in the aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixing rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.external_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.3. External Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact aerosol external mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
DOC.set_value("Advection (horizontal)")
DOC.set_value("Advection (vertical)")
DOC.set_value("Ageing")
DOC.set_value("Dry deposition")
DOC.set_value("Heterogeneous chemistry")
DOC.set_value("Oxidation (gas phase)")
DOC.set_value("Oxidation (in cloud)")
DOC.set_value("Sedimentation")
DOC.set_value("Wet deposition (impaction scavenging)")
DOC.set_value("Wet deposition (nucleation scavenging)")
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
DOC.set_value("Clouds")
DOC.set_value("Other: heterogeneouschemistry")
DOC.set_value("Other: landsurface")
DOC.set_value("Radiation")
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
DOC.set_value("DMS")
DOC.set_value("SO2")
DOC.set_value("Terpene")
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
DOC.set_value("Bin")
DOC.set_value("Bulk")
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
DOC.set_value("Organic")
DOC.set_value("Other: bc (black carbon / soot)")
DOC.set_value("POM (particulate organic matter)")
DOC.set_value("SOA (secondary organic aerosols)")
DOC.set_value("Sulphate")
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation |
13,991 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table>
<tr>
<td style="text-align
Step1: Graph Execution
TensorFlow executes your code inside a C++ program and returns the results through the TensorFlow API. Since we are using Python, we will be using TensorFlow Python API which is the most documented and most used API of TensorFlow.
Since we are using graph execution, there are two ways to create a session
Step2: Generating new Tensors
Since we have an interactive session, let's create a tensor. There are two common ways to create a tensor tf.zeros() and tf.ones(). Each one of them takes a python tuple or an array as the shape of the tensor.
Let's start by creating a rank-0 tensor.
Step3: We create a tensor and assigned it to a local variable named a. When we check the value of a this is what we get.
Step4: Notice there is no value. You need to call eval() method of the tensor to get the actual value. This method takes an optional parameter where you can pass your session. Since we are using interactive session, we don't have ato pass anything.
Step5: You should know that eval() method returns a numpy.float32 (or what ever the type of the tensor is) if the rank of the tensor is 0 and numpy.ndarray if the tensor has rank 1 or higher.
Numpy
Step6: the rank would be the number of dimensions.
Step7: Notice the name inside the TensorFlow execution engine is not a. It is zeros
Step8: If you created another variable using the same operation, it will be named zeros_1
Step9: Now let's create a second tensor of shape (3) which is going to be a rank-1 tensor. This time we will name it b and store it in a local variable named b.
Step10: Notice the name of the variable now is b
Step11: You can also get the value of a tensor by executing the tensor using your session.
Step12: You can also fill the tensor with any other value you want other than 0 and 1 using fill() function.
Step13: Notice that the data type of this tensor is int32 and not float32 because you initialized the tensor with an integer 5 and not 5.0.
Tensor Shape
For multi dimensional tensors, the shape is passed as a tuple or an array. The way this array is arranged is from the outer most dimension to the inner dimensions. So for a tensor that should represents 10 items and each item has three numbers, the shape would be (10,3).
Step14: Note
Step15: Generating Tensors with Random Values
In many cases, you want to generate a new tensor but we want to start with random values stored in the tensor. The way we do that is using one the random generators of TensorFlow.
Normal Distribution
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
Step16: This function returns random values using normal distribution which is also known as Gaussian distribution or informally called a "Bell Curve". To better understand it, let's first look at a graph showing this distribution. In Mathematics, a normal distribution of mean $\mu$ and standard deviation $\sigma$ is denoted $N(\mu, \sigma)$ (More about that in "The Math Behind It").
To do that we will import Matplotlib which is a very common charting library for Python.
Step17: Notice the bell shape of the curve, where you get more values around your mean and fewer values as you move away from the mean.
You can also change the mean.
Step18: You can also control how concentrated your random numbers will be around the mean by controlling the standard deviation. Higher standard deviation means less values around the mean and wider distribution.
Step19: One more note on normal distribution, if you created a large tensor with millions or tens of millions of random values, some of these values will fall really far from the mean. With some machine learning algorithms this might create instability. You can avoid that by using truncated_normal() function instead of random_normal(). This will re-sample any values that falls more than 2 standard deviations from the mean.
Step20: Uniform Distribution
The other common distribution is the uniform one. This will generate values with equal probability of falling anywhere between two numbers.
tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
Step21: Generating Tensors with Sequence Values
You can generate tensors with sequence values using the following function
Step22: Which is equivalent to
Step23: Notice that the output of range() function will never reach the limit parameter.
You can also control the delta which is the spacing between the tensor's elements.
Step24: Reshaping Tensors
You can reshape tensors using this function
Step25: Tensor Arithmetics
You can use standard python arithmetics on tensors and get results in a new tensor.
Addition
Step26: Element-wise Operations
Step27: Matrix Operations
TensorFlow supports a variety of matrix operations.
Identity Matrix
An identity matrix is a matrix where all the values are zeros except on the diagonal, where it has values of 1. This is an example of an identity matrix of shape (3,3).
$$\begin{bmatrix}
{1} & {0} & {0} \
{0} & {1} & {0} \
{0} & {0} & {1} \
\end{bmatrix}$$
To do that in TensorFlow use the tf.eye() function.
Step28: Transpose
Transpose is another operation that is commonly used in matrix calculations. Transpose converts rows to columns. Assume you have a matrix $\mathbf{A}$, a transpose operation over this matrix produces $\mathbf{A^T}$ pronounced $\mathbf{A}$ transpose.
Step29: Matrix Multiplication
One of the most common operations for matrices in deep learning in matrix multiplication. Matrix multiplication is not an element-wise operation. The exact math will be discussed in the last section of this tutorial "The Math Behind It". But for now to give you the basics you should know the following
Step30: The Math Behind It
Standard Deviation $\sigma$ or $s$
Standard deviation is the measure of how elements in a set vary from the mean. So a sample with most data points close to the mean has low standard deviation and it gets higher as the data points start moving away from the mean. In statistics, standard deviation is denoted as the small letter sigma $\sigma$ or $s$.
The formula to calculate standard deviation for a whole population is
Step31: Now that we know the mean, we can go back to the original equation and calculate the standard deviation.
So looking at the equation one more
Step32: Note that standard deviation is a built-in function in NumPy, TensorFlow and many other languages and libraries.
Step33: In TensorFlow
Step34: Variance $\sigma^2$, $s^2$ or $Var(X)$
It is just the square of the standard deviation.
Step35: Matrix Multiplication
Arguably, the most common matrix operation you will perform in deep learning is multiplying matrices. Understanding this operation is a good start to understanding the math behind neural networks. This operation is also known as "dot product".
Assume we have two matrices $\mathbf{A}{(2,3)}$ and $\mathbf{B}{(3,2)}$. The dot product of these two matrices $\mathbf{A}{(2,3)} . \mathbf{B}{(3,2)}$ is calculated as follows
Step36: Remember from before that the mathematical shape of a matrix is the opposite of the TensorFlow shape of a tensor. So instead of rewriting our arrays, we will just use transpose to make rows into columns and columns into rows.
Step37: Luckily there is also an easier way to do that. | Python Code:
import tensorflow as tf
import sys
print("Python Version:",sys.version.split(" ")[0])
print("TensorFlow Version:",tf.VERSION)
Explanation: <table>
<tr>
<td style="text-align:left;"><div style="font-family: monospace; font-size: 2em; display: inline-block; width:60%">2. Tensors</div><img src="images/roshan.png" style="width:30%; display: inline; text-align: left; float:right;"></td>
<td></td>
</tr>
</table>
Before we go into tensors and programming in TensorFlow, let's take a look at how it works.
TensorFlow Basics
TensorFlow has a few concepts that you should be familiar with. TensorFlow executes your code inside an execution engine that you communicate with using an API. In TensorFlow your data is called a Tensor that you can apply operations (OPs) to. Your code is converted into a Graph that is executed in an execution engine called a Session. So your python code is just a representation of your graph that can be executed in a session.
You can see how your data (or tensors) flow from one operation to the next, hence the name TensorFlow.
Since version 1.5 and as of version 1.8 there are two methods to execute code in TensorFlow:
Graph Execution
Eager Execution
The main difference is graph execution is a type of declarative programming and eager execution is a type of imperative programming. In plain English, the difference is graph execution defines your code as a graph and executes in a session. Your objects in python are not the actual objects inside the session, they are only a reference to them. Eager execution executes your code as you run it giving you a better control of your program while it is running so you are not stuck with a predefined graph.
So why do we even bother with graph execution?
Performance is a big issue in machine learning, and a small difference in execution time can save you weeks of your time and thousands of hours of GPU time over a long project. Support is another issue for now, since some features of TensorFlow, such as the high-level estimators, do not work in eager execution.
We will focus in the following tutorials only on graph execution and we will cover eager execution later in this series.
Tensors
Tensors represent your data. A tensor can be a scalar variable or an array of any dimension. Tensors are the main objects used to store and pass data between operations; the input and output of every operation are tensors.
Tensor Shape and Rank
Tensors have a rank and a shape so for scalar values, we use rank-0 tensors of a shape () which is an empty shape
Assuming we need a variable or a constant number to use in our software, we can represent it as a tensor of rank-0.
A rank-1 tensor can be thought of as a vector or a one-dimensional array. Creating a rank-1 tensor with shape (3) will create a tensor that can hold three values in a one-dimensional array.
A rank-2 is a matrix or a two dimensional array. This can be used to hold two dimensional data like a black and white image. The shape of the tensor can match the shape of the image so to hold a 256x256 pixel image in a tensor, you can create a rank-2 tensor of shape (256,256).
A rank-3 tensor is a three dimensional array. It can be used to hold three dimensional data like a color image represented in (RGB). To create a tensor to hold a color image of size 256x256, you can create a rank-3 tensor of shape (256,256,3).
TensorFlow allows tensors in higher dimensions but you will very rarely see tensors of a rank exceeding 5 (batch size, width, height, RGB, frames) for representing a batch of video clips.
Importing Tensor Flow
Let's import TensorFlow and start working with some tensors.
End of explanation
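# Added illustration (not from the original notebook): tiny tensors matching the ranks described
# above. The 256x256 sizes are just the example values mentioned in the text.
scalar = tf.zeros(())               # rank-0, shape ()
vector = tf.zeros((3,))             # rank-1, shape (3)
img_bw = tf.zeros((256, 256))       # rank-2, e.g. a black-and-white image
img_rgb = tf.zeros((256, 256, 3))   # rank-3, e.g. an RGB color image
print(scalar.shape.ndims, vector.shape.ndims, img_bw.shape.ndims, img_rgb.shape.ndims)  # 0 1 2 3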
sess = tf.InteractiveSession()
Explanation: Graph Execution
TensorFlow executes your code inside a C++ program and returns the results through the TensorFlow API. Since we are using Python, we will be using TensorFlow Python API which is the most documented and most used API of TensorFlow.
Since we are using graph execution, there are two ways to create a session:
Session
Interactive Session
Sessions and interactive sessions, use your code to build a "Graph" which is a representation of your code inside TensorFlow's execution engine. The main difference between them is, an interactive session makes itself the default session. Since we are using only one session for our code, we will use that.
For now let's start an interactive session and start flowing some tensors!
End of explanation
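# Added aside (illustration only, assuming the same TF 1.x graph-mode API): the plain, non-interactive
# Session mentioned above would be used roughly like this. Without an interactive session, eval()
# needs the session passed explicitly, or you evaluate through sess.run() inside a with-block.
with tf.Session() as plain_sess:
    t = tf.zeros(())
    print(plain_sess.run(t))            # evaluate the tensor through the explicit session
    print(t.eval(session=plain_sess))   # eval() also works if you hand it the session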
a = tf.zeros(())
Explanation: Generating new Tensors
Since we have an interactive session, let's create a tensor. There are two common ways to create a tensor: tf.zeros() and tf.ones(). Each one of them takes a Python tuple or an array as the shape of the tensor.
Let's start by creating a rank-0 tensor.
End of explanation
a
Explanation: We created a tensor and assigned it to a local variable named a. When we check the value of a, this is what we get.
End of explanation
a.eval()
Explanation: Notice there is no value. You need to call the eval() method of the tensor to get the actual value. This method takes an optional parameter where you can pass your session. Since we are using an interactive session, we don't have to pass anything.
End of explanation
a.shape
Explanation: You should know that the eval() method returns a numpy.float32 (or whatever the type of the tensor is) if the rank of the tensor is 0, and a numpy.ndarray if the tensor has rank 1 or higher.
Numpy: is a multi-dimensional array library for python that runs the operations in a C program and interfaces back with python to ensure fast array operations.
We can also check the rank and shape of the tensor.
End of explanation
a.shape.ndims
Explanation: the rank would be the number of dimensions.
End of explanation
a.name
Explanation: Notice the name inside the TensorFlow execution engine is not a. It is zeros:0, which is an auto-generated name for the variable. The auto-generated name is the name of the operation that generated the tensor, followed by the index of the tensor in the output of that operation.
End of explanation
tf.zeros(())
Explanation: If you created another variable using the same operation, it will be named zeros_1:0.
End of explanation
b = tf.zeros((3), name="b")
b
Explanation: Now let's create a second tensor of shape (3) which is going to be a rank-1 tensor. This time we will name it b and store it in a local variable named b.
End of explanation
type(b.eval())
Explanation: Notice the name of the tensor is now b:0, which is the name we gave it followed by index 0 of the operation's output. We can also get the value in the same way using the eval() method.
End of explanation
sess.run(b)
Explanation: You can also get the value of a tensor by executing the tensor using your session.
End of explanation
tf.fill((2,2), 5).eval()
Explanation: You can also fill the tensor with any other value you want other than 0 and 1 using fill() function.
End of explanation
tf.zeros((10,3)).eval()
Explanation: Notice that the data type of this tensor is int32 and not float32 because you initialized the tensor with an integer 5 and not 5.0.
Tensor Shape
For multi-dimensional tensors, the shape is passed as a tuple or an array. The way this array is arranged is from the outermost dimension to the inner dimensions. So for a tensor that should represent 10 items where each item has three numbers, the shape would be (10,3).
End of explanation
tf.zeros((2,3,4)).eval()
Explanation: Note: This is the opposite of how matrix shape notation is written in mathematics. A matrix of shape $A_{(3,10)}$ can be represented in TensorFlow as (10,3). The reason for that is in mathematics the shape is $(Columns,Rows)$ and TensorFlow uses (Outer,Inner), which for a 2-D tensor translates to (Rows,Columns).
For higher dimensions the same rule applies. Let's say we have 2 items, each item has 3 parts, and each part consists of 4 numbers; the shape would be (2,3,4).
End of explanation
arr1 = tf.random_normal((1000,))
arr1
Explanation: Generating Tensors with Random Values
In many cases, you want to generate a new tensor that starts with random values stored in it. The way we do that is by using one of the random generators of TensorFlow.
Normal Distribution
tf.random_normal(shape, mean=0.0, stddev=1.0, dtype=tf.float32, seed=None, name=None)
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.hist(arr1.eval(), bins=15);
Explanation: This function returns random values using normal distribution which is also known as Gaussian distribution or informally called a "Bell Curve". To better understand it, let's first look at a graph showing this distribution. In Mathematics, a normal distribution of mean $\mu$ and standard deviation $\sigma$ is denoted $N(\mu, \sigma)$ (More about that in "The Math Behind It").
To do that we will import Matplotlib which is a very common charting library for Python.
End of explanation
arr1 = tf.random_normal((1000,), mean=20.0)
plt.hist(arr1.eval(), bins=15);
Explanation: Notice the bell shape of the curve, where you get more values around your mean and fewer values as you move away from the mean.
You can also change the mean.
End of explanation
arr1 = tf.random_normal((1000,), stddev=2, name="arr1")
arr2 = tf.random_normal((1000,), stddev=1, name="arr2")
plt.hist([arr1.eval(), arr2.eval()], bins=15);
Explanation: You can also control how concentrated your random numbers will be around the mean by controlling the standard deviation. A higher standard deviation means fewer values around the mean and a wider distribution.
End of explanation
plt.hist(tf.truncated_normal((1000,)).eval(), bins=15);
Explanation: One more note on the normal distribution: if you create a large tensor with millions or tens of millions of random values, some of these values will fall really far from the mean. With some machine learning algorithms this might create instability. You can avoid that by using the truncated_normal() function instead of random_normal(). This will re-sample any values that fall more than 2 standard deviations from the mean.
End of explanation
arr1 = tf.random_uniform((1000,))
plt.hist(arr1.eval(), bins=15);
Explanation: Uniform Distribution
The other common distribution is the uniform one. This will generate values with equal probability of falling anywhere between two numbers.
tf.random_uniform(shape, minval=0, maxval=None, dtype=tf.float32, seed=None, name=None)
End of explanation
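# Added sketch (values chosen arbitrarily for illustration): the two bounds can be set explicitly
# instead of using the default [0, 1) range.
arr_wide = tf.random_uniform((1000,), minval=-3.0, maxval=3.0)
plt.hist(arr_wide.eval(), bins=15);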
tf.range(5).eval()
Explanation: Generating Tensors with Sequence Values
You can generate tensors with sequence values using the following function:
tf.range(start, limit=None, delta=1, dtype=None, name='range')
End of explanation
tf.range(0, 5).eval()
Explanation: Which is equivalent to:
End of explanation
tf.range(0, 5, 2).eval()
Explanation: Notice that the output of range() function will never reach the limit parameter.
You can also control the delta which is the spacing between the tensor's elements.
End of explanation
a = tf.range(6)
tf.reshape(a, (3,2)).eval()
Explanation: Reshaping Tensors
You can reshape tensors using this function:
tf.reshape(tensor, shape, name=None)
End of explanation
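# Added note (same tf.reshape API): one dimension can be given as -1 and TensorFlow infers it
# from the total number of elements, e.g. 6 elements reshaped to (2, 3).
tf.reshape(tf.range(6), (2, -1)).eval()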
a = tf.ones((2,2))
b = tf.fill((2,2), 10.0) # Notice we used 10.0 and not 10 to ensure the data type will be float32
c = a + b
c.eval()
Explanation: Tensor Arithmetics
You can use standard python arithmetics on tensors and get results in a new tensor.
Addition
End of explanation
d = c * 2.0
d.eval()
(d + 3).eval()
Explanation: Element-wise Operations
End of explanation
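# Added sketch (illustration only): the other Python operators are element-wise as well when the
# two tensors have the same shape.
x = tf.fill((2, 2), 6.0)
y = tf.fill((2, 2), 3.0)
(x - y).eval()   # element-wise subtraction
(x / y).eval()   # element-wise division
(x * y).eval()   # element-wise multiplication (not matrix multiplication)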
i = tf.eye(3,3)
i.eval()
Explanation: Matrix Operations
TensorFlow supports a variety of matrix operations.
Identity Matrix
An identity matrix is a matrix where all the values are zeros except on the diagonal, where it has values of 1. This is an example of an identity matrix of shape (3,3).
$$\begin{bmatrix}
{1} & {0} & {0} \
{0} & {1} & {0} \
{0} & {0} & {1} \
\end{bmatrix}$$
To do that in TensorFlow use the tf.eye() function.
End of explanation
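# Added check (not in the original notebook): multiplying a matrix by an identity of matching size
# returns the same matrix, which is what makes it the "identity".
m = tf.fill((3, 3), 2.0)
tf.matmul(m, tf.eye(3, 3)).eval()   # same values as m.eval()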
a = tf.range(1,9)
i = tf.reshape(a, (2,4))
i.eval()
it = tf.matrix_transpose(i)
it.eval()
Explanation: Transpose
Transpose is another operation that is commonly used in matrix calculations. Transpose converts rows to columns. If you have a matrix $\mathbf{A}$, the transpose operation over this matrix produces $\mathbf{A^T}$, pronounced "$\mathbf{A}$ transpose".
End of explanation
a = tf.ones((2,3))
b = tf.ones((3,4))
c = tf.matmul(a ,b)
print("c has the shape of:", c.shape)
c.eval()
Explanation: Matrix Multiplication
One of the most common operations for matrices in deep learning is matrix multiplication. Matrix multiplication is not an element-wise operation. The exact math will be discussed in the last section of this tutorial, "The Math Behind It". But for now, to give you the basics, you should know the following:
Assume we have two matrices $\mathbf{A}$ and $\mathbf{B}$. The shape of $\mathbf{A}$ is (m,n) and the shape of $\mathbf{B}$ is (o,p), so we can write these two matrices with their shapes as $\mathbf{A}_{(m,n)}$ and $\mathbf{B}_{(o,p)}$. Multiplying these two matrices produces a matrix of the shape (m,p) IF $n=o$, like this:
$\mathbf{A}_{(m,n)} . \mathbf{B}_{(o,p)}=\mathbf{C}_{(m,p)} \leftarrow n=o$
Notice the inner dimensions of these two matrices are the same, and the output matrix has the shape given by their outer dimensions. If the inner dimensions do not match, the product doesn't exist.
We can use tf.matmul() function to do that.
End of explanation
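# Added counter-example (illustration only): when the inner dimensions disagree, TensorFlow refuses
# to build the product, matching the shape rule above. With static shapes this is raised while the
# graph is being constructed.
try:
    tf.matmul(tf.ones((2, 3)), tf.ones((4, 5)))   # inner dimensions 3 and 4 do not match
except ValueError as err:
    print("matmul rejected the shapes:", err)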
g = [88, 94, 71, 97, 84, 82, 80, 98, 91, 93]
total = sum(g)
count = len(g)
mean = total/count
mean
Explanation: The Math Behind It
Standard Deviation $\sigma$ or $s$
Standard deviation is the measure of how elements in a set vary from the mean. So a sample with most data points close to the mean has low standard deviation and it gets higher as the data points start moving away from the mean. In statistics, standard deviation is denoted as the small letter sigma $\sigma$ or $s$.
The formula to calculate standard deviation for a whole population is:
$$\sigma={\sqrt {\frac {\sum_{i=1}^N(x_{i}-{\overline {x}})^{2}}{N}}}$$
Let's break it down and see how to calculate standard deviation.
Assume we have exam grades of 10 students and the grades are so follow:
| ID | Grade |
| ----- |:------:|
| 1 | 88 |
| 2 | 94 |
| 3 | 71 |
| 4 | 97 |
| 5 | 84 |
| 6 | 82 |
| 7 | 80 |
| 8 | 98 |
| 9 | 91 |
| 10 | 93 |
The first thing we need to do is calculate the mean. The mean is denoted as $\overline {x}$ (pronounced "x bar"). To calculate the mean (or average), take the sum of all the numbers and divide it by their count. It is also commonly denoted as the small letter mu $\mu$. Assuming you have $N$ values, this is the formula to calculate the mean:
$$\overline {x} = \frac{x_1 + x_2 + ... + x_N}{N} = \frac{\sum_{i=1}^N x_i}{N}$$
So let's calculate that.
End of explanation
from math import sqrt
σ = sqrt(sum([(x-mean)**2 for x in g]) / count)
σ
Explanation: Now that we know the mean, we can go back to the original equation and calculate the standard deviation.
So looking at the equation one more time:
$$\sigma={\sqrt {\frac {\sum_{i=1}^N(x_{i}-{\overline {x}})^{2}}{N}}}$$
First we need to take each element in our grades $x_i$, subtract the mean $\overline {x}$ from it, square the result, and take the sum of all of those.
python
a = [(x-mean)**2 for x in g]
b = sum(a)
Divide that by the number of elements $N$ then take the square root
python
variance = b / count
σ = sqrt(variance)
We can write the whole thing in one like this:
End of explanation
import numpy as np
np.std(g)
Explanation: Note that standard deviation is a built-in function in NumPy, TensorFlow and many other languages and libraries.
End of explanation
t = tf.constant(g, dtype=tf.float64)
mean_t, var_t = tf.nn.moments(t, axes=0)
sqrt(var_t.eval())
Explanation: In TensorFlow
End of explanation
variance = sum([(x-mean)**2 for x in g]) / count
variance
Explanation: Variance $\sigma^2$, $s^2$ or $Var(X)$
It is just the square of the standard deviation.
End of explanation
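# Added sanity check (uses the NumPy import above): the hand-computed variance matches np.var and
# equals the standard deviation squared.
print(np.isclose(variance, np.var(g)), np.isclose(variance, np.std(g) ** 2))  # True True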
a = [[1,0],
[3,2],
[1,4],
]
b = [[2,1,2],
[1,2,3],
]
Explanation: Matrix Multiplication
Arguably, the most common matrix operation you will perform in deep learning is multiplying matrices. Understanding this operation is a good start to understanding the math behind neural networks. This operation is also known as "dot product".
Assume we have two matrices $\mathbf{A}{(2,3)}$ and $\mathbf{B}{(3,2)}$. The dot product of these two matrices $\mathbf{A}{(2,3)} . \mathbf{B}{(3,2)}$ is calculated as follows:
$\mathbf{A}_{(2,3)} = \begin{bmatrix}
{1} & {0} \
{3} & {2} \
{1} & {4} \
\end{bmatrix}$
$\mathbf{B}_{(3,2)} = \begin{bmatrix}
{2} & {1} & {2} \
{1} & {2} & {3} \
\end{bmatrix}$
$\mathbf{C}{(2,2)} = \mathbf{A}{(2,3)} . \mathbf{B}_{(3,2)}$
$\mathbf{C}_{(2,2)} = \begin{bmatrix}
{2\times1 + 3\times1 + 1\times2} & {0\times2 + 2\times1 + 4\times2} \
{1\times1 + 3\times2 + 1\times3} & {0\times1 + 2\times2 + 4\times3} \
\end{bmatrix} = \begin{bmatrix}
{2 + 3 + 2} & {0 + 2 + 8} \
{1 + 6 + 3} & {0 + 4 + 12} \
\end{bmatrix}= \begin{bmatrix}
{7} & {10} \
{10} & {16} \
\end{bmatrix}$
This is an animation that shows how that is done step by step.
Now let's confirm it with TensorFlow.
End of explanation
a = tf.constant(a)
b = tf.constant(b)
c = tf.matmul(tf.matrix_transpose(a), tf.matrix_transpose(b))
c.eval()
Explanation: Remember from before that the mathematical shape of a matrix is the opposite of the TensorFlow shape of a tensor. So instead of rewriting our arrays, we will just use transpose to make rows into columns and columns into rows.
End of explanation
c = tf.matmul(a,b, transpose_a=True, transpose_b=True)
c.eval()
Explanation: Luckily there is also an easier way to do that.
End of explanation |
13,992 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Predicting sentiment from product reviews
<hr>
Import GraphLab Create
Step1: Read some product review data
Loading reviews for a set of baby products.
Step2: Exploring the data
Data includes the product name, the review text and the rating of the review.
Step3: Build the word count vector for each review
Step4: Examining the reviews for most-sold product
Step5: Build a sentiment classifier
Step6: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
Step7: Let's train the sentiment classifier
Step8: Evaluate the sentiment model
Step9: Applying the learned model to understand sentiment for Giraffe
Step10: Sort the reviews based on the predicted sentiment and explore
Step11: Most positive reviews for the giraffe
Step12: Show most negative reviews for giraffe | Python Code:
import graphlab
Explanation: Predicting sentiment from product reviews
<hr>
Import GraphLab Create
End of explanation
products = graphlab.SFrame('amazon_baby.gl/')
Explanation: Read some product review data
Loading reviews for a set of baby products.
End of explanation
products.head()
Explanation: Exploring the data
Data includes the product name, the review text and the rating of the review.
End of explanation
products['word_count'] = graphlab.text_analytics.count_words(products['review'])
products.head()
graphlab.canvas.set_target('ipynb')
products['name'].show()
Explanation: Build the word count vector for each review
End of explanation
giraffe_reviews = products[products['name'] == 'Vulli Sophie the Giraffe Teether']
len(giraffe_reviews)
giraffe_reviews['rating'].show(view='Categorical')
Explanation: Examining the reviews for most-sold product: 'Vulli Sophie the Giraffe Teether'
End of explanation
products['rating'].show(view='Categorical')
Explanation: Build a sentiment classifier
End of explanation
#ignore all 3* reviews
products = products[products['rating'] != 3]
#positive sentiment = 4* or 5* reviews
products['sentiment'] = products['rating'] >=4
products.head()
Explanation: Define what's a positive and a negative sentiment
We will ignore all reviews with rating = 3, since they tend to have a neutral sentiment. Reviews with a rating of 4 or higher will be considered positive, while the ones with rating of 2 or lower will have a negative sentiment.
End of explanation
train_data,test_data = products.random_split(.8, seed=0)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target='sentiment',
features=['word_count'],
validation_set=test_data)
Explanation: Let's train the sentiment classifier
End of explanation
sentiment_model.evaluate(test_data, metric='roc_curve')
sentiment_model.show(view='Evaluation')
Explanation: Evaluate the sentiment model
End of explanation
giraffe_reviews['predicted_sentiment'] = sentiment_model.predict(giraffe_reviews, output_type='probability')
giraffe_reviews.head()
Explanation: Applying the learned model to understand sentiment for Giraffe
End of explanation
giraffe_reviews = giraffe_reviews.sort('predicted_sentiment', ascending=False)
giraffe_reviews.head()
Explanation: Sort the reviews based on the predicted sentiment and explore
End of explanation
giraffe_reviews[0]['review']
giraffe_reviews[1]['review']
Explanation: Most positive reviews for the giraffe
End of explanation
giraffe_reviews[-1]['review']
giraffe_reviews[-2]['review']
Explanation: Show most negative reviews for giraffe
End of explanation |
13,993 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Chopsticks!
A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC).
An investigation for determining the optimum length of chopsticks.
Link to Abstract and Paper
the abstract below was adapted from the link
Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost.
For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students.
Download the data set for the adults, then answer the following questions based on the abstract and the data set.
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text.
1. What is the independent variable in the experiment?
Chopstick length
2. What is the dependent variable in the experiment?
The Food.Pinching.Efficiency (title in csv file) or PPPC (according to introduction) which is the measure of food-pinching performance.
3. How is the dependent variable operationally defined?
The number of peanuts picked and placed in a cup. Presumably this is a rate per unit time since the values given in the csv file are not integers.
4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled.
Think about the participants who generated the data and what they have in common. You don't need to guess any variables or read the full paper to determine these variables. (For example, it seems plausible that the material of the chopsticks was held constant, but this is not stated in the abstract or data description.)
Each group has the same gender and similar age (e.g. 31 male junior college students is one group). This means the age and gender were matched. Matching age could eliminate the effects of reduced flexibility, agility or mental focus that might be present in older subjects. Matching gender may be important because smaller hands (among women) may be better suited to smaller chopsticks.
One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
Step1: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
Step2: This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Effeciency" for each chopstick length. Run the block of code below.
Step3: 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
For the 31 male college students the best length was 240mm. | Python Code:
import pandas as pd
# pandas is a software library for data manipulation and analysis
# We commonly use shorter nicknames for certain packages. Pandas is often abbreviated to pd.
# hit shift + enter to run this cell or block of code
path = r'/Users/pradau/Dropbox/temp/Downloads/chopstick-effectiveness.csv'
# Change the path to the location where the chopstick-effectiveness.csv file is located on your computer.
# If you get an error when running this block of code, be sure the chopstick-effectiveness.csv is located at the path on your computer.
dataFrame = pd.read_csv(path)
dataFrame
Explanation: Chopsticks!
A few researchers set out to determine the optimal length of chopsticks for children and adults. They came up with a measure of how effective a pair of chopsticks performed, called the "Food Pinching Performance." The "Food Pinching Performance" was determined by counting the number of peanuts picked and placed in a cup (PPPC).
An investigation for determining the optimum length of chopsticks.
Link to Abstract and Paper
the abstract below was adapted from the link
Chopsticks are one of the most simple and popular hand tools ever invented by humans, but have not previously been investigated by ergonomists. Two laboratory studies were conducted in this research, using a randomised complete block design, to evaluate the effects of the length of the chopsticks on the food-serving performance of adults and children. Thirty-one male junior college students and 21 primary school pupils served as subjects for the experiment to test chopsticks lengths of 180, 210, 240, 270, 300, and 330 mm. The results showed that the food-pinching performance was significantly affected by the length of the chopsticks, and that chopsticks of about 240 and 180 mm long were optimal for adults and pupils, respectively. Based on these findings, the researchers suggested that families with children should provide both 240 and 180 mm long chopsticks. In addition, restaurants could provide 210 mm long chopsticks, considering the trade-offs between ergonomics and cost.
For the rest of this project, answer all questions based only on the part of the experiment analyzing the thirty-one adult male college students.
Download the data set for the adults, then answer the following questions based on the abstract and the data set.
If you double click on this cell, you will see the text change so that all of the formatting is removed. This allows you to edit this block of text. This block of text is written using Markdown, which is a way to format text using headers, links, italics, and many other options. You will learn more about Markdown later in the Nanodegree Program. Hit shift + enter or shift + return to show the formatted text.
1. What is the independent variable in the experiment?
Chopstick length
2. What is the dependent variable in the experiment?
The Food.Pinching.Efficiency (title in csv file) or PPPC (according to introduction) which is the measure of food-pinching performance.
3. How is the dependent variable operationally defined?
The number of peanuts picked and placed in a cup. Presumably this is a rate per unit time since the values given in the csv file are not integers.
4. Based on the description of the experiment and the data set, list at least two variables that you know were controlled.
Think about the participants who generated the data and what they have in common. You don't need to guess any variables or read the full paper to determine these variables. (For example, it seems plausible that the material of the chopsticks was held constant, but this is not stated in the abstract or data description.)
Each group has the same gender and similar age (e.g. 31 male junior college students is one group). This means the age and gender were matched. Matching age could eliminate the effects of reduced flexibility, agility or mental focus that might be present in older subjects. Matching gender may be important because smaller hands (among women) may be better suited to smaller chopsticks.
One great advantage of ipython notebooks is that you can document your data analysis using code, add comments to the code, or even add blocks of text using Markdown. These notebooks allow you to collaborate with others and share your work. For now, let's see some code for doing statistics.
End of explanation
dataFrame['Food.Pinching.Efficiency'].mean()
Explanation: Let's do a basic statistical calculation on the data using code! Run the block of code below to calculate the average "Food Pinching Efficiency" for all 31 participants and all chopstick lengths.
End of explanation
meansByChopstickLength = dataFrame.groupby('Chopstick.Length')['Food.Pinching.Efficiency'].mean().reset_index()
meansByChopstickLength
# reset_index() changes Chopstick.Length from an index to column. Instead of the index being the length of the chopsticks, the index is the row numbers 0, 1, 2, 3, 4, 5.
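# (Added aside, not from the original notebook:) the row with the highest mean
# efficiency answers question 5 programmatically.
meansByChopstickLength.loc[meansByChopstickLength['Food.Pinching.Efficiency'].idxmax()]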
Explanation: This number is helpful, but the number doesn't let us know which of the chopstick lengths performed best for the thirty-one male junior college students. Let's break down the data by chopstick length. The next block of code will generate the average "Food Pinching Efficiency" for each chopstick length. Run the block of code below.
End of explanation
# Causes plots to display within the notebook rather than in a new window
%pylab inline
import matplotlib.pyplot as plt
plt.scatter(x=meansByChopstickLength['Chopstick.Length'], y=meansByChopstickLength['Food.Pinching.Efficiency'])
plt.xlabel("Length in mm")
plt.ylabel("Efficiency in PPPC")
plt.title("Average Food Pinching Efficiency by Chopstick Length")
plt.show()
Explanation: 5. Which chopstick length performed the best for the group of thirty-one male junior college students?
For the 31 male college students the best length was 240mm.
End of explanation |
13,994 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Applying PyMC3 to Marketing Conversion data
One good use of Probabilistic Programming is trying to say something about our conversions.
We'll generate some fake data and do a simple 'transaction_observed' model since that's the kind of thing we would try to model with such data sets in the 'real world'.
Step1: We'll generate some example marketing data
Afterwards we'll build a simple model with an exponential prior (not necessarily a good prior)
Some simple data
We'll try to infer a model for the observed number of transactions.
Afterwards we'll apply Posterior Predictive Checks to this posterior.
Step2: Posterior Predictive Checks
PPCs are a great way to validate a model. The idea is to generate data sets from the model using parameter settings from draws from the posterior.
Step3: One common way to visualize is to look if the model can reproduce the patterns observed in the real data. For example, how close are the inferred means to the actual sample mean | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('bmh')
colors = ['#348ABD', '#A60628', '#7A68A6', '#467821', '#D55E00',
'#CC79A7', '#56B4E9', '#009E73', '#F0E442', '#0072B2']
Explanation: Applying PyMC3 to Marketing Conversion data
One good use of Probabilistic Programming is trying to say something about our conversions.
We'll generate some fake data and do a simple 'transaction_observed' model since that's the kind of thing we would try to model with such data sets in the 'real world'.
End of explanation
import pandas as pd
channels = {'A': [2292.04, 9],
'B': [1276.85, 2],
'C': [139.59, 3],
'D': [954.98, 5],
'E': [8000.98, 12],
'F': [2678.04, 6]
}
df = pd.DataFrame(channels)
df.values[1]
import pymc3 as pm
import matplotlib.pyplot as plt
import numpy as np
import sys
import theano.tensor as tt
import seaborn as sns
spend_obs = df.values[0]
transactions_obs = df.values[1]
model = pm.Model()
with pm.Model() as model:
# We'll use Exponential as our prior
c = pm.Exponential('Prior', 1/50.)
e = pm.Deterministic('spends_observations', spend_obs/ c)
# The observed number of transactions is a Poisson with mu set to the expectations
a = pm.Poisson('transactions_observed', mu=e, observed=transactions_obs)
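# Reading of the model above (summary comments added for clarity):
#   c ~ Exponential(1/50)            -- roughly "spend per transaction"
#   e = spend_obs / c                -- expected transactions per channel
#   transactions_obs ~ Poisson(e)    -- observed counts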
with model:
start = pm.find_MAP()
step = pm.NUTS( scaling=start, gamma=.55)
trace = pm.sample(20000, step, start=start, progressbar=True)
pm.traceplot(trace)
spends = trace.get_values('spends_observations')
Prior = trace.get_values('Prior')
x_lim = 60
burnin = 50000
fig = plt.figure(figsize=(10,6))
fig.add_subplot(211)
_ = plt.hist(spends, range=[0, x_lim], bins=x_lim, histtype='stepfilled')
_ = plt.xlim(1, x_lim)
_ = plt.ylabel('Frequency')
_ = plt.title('Posterior predictive distribution')
plt.tight_layout()
Explanation: We'll generate some example marketing data
Afterwards we'll build a simple model with an exponential prior (not necessarily a good prior)
Some simple data
We'll try to infer a model for the observed number of transactions.
Afterwards we'll apply Posterior Predictive Checks to this posterior.
End of explanation
# Simply running PPC will use the updated values and do prediction
ppc = pm.sample_ppc(trace, model=model, samples=500)
Explanation: Posterior Predictive Checks
PPCs are a great way to validate a model. The idea is to generate data sets from the model using parameter settings from draws from the posterior.
End of explanation
ax = plt.subplot()
sns.distplot([n.mean() for n in ppc['transactions_observed']],
kde=False, ax=ax)
ax.axvline(transactions_obs.mean())
ax.set(title='Posterior predictive of the mean', xlabel='mean(transaction observed)', ylabel='Frequency');
Explanation: One common way to visualize is to look if the model can reproduce the patterns observed in the real data. For example, how close are the inferred means to the actual sample mean:
End of explanation |
13,995 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
End to End Machine Learning Pipeline for Income Prediction
We use demographic features from the 1996 US census to build an end to end machine learning pipeline. The pipeline is also annotated so it can be run as a Kubeflow Pipeline using the Kale pipeline generator.
The notebook/pipeline stages are
Step1: Train Model
Step2: Note that for your own datasets you can use our utility function gen_category_map to create the category map
Step3: Define shuffled training and test set
Step4: Create feature transformation pipeline
Create feature pre-processor. Needs to have 'fit' and 'transform' methods. Different types of pre-processing can be applied to all or part of the features. In the example below we will standardize ordinal features and apply one-hot-encoding to categorical features.
Ordinal features
Step5: Categorical features
Step6: Combine and fit
Step7: Train Random Forest model
Fit on pre-processed (imputing, OHE, standardizing) data.
Step8: Define predict function
Step9: Train Explainer
Step10: Discretize the ordinal features into quartiles
Step11: Get Explanation
Below, we get an anchor for the prediction of the first observation in the test set. An anchor is a sufficient condition - that is, when the anchor holds, the prediction should be the same as the prediction for this instance.
Step12: We set the precision threshold to 0.95. This means that predictions on observations where the anchor holds will be the same as the prediction on the explained instance at least 95% of the time.
Step13: Train Outlier Detector
Step17: Deploy Seldon Core Model
Step19: Make a prediction request
Step21: Make an explanation request
Step24: Deploy Outlier Detector
Step26: Deploy KNative Eventing Event Display
Step28: Test Outlier Detection
Step29: Clean Up Resources | Python Code:
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from alibi.explainers import AnchorTabular
from alibi.datasets import fetch_adult
from minio import Minio
from minio.error import ResponseError
from joblib import dump, load
import dill
import time
import json
from subprocess import run, Popen, PIPE
from alibi_detect.utils.data import create_outlier_batch
MINIO_HOST="minio-service.kubeflow:9000"
MINIO_ACCESS_KEY="minio"
MINIO_SECRET_KEY="minio123"
MINIO_MODEL_BUCKET="seldon"
INCOME_MODEL_PATH="sklearn/income/model"
EXPLAINER_MODEL_PATH="sklearn/income/explainer"
OUTLIER_MODEL_PATH="sklearn/income/outlier"
DEPLOY_NAMESPACE="admin"
def get_minio():
return Minio(MINIO_HOST,
access_key=MINIO_ACCESS_KEY,
secret_key=MINIO_SECRET_KEY,
secure=False)
minioClient = get_minio()
buckets = minioClient.list_buckets()
for bucket in buckets:
print(bucket.name, bucket.creation_date)
if not minioClient.bucket_exists(MINIO_MODEL_BUCKET):
minioClient.make_bucket(MINIO_MODEL_BUCKET)
Explanation: End to End Machine Learning Pipeline for Income Prediction
We use demographic features from the 1996 US census to build an end to end machine learning pipeline. The pipeline is also annotated so it can be run as a Kubeflow Pipeline using the Kale pipeline generator.
The notebook/pipeline stages are:
Setup
Imports
pipeline-parameters
minio client test
Train a simple sklearn model and push to minio
Prepare an Anchors explainer for model and push to minio
Test Explainer
Train an isolation forest outlier detector for model and push to minio
Deploy a Seldon model and test
Deploy a KfServing model and test
Deploy an outlier detector
End of explanation
adult = fetch_adult()
adult.keys()
data = adult.data
target = adult.target
feature_names = adult.feature_names
category_map = adult.category_map
Explanation: Train Model
End of explanation
from alibi.utils.data import gen_category_map
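# Hypothetical usage sketch (not executed in this notebook): given a pandas
# DataFrame `df_raw` and the positions of its categorical columns, gen_category_map
# returns the {column index: [category values]} dict that AnchorTabular expects, e.g.
#   category_map = gen_category_map(df_raw, categorical_columns=[1, 3, 5])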
Explanation: Note that for your own datasets you can use our utility function gen_category_map to create the category map:
End of explanation
np.random.seed(0)
data_perm = np.random.permutation(np.c_[data, target])
data = data_perm[:,:-1]
target = data_perm[:,-1]
idx = 30000
X_train,Y_train = data[:idx,:], target[:idx]
X_test, Y_test = data[idx+1:,:], target[idx+1:]
Explanation: Define shuffled training and test set
End of explanation
ordinal_features = [x for x in range(len(feature_names)) if x not in list(category_map.keys())]
ordinal_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),
('scaler', StandardScaler())])
Explanation: Create feature transformation pipeline
Create feature pre-processor. Needs to have 'fit' and 'transform' methods. Different types of pre-processing can be applied to all or part of the features. In the example below we will standardize ordinal features and apply one-hot-encoding to categorical features.
Ordinal features:
End of explanation
categorical_features = list(category_map.keys())
categorical_transformer = Pipeline(steps=[('imputer', SimpleImputer(strategy='median')),
('onehot', OneHotEncoder(handle_unknown='ignore'))])
Explanation: Categorical features:
End of explanation
preprocessor = ColumnTransformer(transformers=[('num', ordinal_transformer, ordinal_features),
('cat', categorical_transformer, categorical_features)])
Explanation: Combine and fit:
End of explanation
np.random.seed(0)
clf = RandomForestClassifier(n_estimators=50)
model=Pipeline(steps=[("preprocess",preprocessor),("model",clf)])
model.fit(X_train,Y_train)
Explanation: Train Random Forest model
Fit on pre-processed (imputing, OHE, standardizing) data.
End of explanation
def predict_fn(x):
return model.predict(x)
#predict_fn = lambda x: clf.predict(preprocessor.transform(x))
print('Train accuracy: ', accuracy_score(Y_train, predict_fn(X_train)))
print('Test accuracy: ', accuracy_score(Y_test, predict_fn(X_test)))
dump(model, 'model.joblib')
print(get_minio().fput_object(MINIO_MODEL_BUCKET, f"{INCOME_MODEL_PATH}/model.joblib", 'model.joblib'))
Explanation: Define predict function
End of explanation
model.predict(X_train)
explainer = AnchorTabular(predict_fn, feature_names, categorical_names=category_map)
Explanation: Train Explainer
End of explanation
explainer.fit(X_train, disc_perc=[25, 50, 75])
with open("explainer.dill", "wb") as dill_file:
dill.dump(explainer, dill_file)
dill_file.close()
print(get_minio().fput_object(MINIO_MODEL_BUCKET, f"{EXPLAINER_MODEL_PATH}/explainer.dill", 'explainer.dill'))
Explanation: Discretize the ordinal features into quartiles
End of explanation
model.predict(X_train)
idx = 0
class_names = adult.target_names
print('Prediction: ', class_names[explainer.predict_fn(X_test[idx].reshape(1, -1))[0]])
Explanation: Get Explanation
Below, we get an anchor for the prediction of the first observation in the test set. An anchor is a sufficient condition - that is, when the anchor holds, the prediction should be the same as the prediction for this instance.
End of explanation
explanation = explainer.explain(X_test[idx], threshold=0.95)
print('Anchor: %s' % (' AND '.join(explanation['names'])))
print('Precision: %.2f' % explanation['precision'])
print('Coverage: %.2f' % explanation['coverage'])
Explanation: We set the precision threshold to 0.95. This means that predictions on observations where the anchor holds will be the same as the prediction on the explained instance at least 95% of the time.
End of explanation
from alibi_detect.od import IForest
od = IForest(
threshold=0.,
n_estimators=200,
)
od.fit(X_train)
np.random.seed(0)
perc_outlier = 5
threshold_batch = create_outlier_batch(X_train, Y_train, n_samples=1000, perc_outlier=perc_outlier)
X_threshold, y_threshold = threshold_batch.data.astype('float'), threshold_batch.target
#X_threshold = (X_threshold - mean) / stdev
print('{}% outliers'.format(100 * y_threshold.mean()))
od.infer_threshold(X_threshold, threshold_perc=100-perc_outlier)
print('New threshold: {}'.format(od.threshold))
threshold = od.threshold
X_outlier = [[300, 4, 4, 2, 1, 4, 4, 0, 0, 0, 600, 9]]
od.predict(
X_outlier
)
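# Reading the result (comment added for clarity): the detector returns a dict, and the
# flag used later in this notebook lives under ['data']['is_outlier'], where 1 means
# the instance scored above the threshold, i.e. it was flagged as an outlier.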
from alibi_detect.utils.saving import save_detector, load_detector
from os import listdir
from os.path import isfile, join
filepath="ifoutlier"
save_detector(od, filepath)
onlyfiles = [f for f in listdir(filepath) if isfile(join(filepath, f))]
for filename in onlyfiles:
print(filename)
print(get_minio().fput_object(MINIO_MODEL_BUCKET, f"{OUTLIER_MODEL_PATH}/{filename}", join(filepath, filename)))
Explanation: Train Outlier Detector
End of explanation
secret = f"""apiVersion: v1
kind: Secret
metadata:
name: seldon-init-container-secret
namespace: {DEPLOY_NAMESPACE}
type: Opaque
stringData:
AWS_ACCESS_KEY_ID: {MINIO_ACCESS_KEY}
AWS_SECRET_ACCESS_KEY: {MINIO_SECRET_KEY}
AWS_ENDPOINT_URL: http://{MINIO_HOST}
USE_SSL: "false"
with open("secret.yaml","w") as f:
f.write(secret)
run("cat secret.yaml | kubectl apply -f -", shell=True)
sa = f"""apiVersion: v1
kind: ServiceAccount
metadata:
name: minio-sa
namespace: {DEPLOY_NAMESPACE}
secrets:
- name: seldon-init-container-secret
with open("sa.yaml","w") as f:
f.write(sa)
run("kubectl apply -f sa.yaml", shell=True)
model_yaml=f"""apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: income-classifier
namespace: {DEPLOY_NAMESPACE}
spec:
predictors:
- componentSpecs:
graph:
implementation: SKLEARN_SERVER
modelUri: s3://{MINIO_MODEL_BUCKET}/{INCOME_MODEL_PATH}
envSecretRefName: seldon-init-container-secret
name: classifier
logger:
mode: all
explainer:
type: AnchorTabular
modelUri: s3://{MINIO_MODEL_BUCKET}/{EXPLAINER_MODEL_PATH}
envSecretRefName: seldon-init-container-secret
name: default
replicas: 1
with open("model.yaml","w") as f:
f.write(model_yaml)
run("kubectl apply -f model.yaml", shell=True)
run(f"kubectl rollout status -n {DEPLOY_NAMESPACE} deploy/$(kubectl get deploy -l seldon-deployment-id=income-classifier -o jsonpath='{{.items[0].metadata.name}}' -n {DEPLOY_NAMESPACE})", shell=True)
run(f"kubectl rollout status -n {DEPLOY_NAMESPACE} deploy/$(kubectl get deploy -l seldon-deployment-id=income-classifier -o jsonpath='{{.items[1].metadata.name}}' -n {DEPLOY_NAMESPACE})", shell=True)
Explanation: Deploy Seldon Core Model
End of explanation
payload='{"data": {"ndarray": [[53,4,0,2,8,4,4,0,0,0,60,9]]}}'
cmd=f"""curl -d '{payload}' \
http://income-classifier-default.{DEPLOY_NAMESPACE}:8000/api/v1.0/predictions \
-H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True,stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
print(raw)
Explanation: Make a prediction request
End of explanation
payload='{"data": {"ndarray": [[53,4,0,2,8,4,4,0,0,0,60,9]]}}'
cmd=f"""curl -d '{payload}' \
http://income-classifier-default-explainer.{DEPLOY_NAMESPACE}:9000/api/v1.0/explain \
-H "Content-Type: application/json"
"""
ret = Popen(cmd, shell=True,stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
print(raw)
Explanation: Make an explanation request
End of explanation
outlier_yaml=f"""apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: income-outlier
namespace: {DEPLOY_NAMESPACE}
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/minScale: "1"
spec:
containers:
- image: seldonio/alibi-detect-server:1.2.2-dev_alibidetect
imagePullPolicy: IfNotPresent
args:
- --model_name
- adultod
- --http_port
- '8080'
- --protocol
- seldon.http
- --storage_uri
- s3://{MINIO_MODEL_BUCKET}/{OUTLIER_MODEL_PATH}
- --reply_url
- http://default-broker
- --event_type
- io.seldon.serving.inference.outlier
- --event_source
- io.seldon.serving.incomeod
- OutlierDetector
envFrom:
- secretRef:
name: seldon-init-container-secret
with open("outlier.yaml","w") as f:
f.write(outlier_yaml)
run("kubectl apply -f outlier.yaml", shell=True)
trigger_outlier_yaml=f"""apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
name: income-outlier-trigger
namespace: {DEPLOY_NAMESPACE}
spec:
filter:
sourceAndType:
type: io.seldon.serving.inference.request
subscriber:
ref:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
name: income-outlier
with open("outlier_trigger.yaml","w") as f:
f.write(trigger_outlier_yaml)
run("kubectl apply -f outlier_trigger.yaml", shell=True)
run(f"kubectl rollout status -n {DEPLOY_NAMESPACE} deploy/$(kubectl get deploy -l serving.knative.dev/service=income-outlier -o jsonpath='{{.items[0].metadata.name}}' -n {DEPLOY_NAMESPACE})", shell=True)
Explanation: Deploy Outlier Detector
End of explanation
event_display=f"""apiVersion: apps/v1
kind: Deployment
metadata:
name: event-display
namespace: {DEPLOY_NAMESPACE}
spec:
replicas: 1
selector:
matchLabels: &labels
app: event-display
template:
metadata:
labels: *labels
spec:
containers:
- name: helloworld-go
# Source code: https://github.com/knative/eventing-contrib/tree/master/cmd/event_display
image: gcr.io/knative-releases/knative.dev/eventing-contrib/cmd/event_display@sha256:f4628e97a836c77ed38bd3b6fd3d0b06de4d5e7db6704772fe674d48b20bd477
---
kind: Service
apiVersion: v1
metadata:
name: event-display
namespace: {DEPLOY_NAMESPACE}
spec:
selector:
app: event-display
ports:
- protocol: TCP
port: 80
targetPort: 8080
---
apiVersion: eventing.knative.dev/v1alpha1
kind: Trigger
metadata:
name: income-outlier-display
namespace: {DEPLOY_NAMESPACE}
spec:
broker: default
filter:
attributes:
type: io.seldon.serving.inference.outlier
subscriber:
ref:
apiVersion: v1
kind: Service
name: event-display
with open("event_display.yaml","w") as f:
f.write(event_display)
run("kubectl apply -f event_display.yaml", shell=True)
run(f"kubectl rollout status -n {DEPLOY_NAMESPACE} deploy/event-display -n {DEPLOY_NAMESPACE}", shell=True)
Explanation: Deploy KNative Eventing Event Display
End of explanation
def predict():
payload='{"data": {"ndarray": [[300, 4, 4, 2, 1, 4, 4, 0, 0, 0, 600, 9]]}}'
    cmd=f"""curl -d '{payload}' \
http://income-classifier-default.{DEPLOY_NAMESPACE}:8000/api/v1.0/predictions \
-H "Content-Type: application/json"
ret = Popen(cmd, shell=True,stdout=PIPE)
raw = ret.stdout.read().decode("utf-8")
print(raw)
def get_outlier_event_display_logs():
cmd=f"kubectl logs $(kubectl get pod -l app=event-display -o jsonpath='{{.items[0].metadata.name}}' -n {DEPLOY_NAMESPACE}) -n {DEPLOY_NAMESPACE}"
ret = Popen(cmd, shell=True,stdout=PIPE)
res = ret.stdout.read().decode("utf-8").split("\n")
data= []
for i in range(0,len(res)):
if res[i] == 'Data,':
j = json.loads(json.loads(res[i+1]))
if "is_outlier"in j["data"].keys():
data.append(j)
if len(data) > 0:
return data[-1]
else:
return None
j = None
while j is None:
predict()
print("Waiting for outlier logs, sleeping")
time.sleep(2)
j = get_outlier_event_display_logs()
print(j)
print("Outlier",j["data"]["is_outlier"]==[1])
Explanation: Test Outlier Detection
End of explanation
run(f"kubectl delete sdep income-classifier -n {DEPLOY_NAMESPACE}", shell=True)
run(f"kubectl delete ksvc income-outlier -n {DEPLOY_NAMESPACE}", shell=True)
run(f"kubectl delete sa minio-sa -n {DEPLOY_NAMESPACE}", shell=True)
run(f"kubectl delete secret seldon-init-container-secret -n {DEPLOY_NAMESPACE}", shell=True)
run(f"kubectl delete deployment event-display -n {DEPLOY_NAMESPACE}", shell=True)
run(f"kubectl delete svc event-display -n {DEPLOY_NAMESPACE}", shell=True)
Explanation: Clean Up Resources
End of explanation |
13,996 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Active Directory Replication User Backdoor
Metadata
| Metadata | Value |
|
Step1: Download & Process Security Dataset
Step2: Analytic I
Look for any user accessing directory service objects with replication permissions GUIDs
| Data source | Event Provider | Relationship | Event |
|
Step3: Analytic II
Look for any user modifying directory service objects with replication permissions GUIDs
| Data source | Event Provider | Relationship | Event |
| | Python Code:
from openhunt.mordorutils import *
spark = get_spark()
Explanation: Active Directory Replication User Backdoor
Metadata
| Metadata | Value |
|:------------------|:---|
| collaborators | ['@Cyb3rWard0g', '@Cyb3rPandaH'] |
| creation date | 2019/01/01 |
| modification date | 2020/09/20 |
| playbook related | ['WIN-180815210510'] |
Hypothesis
Adversaries with enough permissions (domain admin) might be adding an ACL to the Root Domain for any user to abuse active directory replication services.
Technical Context
Active Directory replication is the process by which the changes that originate on one domain controller are automatically transferred to other domain controllers that store the same data.
Active Directory data takes the form of objects that have properties, or attributes.
Each object is an instance of an object class, and object classes and their respective attributes are defined in the Active Directory schema. The values of the attributes define the object, and a change to a value of an attribute must be transferred from the domain controller on which it occurs to every other domain controller that stores a replica of that object.
Offensive Tradecraft
An adversary with enough permissions (domain admin) can add an ACL to the Root Domain for any user, despite being in no privileged groups, having no malicious sidHistory, and not having local admin rights on the domain controller. This is done to bypass detection rules looking for Domain Admins or the DC machine accounts performing active directory replication requests against a domain controller.
The following access rights / permissions are needed for the replication request according to the domain functional level
| Control access right symbol | Identifying GUID used in ACE |
| :-----------------------------| :------------------------------|
| DS-Replication-Get-Changes | 1131f6aa-9c07-11d1-f79f-00c04fc2dcd2 |
| DS-Replication-Get-Changes-All | 1131f6ad-9c07-11d1-f79f-00c04fc2dcd2 |
| DS-Replication-Get-Changes-In-Filtered-Set | 89e95b76-444d-4c62-991a-0facbeda640c |
Additional reading
* https://github.com/OTRF/ThreatHunter-Playbook/tree/master/docs/library/active_directory_replication.md
Security Datasets
| Metadata | Value |
|:----------|:----------|
| docs | https://securitydatasets.com/notebooks/atomic/windows/defense_evasion/SDWIN-190301125905.html |
| link | https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/defense_evasion/host/empire_powerview_ldap_ntsecuritydescriptor.zip |
Analytics
Initialize Analytics Engine
End of explanation
sd_file = "https://raw.githubusercontent.com/OTRF/Security-Datasets/master/datasets/atomic/windows/defense_evasion/host/empire_powerview_ldap_ntsecuritydescriptor.zip"
registerMordorSQLTable(spark, sd_file, "sdTable")
Explanation: Download & Process Security Dataset
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, ObjectName, OperationType
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 4662
AND ObjectServer = "DS"
AND AccessMask = "0x40000"
AND ObjectType LIKE "%19195a5b_6da0_11d0_afd3_00c04fd930c9%"
'''
)
df.show(10,False)
Explanation: Analytic I
Look for any user accessing directory service objects with replication permissions GUIDs
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Windows active directory | Microsoft-Windows-Security-Auditing | User accessed AD Object | 4662 |
End of explanation
df = spark.sql(
'''
SELECT `@timestamp`, Hostname, SubjectUserName, ObjectDN, AttributeLDAPDisplayName
FROM sdTable
WHERE LOWER(Channel) = "security"
AND EventID = 5136
AND lower(AttributeLDAPDisplayName) = "ntsecuritydescriptor"
AND (AttributeValue LIKE "%1131f6aa_9c07_11d1_f79f_00c04fc2dcd2%"
OR AttributeValue LIKE "%1131f6ad_9c07_11d1_f79f_00c04fc2dcd2%"
OR AttributeValue LIKE "%89e95b76_444d_4c62_991a_0facbeda640c%")
'''
)
df.show(10,False)
Explanation: Analytic II
Look for any user modifying directory service objects with replication permissions GUIDs
| Data source | Event Provider | Relationship | Event |
|:------------|:---------------|--------------|-------|
| Windows active directory | Microsoft-Windows-Security-Auditing | User modified AD Object | 5136 |
End of explanation |
13,997 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a ConvNet PyTorch
In this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.
Step2: What's this PyTorch business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks
Step3: For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.
Step4: Example Model
Some assorted tidbits
Let's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.
We'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where
Step5: The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer
one after the other.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
Step6: PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note
Step7: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes)
Step8: GPU!
Now, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.
If this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.
Step9: Run the following cell to evaluate the performance of the forward pass running on the CPU
Step10: ... and now the GPU
Step11: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors
Step12: Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model
Step13: Check the accuracy of the model.
Let's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.
You should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.
But don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.
Step14: Don't forget the validation set!
And note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either loader_test or loader_val as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperparameter tuning, and only run the test set once at the end to report a final value.
Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >=70% accuracy on the CIFAR-10 validation set. You can use the check_accuracy and train functions from above.
Things you should try
Step15: Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Tell us here!
Test set -- run this only once
Now that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy. | Python Code:
import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
from torch.utils.data import DataLoader
from torch.utils.data import sampler
import torchvision.datasets as dset
import torchvision.transforms as T
import numpy as np
import timeit
Explanation: Training a ConvNet PyTorch
In this notebook, you'll learn how to use the powerful PyTorch framework to specify a conv net architecture and train it on the CIFAR-10 dataset.
End of explanation
class ChunkSampler(sampler.Sampler):
    """Samples elements sequentially from some offset.
Arguments:
num_samples: # of desired datapoints
        start: offset where we should start selecting from
    """
def __init__(self, num_samples, start = 0):
self.num_samples = num_samples
self.start = start
def __iter__(self):
return iter(range(self.start, self.start + self.num_samples))
def __len__(self):
return self.num_samples
NUM_TRAIN = 49000
NUM_VAL = 1000
cifar10_train = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=T.ToTensor())
loader_train = DataLoader(cifar10_train, batch_size=64, sampler=ChunkSampler(NUM_TRAIN, 0))
cifar10_val = dset.CIFAR10('./cs231n/datasets', train=True, download=True,
transform=T.ToTensor())
loader_val = DataLoader(cifar10_val, batch_size=64, sampler=ChunkSampler(NUM_VAL, NUM_TRAIN))
cifar10_test = dset.CIFAR10('./cs231n/datasets', train=False, download=True,
transform=T.ToTensor())
loader_test = DataLoader(cifar10_test, batch_size=64)
Explanation: What's this PyTorch business?
You've written a lot of code in this assignment to provide a whole host of neural network functionality. Dropout, Batch Norm, and 2D convolutions are some of the workhorses of deep learning in computer vision. You've also worked hard to make your code efficient and vectorized.
For the last part of this assignment, though, we're going to leave behind your beautiful codebase and instead migrate to one of two popular deep learning frameworks: in this instance, PyTorch (or TensorFlow, if you switch over to that notebook).
Why?
Our code will now run on GPUs! Much faster training. When using a framework like PyTorch or TensorFlow you can harness the power of the GPU for your own custom neural network architectures without having to write CUDA code directly (which is beyond the scope of this class).
We want you to be ready to use one of these frameworks for your project so you can experiment more efficiently than if you were writing every feature you want to use by hand.
We want you to stand on the shoulders of giants! TensorFlow and PyTorch are both excellent frameworks that will make your lives a lot easier, and now that you understand their guts, you are free to use them :)
We want you to be exposed to the sort of deep learning code you might run into in academia or industry.
How will I learn PyTorch?
If you've used Torch before, but are new to PyTorch, this tutorial might be of use: http://pytorch.org/tutorials/beginner/former_torchies_tutorial.html
Otherwise, this notebook will walk you through much of what you need to do to train models in Torch. See the end of the notebook for some links to helpful tutorials if you want to learn more or need further clarification on topics that aren't fully explained here.
Load Datasets
We load the CIFAR-10 dataset. This might take a couple minutes the first time you do it, but the files should stay cached after that.
End of explanation
dtype = torch.FloatTensor # the CPU datatype
# Constant to control how frequently we print train loss
print_every = 100
# This is a little utility that we'll use to reset the model
# if we want to re-initialize all our parameters
def reset(m):
if hasattr(m, 'reset_parameters'):
m.reset_parameters()
Explanation: For now, we're going to use a CPU-friendly datatype. Later, we'll switch to a datatype that will move all our computations to the GPU and measure the speedup.
End of explanation
class Flatten(nn.Module):
def forward(self, x):
N, C, H, W = x.size() # read in N, C, H, W
return x.view(N, -1) # "flatten" the C * H * W values into a single vector per image
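# Quick illustrative check (added; not part of the original notebook): a fake batch of
# 4 RGB 32x32 images should flatten to shape (4, 3*32*32) = (4, 3072).
print(Flatten()(torch.randn(4, 3, 32, 32)).size())  # expected: torch.Size([4, 3072])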
Explanation: Example Model
Some assorted tidbits
Let's start by looking at a simple model. First, note that PyTorch operates on Tensors, which are n-dimensional arrays functionally analogous to numpy's ndarrays, with the additional feature that they can be used for computations on GPUs.
We'll provide you with a Flatten function, which we explain here. Remember that our image data (and more relevantly, our intermediate feature maps) are initially N x C x H x W, where:
* N is the number of datapoints
* C is the number of channels
* H is the height of the intermediate feature map in pixels
* W is the height of the intermediate feature map in pixels
This is the right way to represent the data when we are doing something like a 2D convolution, that needs spatial understanding of where the intermediate features are relative to each other. When we input data into fully connected affine layers, however, we want each datapoint to be represented by a single vector -- it's no longer useful to segregate the different channels, rows, and columns of the data. So, we use a "Flatten" operation to collapse the C x H x W values per representation into a single long vector. The Flatten function below first reads in the N, C, H, and W values from a given batch of data, and then returns a "view" of that data. "View" is analogous to numpy's "reshape" method: it reshapes x's dimensions to be N x ??, where ?? is allowed to be anything (in this case, it will be C x H x W, but we don't need to specify that explicitly).
End of explanation
# Here's where we define the architecture of the model...
simple_model = nn.Sequential(
nn.Conv2d(3, 32, kernel_size=7, stride=2),
nn.ReLU(inplace=True),
Flatten(), # see above for explanation
nn.Linear(5408, 10), # affine layer
)
# Set the type of all data in this model to be FloatTensor
simple_model.type(dtype)
loss_fn = nn.CrossEntropyLoss().type(dtype)
optimizer = optim.Adam(simple_model.parameters(), lr=1e-2) # lr sets the learning rate of the optimizer
Explanation: The example model itself
The first step to training your own model is defining its architecture.
Here's an example of a convolutional neural network defined in PyTorch -- try to understand what each line is doing, remembering that each layer is composed upon the previous layer. We haven't trained anything yet - that'll come next - for now, we want you to understand how everything gets set up. nn.Sequential is a container which applies each layer
one after the other.
In that example, you see 2D convolutional layers (Conv2d), ReLU activations, and fully-connected layers (Linear). You also see the Cross-Entropy loss function, and the Adam optimizer being used.
Make sure you understand why the parameters of the Linear layer are 5408 and 10.
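(As a quick check, added for clarity: with no padding, a 7x7 convolution with stride 2 over a 32x32 input produces feature maps of size floor((32 - 7) / 2) + 1 = 13, so flattening 32 filters of 13x13 gives 32 * 13 * 13 = 5408 values, and 10 is simply the number of CIFAR-10 classes.)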
End of explanation
fixed_model_base = nn.Sequential( # You fill this in!
)
fixed_model = fixed_model_base.type(dtype)
Explanation: PyTorch supports many other layer types, loss functions, and optimizers - you will experiment with these next. Here's the official API documentation for these (if any of the parameters used above were unclear, this resource will also be helpful). One note: what we call in the class "spatial batch norm" is called "BatchNorm2D" in PyTorch.
Layers: http://pytorch.org/docs/nn.html
Activations: http://pytorch.org/docs/nn.html#non-linear-activations
Loss functions: http://pytorch.org/docs/nn.html#loss-functions
Optimizers: http://pytorch.org/docs/optim.html#algorithms
Training a specific model
In this section, we're going to specify a model for you to construct. The goal here isn't to get good performance (that'll be next), but instead to get comfortable with understanding the PyTorch documentation and configuring your own model.
Using the code provided above as guidance, and using the following PyTorch documentation, specify a model with the following architecture:
7x7 Convolutional Layer with 32 filters and stride of 1
ReLU Activation Layer
Spatial Batch Normalization Layer
2x2 Max Pooling layer with a stride of 2
Affine layer with 1024 output units
ReLU Activation Layer
Affine layer from 1024 input units to 10 outputs
And finally, set up a cross-entropy loss function and the RMSprop learning rule.
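One possible way to fill in the fixed_model_base placeholder above (a sketch, not an official solution; the 5408 comes from 32 filters of 13x13 after the 7x7/stride-1 convolution and 2x2 pooling):
fixed_model_base = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=7, stride=1),
    nn.ReLU(inplace=True),
    nn.BatchNorm2d(32),
    nn.MaxPool2d(kernel_size=2, stride=2),
    Flatten(),
    nn.Linear(5408, 1024),
    nn.ReLU(inplace=True),
    nn.Linear(1024, 10),
)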
End of explanation
## Now we're going to feed a random batch into the model you defined and make sure the output is the right size
x = torch.randn(64, 3, 32, 32).type(dtype)
x_var = Variable(x.type(dtype)) # Construct a PyTorch Variable out of your input data
ans = fixed_model(x_var) # Feed it through the model!
# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
Explanation: To make sure you're doing the right thing, use the following tool to check the dimensionality of your output (it should be 64 x 10, since our batches have size 64 and the output of the final affine layer should be 10, corresponding to our 10 classes):
End of explanation
# Verify that CUDA is properly configured and you have a GPU available
torch.cuda.is_available()
import copy
gpu_dtype = torch.cuda.FloatTensor
fixed_model_gpu = copy.deepcopy(fixed_model_base).type(gpu_dtype)
x_gpu = torch.randn(64, 3, 32, 32).type(gpu_dtype)
x_var_gpu = Variable(x.type(gpu_dtype)) # Construct a PyTorch Variable out of your input data
ans = fixed_model_gpu(x_var_gpu) # Feed it through the model!
# Check to make sure what comes out of your model
# is the right dimensionality... this should be True
# if you've done everything correctly
np.array_equal(np.array(ans.size()), np.array([64, 10]))
Explanation: GPU!
Now, we're going to switch the dtype of the model and our data to the GPU-friendly tensors, and see what happens... everything is the same, except we are casting our model and input tensors as this new dtype instead of the old one.
If this returns false, or otherwise fails in a not-graceful way (i.e., with some error message), you may not have an NVIDIA GPU available on your machine. If you're running locally, we recommend you switch to Google Cloud and follow the instructions to set up a GPU there. If you're already on Google Cloud, something is wrong -- make sure you followed the instructions on how to request and use a GPU on your instance. If you did, post on Piazza or come to Office Hours so we can help you debug.
End of explanation
%%timeit
ans = fixed_model(x_var)
Explanation: Run the following cell to evaluate the performance of the forward pass running on the CPU:
End of explanation
%%timeit
torch.cuda.synchronize() # Make sure there are no pending GPU computations
ans = fixed_model_gpu(x_var_gpu) # Feed it through the model!
torch.cuda.synchronize() # Make sure there are no pending GPU computations
Explanation: ... and now the GPU:
End of explanation
loss_fn = None
optimizer = None
pass
# This sets the model in "training" mode. This is relevant for some layers that may have different behavior
# in training mode vs testing mode, such as Dropout and BatchNorm.
fixed_model_gpu.train()
# Load one batch at a time.
for t, (x, y) in enumerate(loader_train):
x_var = Variable(x.type(gpu_dtype))
y_var = Variable(y.type(gpu_dtype).long())
# This is the forward pass: predict the scores for each class, for each x in the batch.
scores = fixed_model_gpu(x_var)
# Use the correct y values and the predicted y values to compute the loss.
loss = loss_fn(scores, y_var)
if (t + 1) % print_every == 0:
print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))
# Zero out all of the gradients for the variables which the optimizer will update.
optimizer.zero_grad()
# This is the backwards pass: compute the gradient of the loss with respect to each
# parameter of the model.
loss.backward()
# Actually update the parameters of the model using the gradients computed by the backwards pass.
optimizer.step()
Explanation: You should observe that even a simple forward pass like this is significantly faster on the GPU. So for the rest of the assignment (and when you go train your models in assignment 3 and your project!), you should use the GPU datatype for your model and your tensors: as a reminder that is torch.cuda.FloatTensor (in our notebook here as gpu_dtype)
Train the model.
Now that you've seen how to define a model and do a single forward pass of some data through it, let's walk through how you'd actually train one whole epoch over your training data (using the simple_model we provided above).
Make sure you understand how each PyTorch function used below corresponds to what you implemented in your custom neural network implementation.
Note that because we are not resetting the weights anywhere below, if you run the cell multiple times, you are effectively training multiple epochs (so your performance should improve).
First, set up an RMSprop optimizer (using a 1e-3 learning rate) and a cross-entropy loss function:
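A minimal way to fill in the loss_fn / optimizer placeholders in the cell above might look like this (a sketch mirroring the earlier simple_model setup, but with RMSprop):
loss_fn = nn.CrossEntropyLoss().type(gpu_dtype)
optimizer = optim.RMSprop(fixed_model_gpu.parameters(), lr=1e-3)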
End of explanation
def train(model, loss_fn, optimizer, num_epochs = 1):
for epoch in range(num_epochs):
print('Starting epoch %d / %d' % (epoch + 1, num_epochs))
model.train()
for t, (x, y) in enumerate(loader_train):
x_var = Variable(x.type(gpu_dtype))
y_var = Variable(y.type(gpu_dtype).long())
scores = model(x_var)
loss = loss_fn(scores, y_var)
if (t + 1) % print_every == 0:
print('t = %d, loss = %.4f' % (t + 1, loss.data[0]))
optimizer.zero_grad()
loss.backward()
optimizer.step()
def check_accuracy(model, loader):
if loader.dataset.train:
print('Checking accuracy on validation set')
else:
print('Checking accuracy on test set')
num_correct = 0
num_samples = 0
model.eval() # Put the model in test mode (the opposite of model.train(), essentially)
for x, y in loader:
x_var = Variable(x.type(gpu_dtype), volatile=True)
scores = model(x_var)
_, preds = scores.data.cpu().max(1)
num_correct += (preds == y).sum()
num_samples += preds.size(0)
acc = float(num_correct) / num_samples
print('Got %d / %d correct (%.2f)' % (num_correct, num_samples, 100 * acc))
Explanation: Now you've seen how the training process works in PyTorch. To save you writing boilerplate code, we're providing the following helper functions to help you train for multiple epochs and check the accuracy of your model:
End of explanation
torch.cuda.random.manual_seed(12345)
fixed_model_gpu.apply(reset)
train(fixed_model_gpu, loss_fn, optimizer, num_epochs=1)
check_accuracy(fixed_model_gpu, loader_val)
Explanation: Check the accuracy of the model.
Let's see the train and check_accuracy code in action -- feel free to use these methods when evaluating the models you develop below.
You should get a training loss of around 1.2-1.4, and a validation accuracy of around 50-60%. As mentioned above, if you re-run the cells, you'll be training more epochs, so your performance will improve past these numbers.
But don't worry about getting these numbers better -- this was just practice before you tackle designing your own model.
End of explanation
# Train your model here, and make sure the output of this cell is the accuracy of your best model on the
# train, val, and test sets. Here's some code to get you started. The output of this cell should be the training
# and validation accuracy on your best model (measured by validation accuracy).
model = None
loss_fn = None
optimizer = None
train(model, loss_fn, optimizer, num_epochs=1)
check_accuracy(model, loader_val)
Explanation: Don't forget the validation set!
And note that you can use the check_accuracy function to evaluate on either the test set or the validation set, by passing either loader_test or loader_val as the second argument to check_accuracy. You should not touch the test set until you have finished your architecture and hyperparameter tuning, and only run the test set once at the end to report a final value.
Train a great model on CIFAR-10!
Now it's your job to experiment with architectures, hyperparameters, loss functions, and optimizers to train a model that achieves >=70% accuracy on the CIFAR-10 validation set. You can use the check_accuracy and train functions from above.
Things you should try:
Filter size: Above we used 7x7; this makes pretty pictures but smaller filters may be more efficient
Number of filters: Above we used 32 filters. Do more or fewer do better?
Pooling vs Strided Convolution: Do you use max pooling or just stride convolutions?
Batch normalization: Try adding spatial batch normalization after convolution layers and vanilla batch normalization after affine layers. Do your networks train faster?
Network architecture: The network above has two layers of trainable parameters. Can you do better with a deep network? Good architectures to try include:
[conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[conv-relu-conv-relu-pool]xN -> [affine]xM -> [softmax or SVM]
[batchnorm-relu-conv]xN -> [affine]xM -> [softmax or SVM]
Global Average Pooling: Instead of flattening and then having multiple affine layers, perform convolutions until your image gets small (7x7 or so) and then perform an average pooling operation to get to a 1x1 image picture (1, 1 , Filter#), which is then reshaped into a (Filter#) vector. This is used in Google's Inception Network (See Table 1 for their architecture).
Regularization: Add l2 weight regularization, or perhaps use Dropout.
Tips for training
For each network architecture that you try, you should tune the learning rate and regularization strength. When doing this there are a couple important things to keep in mind:
If the parameters are working well, you should see improvement within a few hundred iterations
Remember the coarse-to-fine approach for hyperparameter tuning: start by testing a large range of hyperparameters for just a few training iterations to find the combinations of parameters that are working at all.
Once you have found some sets of parameters that seem to work, search more finely around these parameters. You may need to train for more epochs.
You should use the validation set for hyperparameter search, and save your test set for evaluating your architecture on the best parameters as selected by the validation set.
Going above and beyond
If you are feeling adventurous there are many other features you can implement to try and improve your performance. You are not required to implement any of these; however they would be good things to try for extra credit.
Alternative update steps: For the assignment we implemented SGD+momentum, RMSprop, and Adam; you could try alternatives like AdaGrad or AdaDelta.
Alternative activation functions such as leaky ReLU, parametric ReLU, ELU, or MaxOut.
Model ensembles
Data augmentation
New Architectures
ResNets where the input from the previous layer is added to the output.
DenseNets where inputs into previous layers are concatenated together.
This blog has an in-depth overview
If you do decide to implement something extra, clearly describe it in the "Extra Credit Description" cell below.
What we expect
At the very least, you should be able to train a ConvNet that gets at least 70% accuracy on the validation set. This is just a lower bound - if you are careful it should be possible to get accuracies much higher than that! Extra credit points will be awarded for particularly high-scoring models or unique approaches.
You should use the space below to experiment and train your network.
Have fun and happy training!
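If you want a concrete starting point, a sketch following the [conv-relu-pool]xN -> [affine] pattern suggested above could look like this (hyperparameters are illustrative, not tuned):
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2, 2),    # 32 x 16 x 16
    nn.Conv2d(32, 64, kernel_size=3, padding=1),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.MaxPool2d(2, 2),    # 64 x 8 x 8
    Flatten(),
    nn.Linear(64 * 8 * 8, 256),
    nn.ReLU(inplace=True),
    nn.Linear(256, 10),
).type(gpu_dtype)
loss_fn = nn.CrossEntropyLoss().type(gpu_dtype)
optimizer = optim.Adam(model.parameters(), lr=1e-3)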
End of explanation
best_model = None
check_accuracy(best_model, loader_test)
Explanation: Describe what you did
In the cell below you should write an explanation of what you did, any additional features that you implemented, and any visualizations or graphs that you make in the process of training and evaluating your network.
Tell us here!
Test set -- run this only once
Now that we've gotten a result we're happy with, we test our final model on the test set (which you should store in best_model). This would be the score we would achieve on a competition. Think about how this compares to your validation set accuracy.
End of explanation |
13,998 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous Control
In this notebook you will solve continuous control environment using either Twin Delayed DDPG (TD3) or Soft Actor-Critic (SAC). Both are off-policy algorithms that are current go-to algorithms for continuous control tasks.
Select one of these two algorithms (TD3 or SAC) to implement. Both algorithms are extensions of basic Deep Deterministic Policy Gradient (DDPG) algorithm, and DDPG is kind of "DQN with another neural net approximating greedy policy", and all that differs is a set of stabilization tricks
Step1: First, we will create an instance of the environment. In pybullet-gym, if render is called before the first reset, then you will (hopefully) see the visualisation of the 3D physics environment.
Step2: Let's run random policy and see how it looks.
Step3: So, basically most episodes are 1000 steps long (then happens termination by time), though sometimes we are terminated earlier if simulation discovers some obvious reasons to think that we crashed our ant. Important thing about continuous control tasks like this is that we receive non-trivial signal at each step
Step4: This dense signal will guide our optimizations. It also partially explains why off-policy algorithms are more effective and sample-efficient than on-policy algorithms like PPO
Step5: We will add only one wrapper to our environment to simply write summaries, mainly, the total reward during an episode.
Step6: Models
Let's start with critic model. On the one hand, it will function as an approximation of $Q^*(s, a)$, on the other hand it evaluates current actor $\pi$ and can be viewed as $Q^{\pi}(s, a)$. This critic will take both state $s$ and action $a$ as input and output a scalar value. Recommended architecture is 3-layered MLP.
Danger
Step7: Next, let's define a policy, or an actor $\pi$. Use architecture, similar to critic (3-layered MLP). The output depends on algorithm
Step8: For SAC, model gaussian policy. This means policy distribution is going to be multivariate normal with diagonal covariance. The policy head will predict the mean and covariance, and it should be guaranteed that covariance is non-negative. Important
Step12: ReplayBuffer
The same as in DQN. You can copy code from your DQN assignment, just check that it works fine with continuous actions (probably it is).
Let's recall the interface
Step13: Initialization
Let's start initializing our algorithm. Here are our hyperparameters
Step14: Here is our experience replay
Step15: Here is our models
Step16: To stabilize training, we will require target networks - slow updating copies of our models. In TD3, both critics and actor have their copies, in SAC it is assumed that only critics require target copies while actor is always used fresh.
Step17: In continuous control, target networks are usually updated using exponential smoothing
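A common way to write this update (the smoothing coefficient below is a typical choice, not something prescribed by this notebook) is
$$\theta_{\text{target}} \leftarrow (1 - \tau)\,\theta_{\text{target}} + \tau\,\theta, \qquad \tau \approx 0.005$$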
Step18: Finally, we will have three optimization procedures to train our three models, so let's welcome our three Adams
Step19: Critic target computation
Finally, let's discuss our losses for critic and actor.
To train both critics we would like to minimize MSE using 1-step targets
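For reference, a TD3-style form of this target can be written as
$$y(s, a) = r(s, a) + \gamma \min_{i=1,2} Q_{\theta'_i}\big(s', \pi'(s') + \varepsilon\big), \qquad \varepsilon \text{ -- clipped noise,}$$
with the MSE taken between each $Q_{\theta_i}(s, a)$ and $y(s, a)$ (this is the standard TD3 formulation, reconstructed here rather than quoted from the notebook).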
Step20: To train actor we want simply to maximize
$$\mathbb{E}_{a \sim \pi(a \mid s)} Q(s, a) \to \max_{\pi}$$
in TD3, because of deterministic policy, the expectation reduces
Step21: Pipeline
Finally combining all together and launching our algorithm. Your goal is to reach at least 1000 average reward during evaluation after training in this ant environment (since this is a new hometask, this threshold might be updated, so at least just see if your ant learned to walk in the rendered simulation).
rewards should rise more or less steadily in this environment. There can be some drops due to instabilities of algorithm, but it should eventually start rising after 100K-200K iterations. If no progress in reward is observed after these first 100K-200K iterations, there is a bug.
gradient norm appears to be quite big for this task, it is ok if it reaches 100-200 (we handled it with clip_grad_norm). Consider everything exploded if it starts growing exponentially, then there is a bug.
Step22: Evaluation
Step23: Record | Python Code:
!git clone https://github.com/benelot/pybullet-gym lib/pybullet-gym
!pip install -e lib/pybullet-gym
import gym
import numpy as np
import pybulletgym
Explanation: Continuous Control
In this notebook you will solve a continuous control environment using either Twin Delayed DDPG (TD3) or Soft Actor-Critic (SAC). Both are off-policy algorithms that are the current go-to algorithms for continuous control tasks.
Select one of these two algorithms (TD3 or SAC) to implement. Both algorithms are extensions of the basic Deep Deterministic Policy Gradient (DDPG) algorithm, and DDPG is kind of "DQN with another neural net approximating the greedy policy"; all that differs is a set of stabilization tricks:
* TD3 trains a deterministic policy, while SAC uses a stochastic policy. This means that for SAC you can solve the exploration-exploitation trade-off by simply sampling from the policy, while in TD3 you will have to add noise to your actions.
* TD3 proposes to stabilize targets by adding clipped noise to actions, which slightly prevents overestimation. In SAC, we formally switch to the formalism of Maximum Entropy RL and add an entropy bonus into our value function.
Also, both algorithms utilize the twin trick: train two critics and use pessimistic targets by taking the minimum of the two proposals. The standard trick with target networks is also necessary. We will go through all these tricks step-by-step.
SAC is probably a less clumsy scheme than TD3, but requires a bit more code to implement. A more detailed description of the algorithms can be found in the Spinning Up documentation:
* on DDPG
* on TD3
* on SAC
Environment
For now, let's start with our environment. To run the environment you will need to install
pybullet-gym, which, unlike MuJoCo,
does not require you to have a license.
Recently there have been some weird troubles with pybullet :( If nothing works, try ver. 2.5.6: pip install pybullet==2.5.6
To install the library:
End of explanation
env = gym.make("AntPyBulletEnv-v0")
# we want to look inside
env.render()
# examples of states and actions
print("observation space: ", env.observation_space,
"\nobservations:", env.reset())
print("action space: ", env.action_space,
"\naction_sample: ", env.action_space.sample())
Explanation: First, we will create an instance of the environment. In pybullet-gym, if render is called before the first reset, then you will (hopefully) see the visualisation of the 3D physics environment.
End of explanation
class RandomActor():
def get_action(self, states):
assert len(states.shape) == 1, "can't work with batches"
return env.action_space.sample()
s = env.reset()
rewards_per_step = []
actor = RandomActor()
for i in range(10000):
a = actor.get_action(s)
s, r, done, _ = env.step(a)
rewards_per_step.append(r)
if done:
s = env.reset()
print("done: ", i)
Explanation: Let's run a random policy and see how it looks.
End of explanation
rewards_per_step[100:110]
Explanation: So, basically most episodes are 1000 steps long (after which termination by time limit happens), though sometimes we are terminated earlier if the simulation discovers some obvious reasons to think that we crashed our ant. An important thing about continuous control tasks like this is that we receive a non-trivial reward signal at each step:
End of explanation
env.close()
Explanation: This dense signal will guide our optimizations. It also partially explains why off-policy algorithms are more effective and sample-efficient than on-policy algorithms like PPO: 1-step targets are already quite informative.
End of explanation
from logger import TensorboardSummaries as Summaries
env = gym.make("AntPyBulletEnv-v0")
env = Summaries(env, "MyFirstWalkingAnt");
state_dim = env.observation_space.shape[0] # dimension of state space (28 numbers)
action_dim = env.action_space.shape[0] # dimension of action space (8 numbers)
Explanation: We will add only one wrapper to our environment to simply write summaries, mainly, the total reward during an episode.
End of explanation
import torch
import torch.nn as nn
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
class Critic(nn.Module):
def __init__(self, state_dim, action_dim):
super().__init__()
<YOUR CODE>
def get_qvalues(self, states, actions):
'''
input:
states - tensor, (batch_size x features)
actions - tensor, (batch_size x actions_dim)
output:
qvalues - tensor, critic estimation, (batch_size)
'''
qvalues = <YOUR CODE>
assert len(qvalues.shape) == 1 and qvalues.shape[0] == states.shape[0]
return qvalues
Explanation: Models
Let's start with the critic model. On the one hand, it will function as an approximation of $Q^*(s, a)$; on the other hand, it evaluates the current actor $\pi$ and can be viewed as $Q^{\pi}(s, a)$. This critic will take both state $s$ and action $a$ as input and output a scalar value. The recommended architecture is a 3-layered MLP.
Danger: when a model has a scalar output, it is a good rule to squeeze it to avoid unexpected broadcasting, since [batch_size, 1] broadcasts with many tensor shapes.
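As a quick illustration of that broadcasting pitfall (a toy snippet, not part of the assignment):
import torch
q = torch.ones(128, 1)                    # critic output that was not squeezed
y = torch.zeros(128)                      # batch of targets
print(((q - y) ** 2).shape)               # torch.Size([128, 128]) -- silent broadcasting bug
print(((q.squeeze(-1) - y) ** 2).shape)   # torch.Size([128]) -- what we actually want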
End of explanation
# template for TD3; template for SAC is below
class TD3_Actor(nn.Module):
def __init__(self, state_dim, action_dim):
super().__init__()
<YOUR CODE>
def get_action(self, states, std_noise=0.1):
'''
Used to collect data by interacting with environment,
so your have to add some noise to actions.
input:
states - numpy, (batch_size x features)
output:
actions - numpy, (batch_size x actions_dim)
'''
# no gradient computation is required here since we will use this only for interaction
with torch.no_grad():
actions = <YOUR CODE>
assert isinstance(actions, (list,np.ndarray)), "convert actions to numpy to send into env"
assert actions.max() <= 1. and actions.min() >= -1, "actions must be in the range [-1, 1]"
return actions
def get_best_action(self, states):
'''
Will be used to optimize actor. Requires differentiable w.r.t. parameters actions.
input:
states - PyTorch tensor, (batch_size x features)
output:
actions - PyTorch tensor, (batch_size x actions_dim)
'''
actions = <YOUR CODE>
assert actions.requires_grad, "you must be able to compute gradients through actions"
return actions
def get_target_action(self, states, std_noise=0.2, clip_eta=0.5):
'''
Will be used to create target for critic optimization.
Returns actions with added "clipped noise".
input:
states - PyTorch tensor, (batch_size x features)
output:
actions - PyTorch tensor, (batch_size x actions_dim)
'''
# no gradient computation is required here since this will only be used to build critic targets
with torch.no_grad():
actions = <YOUR CODE>
# actions can fly out of [-1, 1] range after added noise
return actions.clamp(-1, 1)
Explanation: Next, let's define a policy, or an actor $\pi$. Use an architecture similar to the critic (3-layered MLP). The output depends on the algorithm:
For TD3, model a deterministic policy. You should output action_dim numbers in the range $[-1, 1]$. Unfortunately, deterministic policies lead to problems with stability and exploration, so we will need three "modes" in which this policy can operate:
* First one - greedy - is a simple feedforward pass through the network that will be used to train the actor.
* Second one - exploration mode - is when we need to add noise (e.g. Gaussian) to our actions to collect more diverse data.
* Third mode - "clipped noise" - will be used when we require a target for the critic, where we need to somehow "noise" our actor output, but not too much, so we add clipped noise to our output:
$$\pi_{\theta}(s) + \varepsilon, \quad \varepsilon = \operatorname{clip}(\epsilon, -0.5, 0.5), \epsilon \sim \mathcal{N}(0, \sigma^2 I)$$
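A minimal sketch of this "clipped noise" mode (the helper name and defaults here are illustrative, not part of the required interface):
import torch
def clipped_noise_actions(mean_actions, std_noise=0.2, clip_eta=0.5):
    eps = torch.randn_like(mean_actions) * std_noise   # epsilon ~ N(0, sigma^2 I)
    eps = eps.clamp(-clip_eta, clip_eta)               # clip the noise to [-0.5, 0.5]
    return (mean_actions + eps).clamp(-1, 1)           # keep the resulting actions in [-1, 1]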
End of explanation
# template for SAC
from torch.distributions import Normal
class SAC_Actor(nn.Module):
def __init__(self, state_dim, action_dim):
super().__init__()
<YOUR CODE>
def apply(self, states):
'''
For given batch of states samples actions and also returns its log prob.
input:
states - PyTorch tensor, (batch_size x features)
output:
actions - PyTorch tensor, (batch_size x action_dim)
log_prob - PyTorch tensor, (batch_size)
'''
<YOUR CODE>
return actions, log_prob
def get_action(self, states):
'''
Used to interact with environment by sampling actions from policy
input:
states - numpy, (batch_size x features)
output:
actions - numpy, (batch_size x actions_dim)
'''
# no gradient computation is required here since we will use this only for interaction
with torch.no_grad():
# hint: you can use `apply` method here
actions = <YOUR CODE>
assert isinstance(actions, (list,np.ndarray)), "convert actions to numpy to send into env"
assert actions.max() <= 1. and actions.min() >= -1, "actions must be in the range [-1, 1]"
return actions
Explanation: For SAC, model a Gaussian policy. This means the policy distribution is going to be a multivariate normal with diagonal covariance. The policy head will predict the mean and covariance, and it should be guaranteed that the covariance is non-negative. Important: the way you model the covariance strongly influences the optimization procedure, so here are some options: let $f_{\theta}$ be the output of the covariance head, then:
* use exponential function $\sigma(s) = \exp(f_{\theta}(s))$
* transform the output to $[-1, 1]$ using tanh, then project it to some interval $[m, M]$, where $m = -20$, $M = 2$, and then apply the exponential function. This will guarantee that the range of the modeled covariance is adequate. So, the resulting formula is:
$$\sigma(s) = \exp\left(m + 0.5(M - m)(\tanh(f_{\theta}(s)) + 1)\right)$$
* softplus operation $\sigma(s) = \log(1 + \exp(f_{\theta}(s)))$ seems to work poorly here. o_O
Note: torch.distributions.Normal already has everything you will need to work with such policy after you modeled mean and covariance, i.e. sampling via reparametrization trick (see rsample method) and compute log probability (see log_prob method).
There is one more problem with the Gaussian distribution: we need to force our actions to stay within the $[-1, 1]$ bound. To achieve this, model an unbounded Gaussian $\mathcal{N}(\mu_{\theta}(s), \sigma_{\theta}(s)^2I)$, where $\mu$ can be arbitrary. Then every time you have a sample $u$ from this Gaussian policy, squash it using the $\operatorname{tanh}$ function to get a sample from $[-1, 1]$:
$$u \sim \mathcal{N}(\mu, \sigma^2I)$$
$$a = \operatorname{tanh}(u)$$
Important: after that you are required to use the change of variables formula every time you compute the likelihood (see Appendix C of the SAC paper for details):
$$\log p(a \mid \mu, \sigma) = \log p(u \mid \mu, \sigma) - \sum_{i = 1}^D \log (1 - \operatorname{tanh}^2(u_i)),$$
where $D$ is action_dim. In practice, add something like 1e-6 inside logarithm to protect from computational instabilities.
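A minimal sketch of such a squashed Gaussian head, combining the tanh-projected covariance (option 2 above) with the change of variables correction; sample_squashed, mean and f are hypothetical names, where mean and f stand for the outputs of the mean and covariance heads:
import torch
from torch.distributions import Normal
def sample_squashed(mean, f):
    log_std = -20 + 0.5 * (2 - (-20)) * (torch.tanh(f) + 1)    # project to [m, M] = [-20, 2]
    dist = Normal(mean, log_std.exp())
    u = dist.rsample()                                         # reparametrization trick
    a = torch.tanh(u)                                          # squash into [-1, 1]
    log_prob = dist.log_prob(u).sum(-1)                        # log N(u | mean, sigma)
    log_prob -= torch.log(1 - a.pow(2) + 1e-6).sum(-1)         # change of variables correction
    return a, log_prob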
End of explanation
class ReplayBuffer():
def __init__(self, size):
Create Replay buffer.
Parameters
----------
size: int
Max number of transitions to store in the buffer. When the buffer
overflows the old memories are dropped.
Note: for this assignment you can pick any data structure you want.
If you want to keep it simple, you can store a list of tuples of (s, a, r, s') in self._storage
However you may find out there are faster and/or more memory-efficient ways to do so.
self._storage = []
self._maxsize = size
# OPTIONAL: YOUR CODE
def __len__(self):
return len(self._storage)
def add(self, obs_t, action, reward, obs_tp1, done):
'''
Make sure _storage will not exceed _maxsize.
Make sure the FIFO rule is followed: the oldest examples have to be removed first
'''
data = (obs_t, action, reward, obs_tp1, done)
storage = self._storage
maxsize = self._maxsize
<YOUR CODE>
# add data to storage
def sample(self, batch_size):
Sample a batch of experiences.
Parameters
----------
batch_size: int
How many transitions to sample.
Returns
-------
obs_batch: np.array
batch of observations
act_batch: np.array
batch of actions executed given obs_batch
rew_batch: np.array
rewards received as results of executing act_batch
next_obs_batch: np.array
next set of observations seen after executing act_batch
done_mask: np.array
done_mask[i] = 1 if executing act_batch[i] resulted in
the end of an episode and 0 otherwise.
storage = self._storage
<YOUR CODE>
# randomly generate batch_size integers
# to be used as indexes of samples
<YOUR CODE>
# collect <s,a,r,s',done> for each index
return <YOUR CODE>
# <states>, <actions>, <rewards>, <next_states>, <is_done>
exp_replay = ReplayBuffer(10)
for _ in range(30):
exp_replay.add(env.reset(), env.action_space.sample(),
1.0, env.reset(), done=False)
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(
5)
assert len(exp_replay) == 10, "experience replay size should be 10 because that's what maximum capacity is"
def play_and_record(initial_state, agent, env, exp_replay, n_steps=1):
Play the game for exactly n steps, record every (s,a,r,s', done) to replay buffer.
Whenever game ends, add record with done=True and reset the game.
It is guaranteed that env has done=False when passed to this function.
:returns: return sum of rewards over time and the state in which the env stays
s = initial_state
sum_rewards = 0
# Play the game for n_steps as per instructions above
for t in range(n_steps):
# select action using policy with exploration
a = <YOUR CODE>
ns, r, done, _ = env.step(a)
exp_replay.add(s, a, r, ns, done)
s = env.reset() if done else ns
sum_rewards += r
return sum_rewards, s
#testing your code.
exp_replay = ReplayBuffer(2000)
actor = <YOUR ACTOR CLASS>(state_dim, action_dim).to(DEVICE)
state = env.reset()
play_and_record(state, actor, env, exp_replay, n_steps=1000)
# if you're using your own experience replay buffer, some of those tests may need correction.
# just make sure you know what your code does
assert len(exp_replay) == 1000, "play_and_record should have added exactly 1000 steps, "\
"but instead added %i" % len(exp_replay)
is_dones = list(zip(*exp_replay._storage))[-1]
assert 0 < np.mean(is_dones) < 0.1, "Please make sure you restart the game whenever it is 'done' and record the is_done correctly into the buffer."\
"Got %f is_done rate over %i steps. [If you think it's your tough luck, just re-run the test]" % (
np.mean(is_dones), len(exp_replay))
for _ in range(100):
obs_batch, act_batch, reward_batch, next_obs_batch, is_done_batch = exp_replay.sample(
10)
assert obs_batch.shape == next_obs_batch.shape == (10,) + (state_dim,)
assert act_batch.shape == (
10, action_dim), "actions batch should have shape (10, 8) but is instead %s" % str(act_batch.shape)
assert reward_batch.shape == (
10,), "rewards batch should have shape (10,) but is instead %s" % str(reward_batch.shape)
assert is_done_batch.shape == (
10,), "is_done batch should have shape (10,) but is instead %s" % str(is_done_batch.shape)
assert all(int(i) in (0, 1) for i in is_dones), "is_done should be strictly True or False"
print("Well done!")
Explanation: ReplayBuffer
The same as in DQN. You can copy code from your DQN assignment, just check that it works fine with continuous actions (it probably does).
Let's recall the interface:
* exp_replay.add(obs, act, rw, next_obs, done) - saves (s,a,r,s',done) tuple into the buffer
* exp_replay.sample(batch_size) - returns observations, actions, rewards, next_observations and is_done for batch_size random samples.
* len(exp_replay) - returns number of elements stored in replay buffer.
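For reference, one minimal way to satisfy this interface (a sketch built on a deque; the class name is illustrative, and your own DQN implementation is equally fine):
from collections import deque
import numpy as np
class MinimalReplayBuffer:
    def __init__(self, size):
        self._storage = deque(maxlen=size)        # FIFO eviction happens automatically
    def __len__(self):
        return len(self._storage)
    def add(self, obs_t, action, reward, obs_tp1, done):
        self._storage.append((obs_t, action, reward, obs_tp1, done))
    def sample(self, batch_size):
        idx = np.random.randint(0, len(self._storage), size=batch_size)
        obs, act, rew, next_obs, done = zip(*[self._storage[i] for i in idx])
        return np.array(obs), np.array(act), np.array(rew), np.array(next_obs), np.array(done)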
End of explanation
gamma=0.99 # discount factor
max_buffer_size = 10**5 # size of experience replay
start_timesteps = 5000 # size of experience replay when start training
timesteps_per_epoch=1 # steps in environment per step of network updates
batch_size=128 # batch size for all optimizations
max_grad_norm=10 # max grad norm for all optimizations
tau=0.005 # speed of updating target networks
policy_update_freq=<> # frequency of actor update; vanilla choice is 2 for TD3 or 1 for SAC
alpha=0.1 # temperature for SAC
# iterations passed
n_iterations = 0
Explanation: Initialization
Let's start initializing our algorithm. Here are our hyperparameters:
End of explanation
# experience replay
exp_replay = ReplayBuffer(max_buffer_size)
Explanation: Here is our experience replay:
End of explanation
# models to train
actor = <YOUR ACTOR CLASS>(state_dim, action_dim).to(DEVICE)
critic1 = Critic(state_dim, action_dim).to(DEVICE)
critic2 = Critic(state_dim, action_dim).to(DEVICE)
Explanation: Here are our models: two critics and one actor.
End of explanation
# target networks: slow-updated copies of actor and two critics
target_critic1 = Critic(state_dim, action_dim).to(DEVICE)
target_critic2 = Critic(state_dim, action_dim).to(DEVICE)
target_actor = TD3_Actor(state_dim, action_dim).to(DEVICE) # comment this line if you chose SAC
# initialize them as copies of original models
target_critic1.load_state_dict(critic1.state_dict())
target_critic2.load_state_dict(critic2.state_dict())
target_actor.load_state_dict(actor.state_dict()) # comment this line if you chose SAC
Explanation: To stabilize training, we will require target networks - slow-updating copies of our models. In TD3, both critics and the actor have their copies; in SAC it is assumed that only the critics require target copies while the actor is always used fresh.
End of explanation
def update_target_networks(model, target_model):
for param, target_param in zip(model.parameters(), target_model.parameters()):
target_param.data.copy_(tau * param.data + (1 - tau) * target_param.data)
Explanation: In continuous control, target networks are usually updated using exponential smoothing:
$$\theta^{-} \leftarrow \tau \theta + (1 - \tau) \theta^{-},$$
where $\theta^{-}$ are target network weights, $\theta$ - fresh parameters, $\tau$ - hyperparameter. This util function will do it:
End of explanation
# optimizers: for every model we have
opt_actor = torch.optim.Adam(actor.parameters(), lr=3e-4)
opt_critic1 = torch.optim.Adam(critic1.parameters(), lr=3e-4)
opt_critic2 = torch.optim.Adam(critic2.parameters(), lr=3e-4)
# just to avoid writing this code three times
def optimize(name, model, optimizer, loss):
'''
Makes one step of SGD optimization, clips norm with max_grad_norm and
logs everything into tensorboard
'''
loss = loss.mean()
optimizer.zero_grad()
loss.backward()
grad_norm = nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
optimizer.step()
# logging
env.writer.add_scalar(name, loss.item(), n_iterations)
env.writer.add_scalar(name + "_grad_norm", grad_norm.item(), n_iterations)
Explanation: Finally, we will have three optimization procedures to train our three models, so let's welcome our three Adams:
End of explanation
def compute_critic_target(rewards, next_states, is_done):
'''
Important: use target networks for this method! Do not use "fresh" models except fresh policy in SAC!
input:
rewards - PyTorch tensor, (batch_size)
next_states - PyTorch tensor, (batch_size x features)
is_done - PyTorch tensor, (batch_size)
output:
critic target - PyTorch tensor, (batch_size)
'''
with torch.no_grad():
critic_target = <YOUR CODE>
assert not critic_target.requires_grad, "target must not require grad."
assert len(critic_target.shape) == 1, "dangerous extra dimension in target?"
return critic_target
Explanation: Critic target computation
Finally, let's discuss our losses for critic and actor.
To train both critics we would like to minimize MSE using 1-step targets: for one sampled transition $(s, a, r, s')$ it should look something like this:
$$y(s, a) = r + \gamma V(s').$$
How do we evaluate next state and compute $V(s')$? Well, technically Monte-Carlo estimation looks simple:
$$V(s') \approx Q(s', a')$$
where (important!) $a'$ is a sample from our current policy $\pi(a' \mid s')$.
But our actor $\pi$ will actually be trained to search for actions $a'$ where our critic gives big estimates, and this straightforward approach leads to serious overestimation issues. We require some hacks. First, we will use target networks for $Q$ (and TD3 also uses a target network for $\pi$). Second, we will use two critics and take the minimum across their estimations:
$$V(s') = \min_{i = 1,2} Q^{-}_i(s', a'),$$
where $a'$ is sampled from the target policy $\pi^{-}(a' \mid s')$ in TD3 and from the fresh policy $\pi(a' \mid s')$ in SAC.
And last but not least:
in TD3, to compute $a'$ use the mode with clipped noise, which prevents our policy from exploiting narrow peaks in our critic approximation;
in SAC, add an (estimate of the) entropy bonus in the next state $s'$:
$$V(s') = \min_{i = 1,2} Q^{-}_i(s', a') - \alpha \log \pi (a' \mid s')$$
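Putting the TD3 flavour of this recipe into rough code (a sketch of the body of compute_critic_target, using the target networks and gamma defined above; note the extra (1 - is_done) factor that zeroes out $V(s')$ for terminal transitions):
with torch.no_grad():
    next_actions = target_actor.get_target_action(next_states)     # clipped-noise target policy
    q1 = target_critic1.get_qvalues(next_states, next_actions)
    q2 = target_critic2.get_qvalues(next_states, next_actions)
    v_next = torch.min(q1, q2)                                      # pessimistic twin estimate
    # for SAC: next_actions, log_prob = actor.apply(next_states); v_next = torch.min(q1, q2) - alpha * log_prob
    critic_target = rewards + gamma * (1 - is_done) * v_next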
End of explanation
def compute_actor_loss(states):
'''
Returns actor loss on batch of states
input:
states - PyTorch tensor, (batch_size x features)
output:
actor loss - PyTorch tensor, (batch_size)
'''
# make sure you have gradients w.r.t. actor parameters
actions = <YOUR CODE>
assert actions.requires_grad, "actions must be differentiable with respect to policy parameters"
# compute actor loss
actor_loss = <YOUR CODE>
return actor_loss
Explanation: To train the actor we simply want to maximize
$$\mathbb{E}_{a \sim \pi(a \mid s)} Q(s, a) \to \max_{\pi}$$
in TD3, because of the deterministic policy, the expectation reduces to:
$$Q(s, \pi(s)) \to \max_{\pi}$$
in SAC, use the reparametrization trick to compute gradients and also do not forget to add the entropy regularizer to motivate the policy to be as stochastic as possible:
$$\mathbb{E}_{a \sim \pi(a \mid s)} Q(s, a) - \alpha \log \pi(a \mid s) \to \max_{\pi}$$
Note: We will use (fresh) critic1 here as the Q-function to "exploit". You can also use both critics and again take the minimum across their estimations (this is done in the original implementation of SAC but not in TD3), but this seems not to be of high importance.
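Schematically, the body of compute_actor_loss for the TD3 choice boils down to something like this (a sketch; the SAC variant is outlined in the comments):
actions = actor.get_best_action(states)                    # differentiable deterministic actions
actor_loss = -critic1.get_qvalues(states, actions)         # maximize Q  <=>  minimize -Q
# SAC variant: actions, log_prob = actor.apply(states)
#              actor_loss = alpha * log_prob - critic1.get_qvalues(states, actions)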
End of explanation
seed = <YOUR FAVOURITE RANDOM SEED>
np.random.seed(seed)
env.unwrapped.seed(seed)
torch.manual_seed(seed);
from tqdm.notebook import trange
interaction_state = env.reset()
random_actor = RandomActor()
for n_iterations in trange(0, 1000000, timesteps_per_epoch):
# if experience replay is small yet, no training happens
# we also collect data using random policy to collect more diverse starting data
if len(exp_replay) < start_timesteps:
_, interaction_state = play_and_record(interaction_state, random_actor, env, exp_replay, timesteps_per_epoch)
continue
# perform a step in environment and store it in experience replay
_, interaction_state = play_and_record(interaction_state, actor, env, exp_replay, timesteps_per_epoch)
# sample a batch from experience replay
states, actions, rewards, next_states, is_done = exp_replay.sample(batch_size)
# move everything to PyTorch tensors
states = torch.tensor(states, device=DEVICE, dtype=torch.float)
actions = torch.tensor(actions, device=DEVICE, dtype=torch.float)
rewards = torch.tensor(rewards, device=DEVICE, dtype=torch.float)
next_states = torch.tensor(next_states, device=DEVICE, dtype=torch.float)
is_done = torch.tensor(
is_done.astype('float32'),
device=DEVICE,
dtype=torch.float
)
# losses
critic1_loss = <YOUR CODE>
optimize("critic1", critic1, opt_critic1, critic1_loss)
critic2_loss = <YOUR CODE>
optimize("critic2", critic2, opt_critic2, critic2_loss)
# actor update is less frequent in TD3
if n_iterations % policy_update_freq == 0:
actor_loss = <YOUR CODE>
optimize("actor", actor, opt_actor, actor_loss)
# update target networks
update_target_networks(critic1, target_critic1)
update_target_networks(critic2, target_critic2)
update_target_networks(actor, target_actor) # comment this line if you chose SAC
Explanation: Pipeline
Finally, we combine everything together and launch our algorithm. Your goal is to reach at least 1000 average reward during evaluation after training in this ant environment (since this is a new homework task, this threshold might be updated, so at the very least check whether your ant learned to walk in the rendered simulation).
Rewards should rise more or less steadily in this environment. There can be some drops due to instabilities of the algorithm, but the reward should eventually start rising after 100K-200K iterations. If no progress in reward is observed after these first 100K-200K iterations, there is a bug.
The gradient norm appears to be quite big for this task; it is OK if it reaches 100-200 (we handle it with clip_grad_norm). Consider everything exploded if it starts growing exponentially; then there is a bug.
End of explanation
def evaluate(env, actor, n_games=1, t_max=1000):
'''
Plays n_games and returns rewards and rendered games
'''
rewards = []
for _ in range(n_games):
s = env.reset()
R = 0
for _ in range(t_max):
# select action for final evaluation of your policy
action = <YOUR CODE>
assert (action.max() <= 1).all() and (action.min() >= -1).all()
s, r, done, _ = env.step(action)
R += r
if done:
break
rewards.append(R)
return np.array(rewards)
# evaluation will take some time!
sessions = evaluate(env, actor, n_games=20)
score = sessions.mean()
print(f"Your score: {score}")
assert score >= 1000, "Needs more training?"
print("Well done!")
env.close()
Explanation: Evaluation
End of explanation
env = gym.make("AntPyBulletEnv-v0")
# we want to look inside
env.render(mode="human")
# let's hope this will work
# don't forget to pray
env = gym.wrappers.Monitor(env, directory="videos", force=True)
# record sessions
# note that t_max is 300, so collected reward will be smaller than 1000
evaluate(env, actor, n_games=1, t_max=300)
env.close()
Explanation: Record
End of explanation |
13,999 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
Step1: In this chapter, we are going to take a look at how to perform statistical inference on graphs.
Statistics refresher
Before we can proceed with statistical inference on graphs,
we must first refresh ourselves with some ideas from the world of statistics.
Otherwise, the methods that we will end up using
may seem a tad weird, and hence difficult to follow along.
To review statistical ideas,
let's set up a few statements and explore what they mean.
We are concerned with models of randomness
As with all things statistics, we are concerned with models of randomness.
Here, probability distributions give us a way to think about random events
and how to assign credibility points to them.
In an abstract fashion...
The supremely abstract way of thinking about a probability distribution
is that it is the space of all possibilities of "stuff"
with different credibility points distributed amongst each possible "thing".
More concretely
Step2: You can verify that approximately 20% of the $\frac{30^2 - 30}{2} = 435$ possible edges are present.
Step3: We can also look at the degree distribution
Step4: Barabasi-Albert Graph
The data generating story of this graph generator is essentially that nodes that have lots of edges preferentially get new edges attached onto them.
This is what we call a "preferential attachment" process.
Step5: And the degree distribution
Step6: You can see that even though the numbers of edges in the two graphs are similar,
their degree distributions are wildly different.
Load Data
For this notebook, we are going to look at a protein-protein interaction network,
and test the hypothesis that this network was not generated by the data generating process
described by an Erdos-Renyi graph.
Let's load a protein-protein interaction network dataset.
This undirected network contains protein interactions in yeast.
Research showed that proteins with a high degree
were more important for the survival of the yeast than others.
A node represents a protein and an edge represents a metabolic interaction between two proteins.
The network contains loops.
Step7: As is always the case, let's make sure we know some basic stats of the graph.
Step8: Let's also examine the degree distribution of the graph.
Step9: Finally, we should visualize the graph to get a feel for it.
Step10: One thing we might infer from this visualization
is that the vast majority of nodes have a very small degree,
while a very small number of nodes have a high degree.
That would prompt us to think
Step11: Comparison with Erdos-Renyi graphs
Step14: Given the degree distribution only, which model do you think better describes the generation of a protein-protein interaction network?
Quantitative Model Comparison
Each time we plug in a value of $m$ for the Barabasi-Albert graph model, we are using one of many possible Barabasi-Albert graph models, each with a different $m$.
Similarly, each time we choose a different $p$ for the Erdos-Renyi model, we are using one of many possible Erdos-Renyi graph models, each with a different $p$.
To quantitatively compare degree distributions, we can use the Wasserstein distance between the data's degree distribution and a model's degree distribution.
Let's see how to implement this.
Step15: Notice that because the graphs are instantiated in a non-deterministic fashion, re-running the cell above will give you different values for each new graph generated.
Let's now plot the wasserstein distance to our graph data for the two particular Erdos-Renyi and Barabasi-Albert graph models shown above. | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo(id="P-0CJpO3spg", width="100%")
Explanation: Introduction
End of explanation
import networkx as nx
G_er = nx.erdos_renyi_graph(n=30, p=0.2)
nx.draw(G_er)
Explanation: In this chapter, we are going to take a look at how to perform statistical inference on graphs.
Statistics refresher
Before we can proceed with statistical inference on graphs,
we must first refresh ourselves with some ideas from the world of statistics.
Otherwise, the methods that we will end up using
may seem a tad weird, and hence difficult to follow along.
To review statistical ideas,
let's set up a few statements and explore what they mean.
We are concerned with models of randomness
As with all things statistics, we are concerned with models of randomness.
Here, probability distributions give us a way to think about random events
and how to assign credibility points to them.
In an abstract fashion...
The supremely abstract way of thinking about a probability distribution
is that it is the space of all possibilities of "stuff"
with different credibility points distributed amongst each possible "thing".
More concretely: the coin flip
A more concrete example is to consider the coin flip.
Here, the space of all possibilities of "stuff" is the set of "heads" and "tails".
If we have a fair coin, then we have 0.5 credibility points distributed
to each of "heads" and "tails".
Another example: dice rolls
Another concrete example is to consider the six-sided dice.
Here, the space of all possibilities of "stuff" is the set of numbers in the range $[1, 6]$.
If we have a fair dice, then we have 1/6 credibility points assigned
to each of the numbers.
(Unfair dice will have an unequal distribution of credibility points across each face.)
A graph-based example: social networks
If we receive an undirected social network graph with 5 nodes and 6 edges,
we have to keep in mind that this graph with 6 edges
was merely one of ${10 \choose 6}$ ways to construct 5-node, 6-edge graphs.
(10 comes up because there are $\binom{5}{2} = 10$ possible edges in a 5-node simple undirected graph.)
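A quick way to check those counts in Python:
from math import comb
comb(5, 2), comb(10, 6)   # (10, 210): 10 possible edges, 210 possible 6-edge graphs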
Hypothesis Testing
A commonplace task in statistical inference
is calculating the probability of observing a value or something more extreme
under an assumed "null" model of reality.
This is what we commonly call "hypothesis testing",
and where the oft-misunderstood term "p-value" shows up.
Hypothesis testing in coin flips, by simulation
As an example, hypothesis testing in coin flips follows this logic:
I observe that 8 out of 10 coin tosses give me heads, giving me a probability of heads $p=0.8$ (a summary statistic).
Under a "null distribution" of a fair coin, I simulate the distribution of probability of heads (the summary statistic) that I would get from 10 coin tosses.
Finally, I use that distribution to calculate the probability of observing $p=0.8$ or more extreme.
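As a tiny illustration of that logic in code (the numbers here are purely illustrative):
import numpy as np
rng = np.random.default_rng(42)
observed = 0.8                                              # 8 heads out of 10 tosses
null_stats = rng.binomial(n=10, p=0.5, size=10_000) / 10    # summary statistic under a fair coin
p_value = np.mean(null_stats >= observed)                   # roughly 0.05: chance of 0.8 or more extreme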
Hypothesis testing in graphs
The same protocol applies when we perform hypothesis testing on graphs.
Firstly, we calculate a summary statistic that describes our graph.
Secondly, we propose a null graph model, and calculate our summary statistic under simulated versions of that null graph model.
Thirdly, we look at the probability of observing the summary statistic value that we calculated in step 1 or more extreme, under the assumed graph null model distribution.
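Sketched in code for some graph G, using the number of triangles as a (hypothetical) summary statistic and an Erdos-Renyi graph as the null model, the three steps might look like this:
import numpy as np
def num_triangles(g):
    return sum(nx.triangles(g).values()) // 3
observed = num_triangles(G)                                              # step 1: summary statistic
n, p = len(G), nx.density(G)
null = [num_triangles(nx.erdos_renyi_graph(n, p)) for _ in range(100)]   # step 2: simulate the null model
p_value = np.mean(np.array(null) >= observed)                            # step 3: observed or more extreme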
Stochastic graph creation models
Since we are going to be dealing with models of randomness in graphs,
let's take a look at some examples.
Erdos-Renyi (a.k.a. "binomial") graph
One easy one to study is the Erdos-Renyi graph, also known as the "binomial" graph.
The data generation story here is that we instantiate an undirected graph with $n$ nodes,
giving $\frac{n^2 - n}{2}$ possible edges.
Each edge has a probability $p$ of being created.
End of explanation
len(G_er.edges())
len(G_er.edges()) / 435
Explanation: You can verify that approximately 20% of the $\frac{30^2 - 30}{2} = 435$ possible edges are present.
End of explanation
import pandas as pd
from nams.functions import ecdf
import matplotlib.pyplot as plt
x, y = ecdf(pd.Series(dict(nx.degree(G_er))))
plt.scatter(x, y)
Explanation: We can also look at the degree distribution:
End of explanation
G_ba = nx.barabasi_albert_graph(n=30, m=3)
nx.draw(G_ba)
len(G_ba.edges())
Explanation: Barabasi-Albert Graph
The data generating story of this graph generator is essentially that nodes that have lots of edges preferentially get new edges attached onto them.
This is what we call a "preferential attachment" process.
End of explanation
x, y = ecdf(pd.Series(dict(nx.degree(G_ba))))
plt.scatter(x, y)
Explanation: And the degree distribution:
End of explanation
from nams import load_data as cf
G = cf.load_propro_network()
for n, d in G.nodes(data=True):
G.nodes[n]["degree"] = G.degree(n)
Explanation: You can see that even though the numbers of edges in the two graphs are similar,
their degree distributions are wildly different.
Load Data
For this notebook, we are going to look at a protein-protein interaction network,
and test the hypothesis that this network was not generated by the data generating process
described by an Erdos-Renyi graph.
Let's load a protein-protein interaction network dataset.
This undirected network contains protein interactions in yeast.
Research showed that proteins with a high degree
were more important for the survival of the yeast than others.
A node represents a protein and an edge represents a metabolic interaction between two proteins.
The network contains loops.
End of explanation
len(G.nodes())
len(G.edges())
Explanation: As is always the case, let's make sure we know some basic stats of the graph.
End of explanation
x, y = ecdf(pd.Series(dict(nx.degree(G))))
plt.scatter(x, y)
Explanation: Let's also examine the degree distribution of the graph.
End of explanation
import nxviz as nv
from nxviz import annotate
nv.circos(G, sort_by="degree", node_color_by="degree", node_enc_kwargs={"size_scale": 10})
annotate.node_colormapping(G, color_by="degree")
Explanation: Finally, we should visualize the graph to get a feel for it.
End of explanation
from ipywidgets import interact, IntSlider
m = IntSlider(value=2, min=1, max=10)
@interact(m=m)
def compare_barabasi_albert_graph(m):
fig, ax = plt.subplots()
G_ba = nx.barabasi_albert_graph(n=len(G.nodes()), m=m)
x, y = ecdf(pd.Series(dict(nx.degree(G_ba))))
ax.scatter(x, y, label="Barabasi-Albert Graph")
x, y = ecdf(pd.Series(dict(nx.degree(G))))
ax.scatter(x, y, label="Protein Interaction Network")
ax.legend()
Explanation: One thing we might infer from this visualization
is that the vast majority of nodes have a very small degree,
while a very small number of nodes have a high degree.
That would prompt us to think:
what process could be responsible for generating this graph?
Inferring Graph Generating Model
Given a graph dataset, how do we identify which data generating model provides the best fit?
One way to do this is to compare characteristics of a graph generating model against the characteristics of the graph.
The logic here is that if we have a good graph generating model for the data,
we should, in theory, observe the observed graph's characteristics
in the graphs generated by the graph generating model.
Comparison of degree distribution
Let's compare the degree distribution between the data, a few Erdos-Renyi graphs, and a few Barabasi-Albert graphs.
Comparison with Barabasi-Albert graphs
End of explanation
from ipywidgets import FloatSlider
p = FloatSlider(value=0.001, min=0, max=0.1, step=0.001)  # keep the initial value inside [min, max]
@interact(p=p)
def compare_erdos_renyi_graph(p):
fig, ax = plt.subplots()
G_er = nx.erdos_renyi_graph(n=len(G.nodes()), p=p)
x, y = ecdf(pd.Series(dict(nx.degree(G_er))))
ax.scatter(x, y, label="Erdos-Renyi Graph")
x, y = ecdf(pd.Series(dict(nx.degree(G))))
ax.scatter(x, y, label="Protein Interaction Network")
ax.legend()
ax.set_title(f"p={p}")
Explanation: Comparison with Erdos-Renyi graphs
End of explanation
from scipy.stats import wasserstein_distance
def erdos_renyi_degdist(n, p):
Return a Pandas series of degree distribution of an Erdos-Renyi graph.
G = nx.erdos_renyi_graph(n=n, p=p)
return pd.Series(dict(nx.degree(G)))
def barabasi_albert_degdist(n, m):
Return a Pandas series of degree distribution of an Barabasi-Albert graph.
G = nx.barabasi_albert_graph(n=n, m=m)
return pd.Series(dict(nx.degree(G)))
deg = pd.Series(dict(nx.degree(G)))
er_deg = erdos_renyi_degdist(n=len(G.nodes()), p=0.001)
ba_deg = barabasi_albert_degdist(n=len(G.nodes()), m=1)
wasserstein_distance(deg, er_deg), wasserstein_distance(deg, ba_deg)
Explanation: Given the degree distribution only, which model do you think better describes the generation of a protein-protein interaction network?
Quantitative Model Comparison
Each time we plug in a value of $m$ for the Barabasi-Albert graph model, we are using one of many possible Barabasi-Albert graph models, each with a different $m$.
Similarly, each time we choose a different $p$ for the Erdos-Renyi model, we are using one of many possible Erdos-Renyi graph models, each with a different $p$.
To quantitatively compare degree distributions, we can use the Wasserstein distance between the data's degree distribution and a model's degree distribution.
Let's see how to implement this.
End of explanation
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
er_dist = []
ba_dist = []
for _ in tqdm(range(100)):
er_deg = erdos_renyi_degdist(n=len(G.nodes()), p=0.001)
er_dist.append(wasserstein_distance(deg, er_deg))
ba_deg = barabasi_albert_degdist(n=len(G.nodes()), m=1)
ba_dist.append(wasserstein_distance(deg, ba_deg))
# er_degs = [erdos_renyi_degdist(n=len(G.nodes()), p=0.001) for _ in range(100)]
import seaborn as sns
import janitor
data = (
pd.DataFrame(
{
"Erdos-Renyi": er_dist,
"Barabasi-Albert": ba_dist,
}
)
.melt(value_vars=["Erdos-Renyi", "Barabasi-Albert"])
.rename_columns({"variable": "Graph Model", "value": "Wasserstein Distance"})
)
sns.swarmplot(data=data, x="Graph Model", y="Wasserstein Distance")
Explanation: Notice that because the graphs are instantiated in a non-deterministic fashion, re-running the cell above will give you different values for each new graph generated.
Let's now plot the wasserstein distance to our graph data for the two particular Erdos-Renyi and Barabasi-Albert graph models shown above.
End of explanation |