Unnamed: 0 | text_prompt | code_prompt
---|---|---|
11,800 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
================================================================
Compute sparse inverse solution with mixed norm
Step1: Run solver
Step2: Plot dipole activations
Step3: Plot residual
Step4: Generate stc from dipoles
Step5: View in 2D and 3D ("glass" brain like 3D plot)
Step6: Morph onto fsaverage brain and view | Python Code:
# Author: Alexandre Gramfort <[email protected]>
# Daniel Strohmeier <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.inverse_sparse import mixed_norm, make_stc_from_dipoles
from mne.minimum_norm import make_inverse_operator, apply_inverse
from mne.viz import (plot_sparse_source_estimates,
plot_dipole_locations, plot_dipole_amplitudes)
print(__doc__)
data_path = sample.data_path()
fwd_fname = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
ave_fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
cov_fname = data_path + '/MEG/sample/sample_audvis-shrunk-cov.fif'
subjects_dir = data_path + '/subjects'
# Read noise covariance matrix
cov = mne.read_cov(cov_fname)
# Handling average file
condition = 'Left Auditory'
evoked = mne.read_evokeds(ave_fname, condition=condition, baseline=(None, 0))
evoked.crop(tmin=0, tmax=0.3)
# Handling forward solution
forward = mne.read_forward_solution(fwd_fname)
Explanation: ================================================================
Compute sparse inverse solution with mixed norm: MxNE and irMxNE
================================================================
Runs an (ir)MxNE (L1/L2 [1] or L0.5/L2 [2] mixed norm) inverse solver.
L0.5/L2 is done with irMxNE which allows for sparser
source estimates with less amplitude bias due to the non-convexity
of the L0.5/L2 mixed norm penalty.
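Schematically (an added note following the notation of reference [1]; the exact scaling used internally may differ), the solver computes
$$\hat{X} = \arg\min_{X} \frac{1}{2}\|M - G X\|_F^2 + \alpha \sum_{s} \|X_{s}\|_2^{p},$$
with $p = 1$ for MxNE (the convex L1/L2 penalty) and $p = 0.5$ for irMxNE, where $M$ are the measurements, $G$ is the forward (gain) matrix and $X_s$ is the time course of source $s$.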
References
.. [1] Gramfort A., Kowalski M. and Hamalainen, M.
"Mixed-norm estimates for the M/EEG inverse problem using accelerated
gradient methods", Physics in Medicine and Biology, 2012.
https://doi.org/10.1088/0031-9155/57/7/1937.
.. [2] Strohmeier D., Haueisen J., and Gramfort A.
"Improved MEG/EEG source localization with reweighted mixed-norms",
4th International Workshop on Pattern Recognition in Neuroimaging,
Tuebingen, 2014. https://doi.org/10.1109/PRNI.2014.6858545
End of explanation
alpha = 55 # regularization parameter between 0 and 100 (100 is high)
loose, depth = 0.2, 0.9 # loose orientation & depth weighting
n_mxne_iter = 10 # if > 1 use L0.5/L2 reweighted mixed norm solver
# if n_mxne_iter > 1 dSPM weighting can be avoided.
# Compute dSPM solution to be used as weights in MxNE
inverse_operator = make_inverse_operator(evoked.info, forward, cov,
depth=depth, fixed=True,
use_cps=True)
stc_dspm = apply_inverse(evoked, inverse_operator, lambda2=1. / 9.,
method='dSPM')
# Compute (ir)MxNE inverse solution with dipole output
dipoles, residual = mixed_norm(
evoked, forward, cov, alpha, loose=loose, depth=depth, maxit=3000,
tol=1e-4, active_set_size=10, debias=True, weights=stc_dspm,
weights_min=8., n_mxne_iter=n_mxne_iter, return_residual=True,
return_as_dipoles=True)
Explanation: Run solver
End of explanation
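Before plotting, it can help to peek at what the solver returned (an added, illustrative check, not part of the original example): with return_as_dipoles=True, mixed_norm gives back a list of mne.Dipole objects together with the residual evoked data.
# Added illustration: inspect the solver output before plotting.
print('Number of active dipoles found: %d' % len(dipoles))
print(residual)  # the part of the evoked response not explained by the dipoles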
plot_dipole_amplitudes(dipoles)
# Plot dipole location of the strongest dipole with MRI slices
idx = np.argmax([np.max(np.abs(dip.amplitude)) for dip in dipoles])
plot_dipole_locations(dipoles[idx], forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
# Plot dipole locations of all dipoles with MRI slices
for dip in dipoles:
plot_dipole_locations(dip, forward['mri_head_t'], 'sample',
subjects_dir=subjects_dir, mode='orthoview',
idx='amplitude')
Explanation: Plot dipole activations
End of explanation
ylim = dict(eeg=[-10, 10], grad=[-400, 400], mag=[-600, 600])
evoked.pick_types(meg=True, eeg=True, exclude='bads')
evoked.plot(ylim=ylim, proj=True, time_unit='s')
residual.pick_types(meg=True, eeg=True, exclude='bads')
residual.plot(ylim=ylim, proj=True, time_unit='s')
Explanation: Plot residual
End of explanation
stc = make_stc_from_dipoles(dipoles, forward['src'])
Explanation: Generate stc from dipoles
End of explanation
solver = "MxNE" if n_mxne_iter == 1 else "irMxNE"
plot_sparse_source_estimates(forward['src'], stc, bgcolor=(1, 1, 1),
fig_name="%s (cond %s)" % (solver, condition),
opacity=0.1)
Explanation: View in 2D and 3D ("glass" brain like 3D plot)
End of explanation
morph = mne.compute_source_morph(stc, subject_from='sample',
subject_to='fsaverage', spacing=None,
sparse=True, subjects_dir=subjects_dir)
stc_fsaverage = morph.apply(stc)
src_fsaverage_fname = subjects_dir + '/fsaverage/bem/fsaverage-ico-5-src.fif'
src_fsaverage = mne.read_source_spaces(src_fsaverage_fname)
plot_sparse_source_estimates(src_fsaverage, stc_fsaverage, bgcolor=(1, 1, 1),
fig_name="Morphed %s (cond %s)" % (solver,
condition), opacity=0.1)
Explanation: Morph onto fsaverage brain and view
End of explanation |
11,801 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Graphs and visualization
A lot of the joy of digital humanities comes in handling our material in new ways, so that we see things we wouldn't have seen before. Quite literally.
Some of the most useful tools for DH work are graphing tools! Today we will look at the basics of what a graph is and how you might build one, both manually and programmatically, and then name the tools to look into if you want to know more.
Tools you'll need
Graphviz
Step1: Then you do this every time you want to make a graph. The -f svg says that it should make an SVG image, which is what I recommend.
Step2: Now maybe Tom has a friend too
Step3: ...And so on. But what if we do want a nice symmetrical undirected graph? That is even simpler. Instead of digraph we say graph, and instead of describing the connections with -> we use -- instead. If we have a model like Facebook where friendship is always two-way, we can do this
Step4: Of course, this would hardly be fun if we couldn't do it programmatically!
Building graphs with Graphviz + Python
Now we are going to make a few graphs, not by writing out dot, but by making a graph object that holds our nodes and edges. We do this with the graphviz module.
Step5: We make a new directed graph with graphviz.Digraph(), and a new undirected graph with graphviz.Graph().
Step6: Let's make a social network graph of five friends, all of whom like each other. But instead of typing out all those
Anna -> Ben
sorts of lines, we will let the program do that for us.
Step7: And here is a little iPython magic function so that we can actually make the graph display right here in the notebook. This means that, instead of copy-pasting what you see above into a new cell, you can just ask IPython to do the copy-pasting for you!
Don't worry too much about understanding this (unless you want to!) but we will use it a little farther down. You can ignore the lines about "Couldn't evaluate or find in history" - that seems to be a little IPython bug!
Step8: Basic usage for the Graphviz python library
So here is a short summary of what we did above that you will want to remember
Step9: Labels and IDs
When you are making a graph, it is important that every node be unique - if you have two people named Tom, then the graph program will have no idea which Tom is friends with Anna. So how do you handle having two people named Tom, without resorting to last names or AHV numbers or something like that?
You use attributes in the graph, and specifically the label attribute. It looks something like this
Step10: Notice, in this, that Anna still popped into existence when we referred to her in a relationship. But in the real world, we will probably want to declare our nodes with (for example) student numbers as the unique identifier, and names for display in the graph.
Styling the graph
We can also define how the graph looks, or how nodes look, or how edges look, through attributes!
digraph G {
node [ shape="plaintext" fontcolor="red" ] # Define the look of all nodes.
mother [ label="Tara" ]
father [ label="Mike" ]
child [ label="Sophie" ]
work [ shape="house" color="black" fillcolor="yellow" ]
school [ shape="house" color="black" fillcolor="green" ]
mother -> child [ label="is mother" ]
father -> child [ label="is father" ]
mother -> work [ label="goes to" ]
father -> work [ label="goes to" ]
child -> school [ label="goes to" ]
}
And here's how we do that in python, and what we get... | Python Code:
# This is how you get the %%dot command that we use below.
%load_ext hierarchymagic
Explanation: Graphs and visualization
A lot of the joy of digital humanities comes in handling our material in new ways, so that we see things we wouldn't have seen before. Quite literally.
Some of the most useful tools for DH work are graphing tools! Today we will look at the basics of what a graph is and how you might build one, both manually and programmatically, and then name the tools to look into if you want to know more.
Tools you'll need
Graphviz: http://www.graphviz.org
The graphviz and sphinx modules for Python: pip install graphviz sphinx
So what can you do with graphs?
<img src="http://www.scottbot.net/HIAL/wp-content/uploads/2013/11/DH2014Keywords.png" width="90%">
You can visualize relationships, networks, you name it.
http://ckcc.huygens.knaw.nl/epistolarium/#
The DOT graph language
It's pretty easy to start building a graph, if you have the tools and a plain text editor. First you have to decide whether you want a directed or an undirected graph. If all the relationships you want to chart are symmetric and two-way (e.g. "these words appear together" or "these people corresponded"), then it can be undirected. But if there is any asymmetry (e.g. in social networks - just because Tom is friends with Jane doesn't mean that Jane is friends with Tom!) then you want a directed graph.
If you want to make a directed graph, it looks like this:
digraph "My graph" {
[... graph data goes here ...]
}
and if you want to make an undirected graph, it looks like this.
graph "My graph" {
[... graph data goes here ...]
}
Let's say we want to make that little two-person social network. In graph terms, you have nodes and edges. The edges are the relationships, and the nodes are the things (people, places, dogs, cats, whatever) that are related. The easiest way to express that is like this:
digraph "My graph" {
Tom -> Jane
}
which says "The node Tom is connected to the node Jane, in that direction." We plug that into Graphviz, and what do we get? Let's use a little iPython magic to find out.
We are going to use an extension called 'hierarchymagic', which gives us the special %%dot command. You can get the extension by running this command in a terminal window:
ipython -c '%install_ext http://students.digihum.ch/hierarchymagic.py'
Once that is done, here is how it works.
End of explanation
%%dot -f svg
digraph "My graph" {
Tom -> Jane
}
Explanation: Then you do this every time you want to make a graph. The -f svg says that it should make an SVG image, which is what I recommend.
End of explanation
%%dot -f svg
digraph "My graph" {
Tom -> Jane
Ben -> Tom
Tom -> Ben
}
Explanation: Now maybe Tom has a friend too:
digraph "My graph" {
Tom -> Jane
Ben -> Tom
Tom -> Ben
}
End of explanation
%%dot -f svg
graph "My graph" {
layout=twopi
Tom -- Jane
Ben -- Tom
}
%%dot -f svg
graph "My graph" {
layout=fdp
Tom -- Jane
Ben -- Tom
}
Explanation: ...And so on. But what if we do want a nice symmetrical undirected graph? That is even simpler. Instead of digraph we say graph, and instead of describing the connections with -> we use -- instead. If we have a model like Facebook where friendship is always two-way, we can do this:
graph "My graph" {
Tom -- Jane
Ben -- Tom
}
Note that we don't need the third line (Tom -- Ben) because it is now the same as saying Ben -- Tom.
Since this is an undirected graph, we want it to be laid out a little differently (not just straight up-and-down). For this we can specify a different layout program, either with the -K flag on the command line or, as in the cells above, by setting the layout attribute inside the graph. The options are dot (the default), neato, twopi, circo, fdp, and sfdp; they all take different approaches and you are welcome to play around with each one.
End of explanation
import graphviz # Use the Python graphviz library
Explanation: Of course, this would hardly be fun if we couldn't do it programmatically!
Building graphs with Graphviz + Python
Now we are going to make a few graphs, not by writing out dot, but by making a graph object that holds our nodes and edges. We do this with the graphviz module.
End of explanation
my_graph = graphviz.Digraph()
Explanation: We make a new directed graph with graphviz.Digraph(), and a new undirected graph with graphviz.Graph().
End of explanation
# Our list of friends
all_friends = [ 'Jane', 'Ben', 'Tom', 'Anna', 'Charlotte' ]
# Make them all friends with each other.
# As long as there are at least two people left in the list of friends...
while len( all_friends ) > 1:
this_friend = all_friends.pop() # Remove the last name from the list
for friend in all_friends: # Cycle through whoever is left and make them friends with each other
my_graph.edge( this_friend, friend ) # I like you
my_graph.edge( friend, this_friend ) # You like me
# Spit out the graph in its DOT format
print(my_graph.source)
Explanation: Let's make a social network graph of five friends, all of whom like each other. But instead of typing out all those
Anna -> Ben
sorts of lines, we will let the program do that for us.
End of explanation
## Here is the function we need
def make_dotcell( thegraph, format="svg" ):
cell_content = "%%dot " + "-f %s\n%s" % (format, thegraph.source)
return cell_content
## ...and here is how to use it. This will make a new cell that you can then 'play' to get the graph.
%recall make_dotcell( my_graph )
%%dot -f svg
digraph {
Charlotte -> Jane
Jane -> Charlotte
Charlotte -> Ben
Ben -> Charlotte
Charlotte -> Tom
Tom -> Charlotte
Charlotte -> Anna
Anna -> Charlotte
Anna -> Jane
Jane -> Anna
Anna -> Ben
Ben -> Anna
Anna -> Tom
Tom -> Anna
Tom -> Jane
Jane -> Tom
Tom -> Ben
Ben -> Tom
Ben -> Jane
Jane -> Ben
}
Explanation: And here is a little iPython magic function so that we can actually make the graph display right here in the notebook. This means that, instead of copy-pasting what you see above into a new cell, you can just ask IPython to do the copy-pasting for you!
Don't worry too much about understanding this (unless you want to!) but we will use it a little farther down. You can ignore the lines about "Couldn't evaluate or find in history" - that seems to be a little IPython bug!
End of explanation
import graphviz
this_graph = graphviz.Digraph() # start your directed graph
this_undirected = graphviz.Graph() # ...or your undirected graph
this_graph.edge( "me", "you" ) # Add a relationship between me and you
this_undirected.edge( "me", "you" )
print(this_graph.source) # Print out the dot.
print(this_undirected.source)
Explanation: Basic usage for the Graphviz python library
So here is a short summary of what we did above that you will want to remember:
End of explanation
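If you would rather have the image as a file on disk than as an inline cell, the same graph objects can also be rendered directly. A small sketch (it assumes the Graphviz binaries are installed and on your PATH, and the output filename is just an example):
this_graph.format = 'svg'
this_graph.render('my_graph')  # writes 'my_graph' (the DOT source) and 'my_graph.svg'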
lg = graphviz.Graph() # Make this one undirected
lg.graph_attr['layout'] = 'neato'
lg.node( "Tom1", label="Tom" )
lg.node( "Tom2", label="Tom" )
lg.edge( "Tom1", "Anna", label="siblings" )
lg.edge( "Tom1", "Tom2", label="friends" )
%recall make_dotcell(lg)
%%dot -f svg
graph {
graph [layout=neato]
Tom1 [label=Tom]
Tom2 [label=Tom]
Tom1 -- Anna [label=siblings]
Tom1 -- Tom2 [label=friends]
}
Explanation: Labels and IDs
When you are making a graph, it is important that every node be unique - if you have two people named Tom, then the graph program will have no idea which Tom is friends with Anna. So how do you handle having two people named Tom, without resorting to last names or AHV numbers or something like that?
You use attributes in the graph, and specifically the label attribute. It looks something like this:
graph G {
Tom1 [ label="Tom" ]
Tom2 [ label="Tom" ]
Tom1 -- Anna
Tom1 -- Tom2
}
Before this, we only named our nodes when we needed them to define a relationship (an edge). But if we need to give any extra information about a node, such as a label, then we have to list it first, on its own line, with the extra information between the square brackets.
There are a whole lot of options for things you might want to define! Most of them have to do with how the graph should look, and we will look at them in a minute. For now, this is what we get for this graph:
End of explanation
family = graphviz.Digraph()
family.node_attr = {'shape': "plaintext", 'fontcolor': "red" }
family.node( "mother", label="Tara" )
family.node( "father", label="Mike" )
family.node( "child", label="Sophie" )
family.node( "work", shape="house", fontcolor="black", color="blue" )
family.node( "school", shape="house", fontcolor="black", color="green" )
family.edge( "mother", "child", label="is mother" )
family.edge( "father", "child", label="is father" )
family.edge( "mother", "work", label="goes to" )
family.edge( "father", "work", label="goes to" )
family.edge( "child", "school", label="goes to" )
%recall make_dotcell( family )
%%dot -f svg
digraph {
node [fontcolor=red shape=plaintext]
mother [label=Tara]
father [label=Mike]
child [label=Sophie]
work [color=blue fontcolor=black shape=house]
school [color=green fontcolor=black shape=house]
mother -> child [label="is mother"]
father -> child [label="is father"]
mother -> work [label="goes to"]
father -> work [label="goes to"]
child -> school [label="goes to"]
}
Explanation: Notice, in this, that Anna still popped into existence when we referred to her in a relationship. But in the real world, we will probably want to declare our nodes with (for example) student numbers as the unique identifier, and names for display in the graph.
Styling the graph
We can also define how the graph looks, or how nodes look, or how edges look, through attributes!
digraph G {
node [ shape="plaintext" fontcolor="red" ] # Define the look of all nodes.
mother [ label="Tara" ]
father [ label="Mike" ]
child [ label="Sophie" ]
work [ shape="house" color="black" fillcolor="yellow" ]
school [ shape="house" color="black" fillcolor="green" ]
mother -> child [ label="is mother" ]
father -> child [ label="is father" ]
mother -> work [ label="goes to" ]
father -> work [ label="goes to" ]
child -> school [ label="goes to" ]
}
And here's how we do that in python, and what we get...
End of explanation |
11,802 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
O$_2$scl table example for O$_2$sclpy
See the O$_2$sclpy documentation at
https://neutronstars.utk.edu/code/o2sclpy for more information.
Step1: Link the o2scl library
Step2: Create an HDF5 file object and open the table in O$_2$scl's data file for the Akmal, Pandharipande, and Ravenhall equation of state. The open() function for the hdf_file class is documented here.
Step3: We create a table object and specify a blank name to indicate
that we just want to read the first table in the file.
Step4: Read the table
Step5: Close the HDF5 file.
Step6: We use the cap_cout class to capture std::cout
Step7: Finally, we use matplotlib to plot the data stored in the table | Python Code:
import o2sclpy
import matplotlib.pyplot as plot
import sys
plots=True
if 'pytest' in sys.modules:
plots=False
Explanation: O$_2$scl table example for O$_2$sclpy
See the O$_2$sclpy documentation at
https://neutronstars.utk.edu/code/o2sclpy for more information.
End of explanation
link=o2sclpy.linker()
link.link_o2scl()
Explanation: Link the o2scl library:
End of explanation
hf=o2sclpy.hdf_file(link)
hf.open(link.o2scl_settings.get_data_dir()+b'apr98.o2')
Explanation: Create an HDF5 file object and open the table in O$_2$scl's data file for the Akmal, Pandharipande, and Ravenhall equation of state. The open() function for the hdf_file class is documented here.
End of explanation
tab=o2sclpy.table(link)
name=b''
Explanation: We create a table object and specify a blank name to indicate
that we just want to read the first table in the file.
End of explanation
o2sclpy.hdf_input_table(link,hf,tab,name)
Explanation: Read the table:
End of explanation
hf.close()
Explanation: Close the HDF5 file.
End of explanation
cc=o2sclpy.cap_cout()
cc.open()
tab.summary()
cc.close()
Explanation: We use the cap_cout class to capture std::cout to the Jupyter notebook. The summary() function lists the columns in the table.
End of explanation
if plots:
plot.plot(tab['rho'],tab['nuc'])
plot.plot(tab['rho'],tab['neut'])
plot.show()
Explanation: Finally, we use matplotlib to plot the data stored in the table:
End of explanation |
11,803 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EXP 3-NYCtaxi
In this experiment we use the NYC taxi dataset. We pass it through a scalar encoder, spatial pooler and then to the TM. We do a single pass to the TM and keep track of the spike trains of each cell. We use this data to estimate pairwise correlations among cells.
Step1: Part I. Encoder
Step2: Part II. Spatial Pooler
Step3: Part III. Temporal Memory
Step4: Part IV. Analysis of Spike Trains
Step5: Raster plots
Step6: Part V. Save TM
Step7: Part VI. Analysis of Input | Python Code:
import numpy as np
import random
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import pandas as pd
from nupic.encoders import ScalarEncoder
from nupic.bindings.algorithms import TemporalMemory as TM
from nupic.bindings.algorithms import SpatialPooler as SP
from htmresearch.support.neural_correlations_utils import *
random.seed(1)
inputSize = 109
maxItems = 17520
tmEpochs = 1
totalTS = maxItems * tmEpochs
# read csv file
df = pd.read_csv('nyc_taxi.csv', skiprows=[1, 2])
tm = TM(columnDimensions = (2048,),
cellsPerColumn=8, # We changed here the number of cells per col, initially they were 32
initialPermanence=0.21,
connectedPermanence=0.3,
minThreshold=15,
maxNewSynapseCount=40,
permanenceIncrement=0.1,
permanenceDecrement=0.1,
activationThreshold=15,
predictedSegmentDecrement=0.01
)
sparsity = 0.02
sparseCols = int(tm.numberOfColumns() * sparsity)
sp = SP(inputDimensions=(inputSize,),
columnDimensions=(2048,),
potentialRadius = int(0.5*inputSize),
numActiveColumnsPerInhArea = sparseCols,
potentialPct = 0.9,
globalInhibition = True,
synPermActiveInc = 0.0001,
synPermInactiveDec = 0.0005,
synPermConnected = 0.5,
boostStrength = 0.0,
spVerbosity = 1
)
Explanation: EXP 3-NYCtaxi
In this experiment we use the NYC taxi dataset. We pass it through a scalar encoder, spatial pooler and then to the TM. We do a single pass to the TM and keep track of the spike trains of each cell. We use this data to estimate pairwise correlations among cells.
End of explanation
rawValues = []
remainingRows = maxItems
numTrainingItems = 15000
trainSet = []
nonTrainSet = []
se = ScalarEncoder(n=109, w=29, minval=0, maxval=40000, clipInput=True)
s = 0
for index, row in df.iterrows():
if s > 0 and s % 500 == 0:
print str(s) + " items processed"
rawValues.append(row['passenger_count'])
if s < numTrainingItems:
trainSet.append(se.encode(row['passenger_count']))
else:
nonTrainSet.append(se.encode(row['passenger_count']))
remainingRows -= 1
s += 1
if remainingRows == 0:
break
print "*** All items encoded! ***"
Explanation: Part I. Encoder
End of explanation
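As a quick sanity check (an added illustration, not part of the original experiment), every encoded value should be a binary vector of length n=109 containing exactly w=29 active bits:
example_sdr = se.encode(10000)  # an arbitrary in-range passenger count
print "encoding length:", len(example_sdr), "- active bits:", int(example_sdr.sum())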
allSequences = []
outputColumns = np.zeros(sp.getNumColumns(), dtype="uint32")
columnUsage = np.zeros(sp.getNumColumns(), dtype="uint32")
# Set epochs for spatial-pooling:
spEpochs = 4
for epoch in range(spEpochs):
print "Training epoch: " + str(epoch)
#randomize records in training set
randomIndex = np.random.permutation(np.arange(numTrainingItems))
for i in range(numTrainingItems):
sp.compute(trainSet[randomIndex[i]], True, outputColumns)
# Populate array for Yuwei plot:
for col in outputColumns.nonzero():
columnUsage[col] += 1
if epoch == (spEpochs - 1):
allSequences.append(outputColumns.nonzero())
for i in range(maxItems - numTrainingItems):
if i > 0 and i % 500 == 0:
print str(i) + " items processed"
sp.compute(nonTrainSet[i], False, outputColumns)
allSequences.append(outputColumns.nonzero())
# Populate array for Yuwei plot:
for col in outputColumns.nonzero():
columnUsage[col] += 1
print "*** All items processed! ***"
bins = 50
plt.hist(columnUsage, bins)
plt.xlabel("Number of times active")
plt.ylabel("Number of columns")
plt.savefig("columnUsage_SP")
plt.close()
Explanation: Part II. Spatial Pooler
End of explanation
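A quick illustrative check (added, not in the original script): with a 2% target sparsity over 2048 columns, each input should leave roughly sparseCols = 40 columns active, so the last processed input can be inspected directly:
print "columns active for the last input:", len(outputColumns.nonzero()[0]), "- target:", sparseCols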
spikeTrains = np.zeros((tm.numberOfCells(), totalTS), dtype = "uint32")
columnUsage = np.zeros(tm.numberOfColumns(), dtype="uint32")
spikeCount = np.zeros(totalTS, dtype="uint32")
ts = 0
entropyX = []
entropyY = []
negPCCX_cells = []
negPCCY_cells = []
numSpikesX = []
numSpikesY = []
numSpikes = 0
negPCCX_cols = []
negPCCY_cols = []
traceX = []
traceY = []
# Randomly generate the indices of the columns to keep track during simulation time
# keep track of 125 columns = 1000 cells
#colIndicesLarge = np.random.permutation(tm.numberOfColumns())[0:125]
for e in range(tmEpochs):
print ""
print "Epoch: " + str(e)
for s in range(maxItems):
if s % 1000 == 0:
print str(s) + " items processed"
tm.compute(allSequences[s][0].tolist(), learn=True)
for cell in tm.getActiveCells():
spikeTrains[cell, ts] = 1
numSpikes += 1
spikeCount[ts] += 1
# Obtain active columns:
activeColumnsIndices = [tm.columnForCell(i) for i in tm.getActiveCells()]
currentColumns = [1 if i in activeColumnsIndices else 0 for i in range(tm.numberOfColumns())]
for col in np.nonzero(currentColumns)[0]:
columnUsage[col] += 1
if ts > 0 and ts % int(totalTS * 0.1) == 0:
numSpikesX.append(ts)
numSpikesY.append(numSpikes)
numSpikes = 0
subSpikeTrains = subSample(spikeTrains, 1000, tm.numberOfCells(), ts, 1000)
(corrMatrix, numNegPCC) = computePWCorrelations(subSpikeTrains, removeAutoCorr=True)
negPCCX_cells.append(ts)
negPCCY_cells.append(numNegPCC)
bins = 300
plt.hist(corrMatrix.ravel(), bins, alpha=0.5)
plt.xlim(-0.1,0.2)
plt.xlabel("PCC")
plt.ylabel("Frequency")
plt.savefig("cellsHist" + str(ts))
plt.close()
traceX.append(ts)
#traceY.append(sum(1 for i in corrMatrix.ravel() if i > 0.5))
#traceY.append(np.std(corrMatrix))
#traceY.append(sum(1 for i in corrMatrix.ravel() if i > -0.05 and i < 0.1))
traceY.append(sum(1 for i in corrMatrix.ravel() if i > 0.0))
entropyX.append(ts)
entropyY.append(computeEntropy(subSpikeTrains))
#print "++ Analyzing correlations (whole columns) ++"
### First the LARGE subsample of columns:
colIndicesLarge = np.random.permutation(tm.numberOfColumns())[0:125]
subSpikeTrains = subSampleWholeColumn(spikeTrains, colIndicesLarge, tm.getCellsPerColumn(), ts, 1000)
(corrMatrix, numNegPCC) = computePWCorrelationsWithinCol(subSpikeTrains, True, tm.getCellsPerColumn())
negPCCX_cols.append(ts)
negPCCY_cols.append(numNegPCC)
#print "++ Generating histogram ++"
plt.hist(corrMatrix.ravel(), alpha=0.5)
plt.xlabel("PCC")
plt.ylabel("Frequency")
plt.savefig("colsHist_" + str(ts))
plt.close()
ts += 1
print "*** DONE ***"
# end for-epochs
plt.plot(traceX, traceY)
plt.xlabel("Time")
plt.ylabel("Positive PCC Count")
plt.savefig("positivePCCTrace")
plt.close()
sparsityTraceX = []
sparsityTraceY = []
for i in range(totalTS - 1000):
sparsityTraceX.append(i)
sparsityTraceY.append(np.mean(spikeCount[i:1000 + i]) / tm.numberOfCells())
plt.plot(sparsityTraceX, sparsityTraceY)
plt.xlabel("Time")
plt.ylabel("Sparsity")
plt.savefig("sparsityTrace")
plt.close()
# plot trace of negative PCCs
plt.plot(negPCCX_cells, negPCCY_cells)
plt.xlabel("Time")
plt.ylabel("Negative PCC Count")
plt.savefig("negPCCTrace_cells")
plt.close()
plt.plot(negPCCX_cols, negPCCY_cols)
plt.xlabel("Time")
plt.ylabel("Negative PCC Count")
plt.savefig("negPCCTrace_cols")
plt.close()
# print computeEntropy()
plt.plot(entropyX, entropyY)
plt.xlabel("Time")
plt.ylabel("Entropy")
plt.savefig("entropyTM")
plt.close()
bins = 50
plt.hist(columnUsage, bins)
plt.xlabel("Number of times active")
plt.ylabel("Number of columns")
plt.savefig("columnUsage_TM")
plt.close()
plt.plot(numSpikesX, numSpikesY)
plt.xlabel("Time")
plt.ylabel("Num Spikes")
plt.savefig("numSpikesTrace")
plt.close()
Explanation: Part III. Temporal Memory
End of explanation
simpleAccuracyTest("periodic", tm, allSequences)
subSpikeTrains = subSample(spikeTrains, 1000, tm.numberOfCells(), 0, 0)
isi = computeISI(subSpikeTrains)
#bins = np.linspace(np.min(isi), np.max(isi), 50)
bins = 100
plt.hist(isi, bins)
# plt.xlim(0,4000)
# plt.xlim(89500,92000)
plt.xlabel("ISI")
plt.ylabel("Frequency")
plt.savefig("isiTM")
plt.close()
print np.mean(isi)
print np.std(isi)
print np.std(isi)/np.mean(isi)
Explanation: Part IV. Analysis of Spike Trains
End of explanation
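For context (an added remark): the ratio printed above, std(isi)/mean(isi), is the coefficient of variation of the inter-spike intervals. A memoryless, Poisson-like spike train gives a value near 1, while values well below 1 indicate regular, clock-like firing.
cv_isi = np.std(isi) / np.mean(isi)  # added: same quantity, with an explicit name
print "ISI coefficient of variation:", cv_isi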
subSpikeTrains = subSample(spikeTrains, 100, tm.numberOfCells(), -1, 1000)
rasterPlot(subSpikeTrains, "TM")
Explanation: Raster plots
End of explanation
saveTM(tm)
# to load the TM back from the file do:
with open('tm.nta', 'rb') as f:
proto2 = TemporalMemoryProto_capnp.TemporalMemoryProto.read(f, traversal_limit_in_words=2**61)
tm = TM.read(proto2)
Explanation: Part V. Save TM
End of explanation
overlapMatrix = inputAnalysis(allSequences, "periodic", tm.numberOfColumns())
# show heatmap of overlap matrix
plt.imshow(overlapMatrix, cmap='spectral', interpolation='nearest')
cb = plt.colorbar()
cb.set_label('Overlap Score')
plt.savefig("overlapScore_heatmap")
plt.close()
# plt.show()
# generate histogram
bins = 60
(n, bins, patches) = plt.hist(overlapMatrix.ravel(), bins, alpha=0.5)
plt.xlabel("Overlap Score")
plt.ylabel("Frequency")
plt.savefig("overlapScore_hist")
plt.xlim(0.2,1)
plt.ylim(0,1000000)
plt.xlabel("Overlap Score")
plt.ylabel("Frequency")
plt.savefig("overlapScore_hist_ZOOM")
plt.close()
Explanation: Part VI. Analysis of Input
End of explanation |
11,804 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create the input required for a full TranSiesta calculation. Here you should create your first system with these settings
Step1: Create the electrode.
Supply the coordinates and the supercell.
Note that you have to decide the semi-infinite direction by ensuring that the electrode is periodic along that direction. You should also ensure nearest-neighbour electrode couplings only!
Step2: Create the device.
The basic unit-cell for a fully periodic device
system is 3-times the electrode size.
HINT | Python Code:
C = sisl.Atom(6)
Explanation: Create the input required for a full TranSiesta calculation. Here you should create your first system with these settings:
A pristine bulk Carbon-chain system.
The Carbon-chain should have a bond-length of $1.5\,\mathrm{Ang}$ and lots of vacuum in the transverse directions (to make it a chain)
You should decide in which direction the semi-infinite directions are.
Please use the script tselecs.sh to create the relevant input for TranSiesta.
Below you will find a skeleton code that only requires editing from your side.
End of explanation
elec = sisl.Geometry(<fill-in coordinates>,
atoms=C,
sc=<unit-cell size>)
elec.write('ELEC.fdf')
Explanation: Create the electrode.
Supply the coordinates and the supercell.
Note that you have to decide the semi-infinite direction by ensuring that the electrode is periodic along that direction. You should also ensure nearest-neighbour electrode couplings only!
End of explanation
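For orientation only, here is one way the blanks above could be filled in. This is an assumption on my part (chain along the first lattice vector, a 1.5 Ang bond length and roughly 15 Ang of vacuum in the transverse directions), not the official solution:
# Illustrative completion (assumed values): one C atom per cell, chain along x,
# and nsc=[3, 1, 1] so that only nearest-neighbour electrode couplings exist.
elec = sisl.Geometry([[0., 0., 0.]],
                     atoms=C,
                     sc=sisl.SuperCell([1.5, 15., 15.], nsc=[3, 1, 1]))
elec.write('ELEC.fdf')
With this choice the semi-infinite direction is the first lattice vector, so the device below would be tiled with axis=0.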
device = elec.tile(3, axis=<semi-infinite direction>)
device.write('DEVICE.fdf')
Explanation: Create the device.
The basic unit-cell for a fully periodic device
system is 3-times the electrode size.
HINT: If you are not fully sure why this is, please see TS 1.
End of explanation |
11,805 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
15/10
Characteristics of Python. Compiled vs. interpreted languages. Static vs. dynamic typing. Strongly typed vs. weakly typed.
Variables. Elementary operations. Atomic data types. Selection control structures
Python
Python is an interpreted, strongly typed, dynamically typed programming language
Compiled vs. Interpreted
There are countless ways to classify programming languages; one of them distinguishes languages by when they need a helper program so that the computer can understand the code written by the developer
Step1: Now, what happens when that integer grows very large? For example, if we assign it 9223372036854775807
Step2: And what if we now add 1 to it?
Step3: Floats
Step4: And what happens if we add a float to an integer?
Step5: Operations mixing floats and integers
And if we divide two integers, do we get a float?
Step6: On the other hand, if either of the numbers is a float
Step7: And what if we want integer division even though one of the numbers is a float?
Step8: This changes in Python 3, where / performs true (float) division (even if you pass it two integers) and // performs integer division.
Complex numbers
Step9: Although Python supports complex arithmetic, the truth is it is not one of the most frequently used data types. Still, it is good to know it exists.
Booleans
Python also supports the boolean data type
Step10: A boolean can also be created by comparing two numbers
Step11: You can even easily check whether a number falls within a range or not.
Step12: Many ways to print the number 5
Step13: Strings
In Python, strings can be built with either single quotes (') or double quotes ("); what you cannot do is open with one kind and close with the other.
Step14: In addition, multi-line strings can be built by using three consecutive single or double quotes
Step15: Indexing and slicing strings
If we want a single character of the string, we can access it simply by putting its position in square brackets (starting at 0)
Step16: H | o | l | a | | m | u | n | d | o
-------| ---------- | ------------
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Although we can also refer to that character by its position counted from the last position (starting at 1)
Step17: H | o | l | a | | m | u | n | d | o
-------| ---------- | ------------
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
-10 | -9 | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1
What you cannot do is change just a single letter of a string
Step18: Sometimes, though, what we want is only a part of the string, not all of it
Step19: Although the most common case is removing the last character, for example when it is a trailing newline (Enter)
Step20: Reading input from the keyboard
Step21: And to convert it to an integer
Step22: None
None is the null data type, which can only take one value
Step23: if-else
Step24: if-elif-else
Now, if we want to print whether one number is equal to, less than or greater than another, in Pascal or C we would have to use nested ifs, and the result is not entirely clear
Step25: In Python, by contrast, we can write it a bit more compactly and clearly
Step26: As we said before, the in operator can be used to find out whether an element is in a list
Step27: Any data type can be evaluated as a boolean.
The following are treated as false
Step28: short-if
Another way to write an if on a single line is | Python Code:
numero_entero = 5 # Assign the number 5 to the variable numero_entero
print numero_entero # Print the value held by numero_entero
print type(numero_entero) # Print the type of numero_entero
Explanation: 15/10
Characteristics of Python. Compiled vs. interpreted languages. Static vs. dynamic typing. Strongly typed vs. weakly typed.
Variables. Elementary operations. Atomic data types. Selection control structures
Python
Python is an interpreted, strongly typed, dynamically typed programming language
Compiled vs. Interpreted
There are countless ways to classify programming languages; one of them distinguishes languages by when they need a helper program so that the computer can understand the code written by the developer:
* Compiler: a program that reads all of the code at once and translates it into the binary language the computer understands. It is the equivalent of translating a book: each sentence is translated exactly once and the translated sentence is reused afterwards.
* Interpreter: a program that reads the code as it is needed and translates it to binary on the fly; the computer executes the instruction the interpreter asks for and the translation is then discarded, so when the same piece of code is reached again it has to be translated again. It is the equivalent of an interpreter at a conference: if the speaker repeats the same sentence more than once, the interpreter has to translate it again.
| Compiled | Interpreted
-------| ---------- | ------------
Development speed|Slow: you have to compile every time, a task that can take several minutes| Fast: you change the code and try it again
Execution speed| Fast: the whole program is compiled once and the generated executable is understood directly by the machine| Slow: the code has to be read, interpreted and translated while the program is running
Platform dependence|Yes: an executable is always generated for one particular architecture|No: installing the interpreter on the computer where you want to run the program is enough
Language dependence|No: once compiled, the compiler is not needed to run the program|Yes: whenever you want to run the program the interpreter must be installed
Although Python is an interpreted language, the code could actually be compiled, and the interpreter does some of this on its own when it generates the *.pyc files.
Static typing vs. dynamic typing
Another possible classification depends on whether a variable may change the type of data stored in it from one statement to the next (dynamic typing), or whether a type is assigned to the variable when it is defined and, even though its contents may change, the type of what is stored in it cannot change (static typing).
Strongly typed vs. weakly typed
Finally, we could also classify languages by how much they let us mix different data types. <br>
A language is said to be strongly typed when two variables of different types cannot be mixed, raising an error or an exception instead. <br>
Conversely, when two variables of different types can be mixed in an operation and still produce a result, the language is said to be weakly typed. <br>
For example, in JavaScript (a weakly typed language), adding the string '1' to the number 2 yields the string '12', whereas Python raises an exception when the code is run and Pascal raises an error when you try to compile the code.
Declaring and defining variables
In languages such as Pascal, the declaration and the definition of a variable happen at two different moments.
The declaration happens inside the var block; it is where the developer tells the compiler that a chunk of memory will be needed to store something of a particular data type, and that this chunk of memory will be referred to by a particular name. <br>
For example:
Pascal
var
n : integer;
This declares that there will be a variable called n in which integers between -32,768 and 32,767 can be stored.
The definition of that variable happens at the moment a value is assigned to it. <br>
For example:
Pascal
n := 5;
In Python, the declaration and the definition of a variable happen at the same moment:
Python
n = 5
n = 'Hola mundo'
The first line declares that a variable called n will be used, that it will store an integer, and defines it by assigning it the number 5. <br>
The second line overwrites that integer variable, changing its type to string, and assigns it the character string 'Hola mundo'
Goals and characteristics
In 1989 Guido van Rossum was part of the team developing Amoeba OS and realized that, when having to choose a language to solve certain problems, many programmers found themselves with two alternatives, neither of which was a perfect fit:
* Bash: a scripting language (the one the Linux console uses as its interpreter) which in this context fell short and complicated the solution
* C: a structured language with low-, mid- and high-level features, but which in these circumstances was too much, like killing a mosquito with a cannon.
Faced with this situation, and influenced by the ABC language he had worked on, he decided to create Python as a language halfway between bash and C with the following characteristics:
* Extensible (modules can be added in C and Python)
* Cross-platform (Amoeba OS, Unix, Windows and Mac)
* Simple, clear and straightforward syntax
* Strongly typed
* Dynamically typed
* Large standard library
* Introspection
The philosophy of Python
The Zen of Python spells out several rules that any code written in Python should follow.
Some of them are:
* Beautiful is better than ugly
* Explicit is better than implicit
* Simple is better than complex
* Complex is better than complicated
* Readability counts
* Special cases aren't special enough to break the rules
* Although practicality beats purity
* If the implementation is hard to explain, it's a bad idea
Structure of a Python program
The structure of a Python program is not as strict as in Pascal or C/C++: it does not have to start with any reserved word or with any particular procedure or function. Just by writing a couple of lines of code we can already say we have a Python program.
What is important to highlight is how the different blocks of code are identified. In Pascal a block of code is delimited with the reserved words Begin and End; in C/C++ it is delimited with braces ({ and }). In Python, however, indentation is used; that is, the number of spaces/tabs between the start of the line and its first non-whitespace character.
Data types
In Python you can ask a variable what type it is using the type function:
Python
tipo_de_la_variable = type(variable)
Integers
Python 2 distinguishes two integer types:
* int
* long
In Python 3 there is just a single integer type, int.
End of explanation
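As a small added illustration of the strong typing discussed above (this cell is not part of the original notes), mixing a string and an integer raises a TypeError instead of silently producing '12':
try:
    resultado = '1' + 2
except TypeError as e:
    print 'TypeError:', e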
numero_muy_grande = 9223372036854775807
print numero_muy_grande
print type(numero_entero)
Explanation: Now, what happens when that integer grows very large? For example, if we assign it 9223372036854775807
End of explanation
numero_muy_grande += 1
print numero_muy_grande
print type(numero_muy_grande)
Explanation: And what if we now add 1 to it?
End of explanation
numero_real = 7.5
print numero_real
print type(numero_real)
Explanation: Floats
End of explanation
print numero_entero + numero_real
print type(numero_entero + numero_real)
Explanation: And what happens if we add a float to an integer?
End of explanation
dividendo = 5
divisor = 3
resultado = dividendo / divisor
print resultado
print type(resultado)
Explanation: Operations mixing floats and integers
And if we divide two integers, do we get a float?
End of explanation
divisor = 3.0
resultado = dividendo / divisor
print resultado
print type(resultado)
Explanation: On the other hand, if either of the numbers is a float:
End of explanation
cociente = dividendo // divisor
print "cociente: ", cociente
print type(cociente)
resto = dividendo % divisor
print "resto: ", resto
print type(resto)
Explanation: And what if we want integer division even though one of the numbers is a float?
End of explanation
complejo = 5 + 3j
print complejo
print type(complejo)
complejo_cuadrado = complejo ** 2
print '(5+3j)*(5+3j) = 5*5 + 5*3j + 3j*5 + 3j*3j = (25-9) + 30j'
print complejo_cuadrado
Explanation: This changes in Python 3, where / performs true (float) division (even if you pass it two integers) and // performs integer division.
Complex numbers
End of explanation
boolean = True
print boolean
print not boolean
print type(boolean)
print True or False and True
Explanation: Although Python supports complex arithmetic, the truth is it is not one of the most frequently used data types. Still, it is good to know it exists.
Booleans
Python also supports the boolean data type:
End of explanation
boolean = 5 < 7
print boolean
Explanation: A boolean can also be created by comparing two numbers:
End of explanation
numero = 7
en_rango = 5 < numero < 9
fuera_de_rango = 5 < numero < 6
print 'numero vale {0}'.format(numero)
print '5 < numero < 9: {pertenece}'.format(pertenece=en_rango)
print '5 < numero < 6: %s' % fuera_de_rango
Explanation: You can even easily check whether a number falls within a range or not.
End of explanation
print " %d %i %s %ld %lu %0.4d %4d" % (5, 5, 5, 5, 5, 5, 5)
Explanation: Many ways to print the number 5
End of explanation
cadena_caracteres = 'Hola mundo'
print cadena_caracteres
print type(cadena_caracteres)
cadena_caracteres = "Y con doble comilla?, de qué tipo es?"
print cadena_caracteres
print type(cadena_caracteres)
Explanation: Strings
In Python, strings can be built with either single quotes (') or double quotes ("); what you cannot do is open with one kind and close with the other.
End of explanation
cadena_caracteres = """y si quiero
usar un string
que se escriba en varias
líneas?."""
print cadena_caracteres
print type(cadena_caracteres)
Explanation: In addition, multi-line strings can be built by using three consecutive single or double quotes:
End of explanation
cadena_caracteres = 'Hola mundo'
print cadena_caracteres
print 'El septimo caracter de la cadena "{0}" es "{1}"'.format(cadena_caracteres, cadena_caracteres[6])
Explanation: Indexing and slicing strings
If we want a single character of the string, we can access it simply by putting its position in square brackets (starting at 0):
End of explanation
print 'El septimo caracter de la cadena "{0}" es "{1}"'.format(cadena_caracteres, cadena_caracteres[-4])
Explanation: H | o | l | a | | m | u | n | d | o
-------| ---------- | ------------
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Although we can also refer to that character by its position counted from the last position (starting at 1):
End of explanation
cadena_caracteres[6] = 'x'
Explanation: H | o | l | a | | m | u | n | d | o
-------| ---------- | ------------
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
-10 | -9 | -8 | -7 | -6 | -5 | -4 | -3 | -2 | -1
What you cannot do is change just a single letter of a string:
End of explanation
print cadena_caracteres[2:8] # With both indices positive
print cadena_caracteres[2:-2] # With a positive and a negative index
print cadena_caracteres[-8:8] # With a negative and a positive index
print cadena_caracteres[-8:-2] # With both indices negative
print cadena_caracteres[2:-2:3] # And taking every third character
Explanation: Sometimes, though, what we want is only a part of the string, not all of it:
End of explanation
cadena_caracteres = 'Hola mundo\n'
print cadena_caracteres
print cadena_caracteres[:-1]
print cadena_caracteres[:-5]
Explanation: Although the most common case is removing the last character, for example when it is a trailing newline (Enter):
End of explanation
numero = raw_input('Ingrese un número')
print numero
print type(numero)
Explanation: Reading input from the keyboard
End of explanation
numero = int(numero)
print numero
print type(numero)
Explanation: And to convert it to an integer:
End of explanation
numero1 = 1
numero2 = 2
if numero1 == numero2:
print 'Los números son iguales'
print 'Este string se imprime siempre'
print 'Ahora cambio el valor de numero2'
numero2 = 1
if numero1 == numero2:
print 'Los números son iguales'
print 'Este string se imprime siempre'
Explanation: None
None is the null data type; it can only take one value: None. Although it may seem useless, it is actually used a lot.
Control structures
Just as Pascal delimits blocks of code with the reserved words begin and end, Python uses indentation (spaces) to determine what is inside a control structure and what is not.
if
End of explanation
numero1 = 1
numero2 = 2
if numero1 == numero2:
print 'Los números son iguales'
print 132
print 23424
else:
print 'Los números son distintos'
Explanation: if-else
End of explanation
# The way we would have to do it in Pascal or C.
if numero1 == numero2:
print 'Los dos números son iguales'
else:
if numero1 > numero2:
print 'numero1 es mayor a numero2'
else:
print 'numero1 es menor a numero2'
Explanation: if-elif-else
Now, if we want to print whether one number is equal to, less than or greater than another, in Pascal or C we would have to use nested ifs, and the result is not entirely clear:
End of explanation
# Shorter and more elegant in Python.
if numero1 == numero2:
print 'Los dos números son iguales'
elif numero1 > numero2:
print 'numero1 es mayor a numero2'
else:
print 'numero1 es menor a numero2'
Explanation: In Python, by contrast, we can write it a bit more compactly and clearly:
End of explanation
lista_de_numeros = []
if lista_de_numeros:
print 'la lista tiene elementos'
else:
print 'la lista no tiene elementos'
Explanation: As we said before, the in operator can be used to find out whether an element is in a list:
End of explanation
if lista_de_numeros:
print 'La lista no esta vacía'
if False or None or [] or () or {} or 0:
print 'Alguna de las anteriores no era falsa'
else:
print 'Todos los valores anteriores son consideradas como Falso'
Explanation: Any data type can be evaluated as a boolean.
The following are treated as false:
* None
* False for bool
* zero for every numeric type: 0, 0L, 0.0, 0j
* empty for any sequence or dictionary: '', (), [], {}
Therefore, you can tell whether a list is empty or not simply with:
End of explanation
num = 5
es_par = True if (num % 2 == 0) else False
print '5 es par?:', es_par
num = 6
es_par = True if (num % 2 == 0) else False
print '6 es par?:', es_par
nulo = None
print nulo
print type(nulo)
Explanation: short-if
Another way to write an if on a single line is:
Python
variable = valor1 if condicion else valor2
For example:
End of explanation |
11,806 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Tutorial
Step1: Unfortunately simply reporting the sample mean doesn't do much for us, as we don't know how it relates to the population mean. To get a sense for how it might relate, we can look for how much variance there is in our sample. Higher variance indicates instability and uncertainty.
Step2: This still doesn't do that much for us, to really get a sense of how our sample mean relates to the population mean we need to compute a standard error. The standard error is a measure of the variance of the sample mean.
IMPORTANT
Computing a standard error involves assuming that the way you sample is unbiased, and that the data are normal and independent. If these conditions are violated, your standard error will be wrong. There are ways of testing for this and correcting.
The formula for standard error is:
$$SE = \frac{\sigma}{\sqrt{n}}$$
Where $\sigma$ is the sample standard deviation and $n$ is the number of samples.
Step3: There is a function in scipy's stats library for calculating the standard error. Note that this function by default contains a degrees-of-freedom correction that is often not necessary (for large enough samples, it's effectively irrelevant). You can omit the correction by setting the parameter ddof to 0.
Step4: Assuming our data are normally distributed, we can use the standard error to compute our confidence interval. To do this we first set our desired confidence level, say 95%, we then determine how many standard deviations contain 95% of the mass. Turns out that the 95% of the mass lies between -1.96 and 1.96 on a standard normal distribution. When the samples are large enough (generally > 30 is taken as a threshold) the Central Limit Theorem applies and normality can be safely assumed; if sample sizes are smaller, a safer approach is to use a $t$-distribution with appropriately specified degrees of freedom. The actual way to compute the values is by using a cumulative distribution function (CDF). If you are not familiar with CDFs, inverse CDFs, and their companion PDFs, you can read about them here and here. Look here for information on the $t$-distribution. We can check the 95% number using one of the Python functions.
NOTE
Step5: Here's the trick
Now, rather than reporting our sample mean without any sense of the probability of it being correct, we can compute an interval and be much more confident that the population mean lies in that interval. To do this we take our sample mean $\mu$ and report $\left(\mu-1.96 SE , \mu+1.96SE\right)$.
This works because assuming normality, that interval will contain the population mean 95% of the time.
SUBTLETY
Step6: Further Reading
This is only a brief introduction, Wikipedia has excellent articles detailing these subjects in greater depth. Let's go back to our heights example. Since the sample size is small, we'll use a $t$-test.
Step7: There is a built-in function in scipy.stats for computing the interval. Remember to specify the degrees of freedom.
Step8: Note that as your confidence increases, the interval necessarily widens.
Assuming normality, there's also a built in function that will compute our interval for us. This time you don't need to specify the degrees of freedom. Note that at a corresponding level of confidence, the interval calculated using the normal distribution is narrower than the interval calcuated using the $t$-distribution.
Step9: What does this mean?
Confidence intervals allow us to set our desired confidence, and then report a range that will likely contain the population mean. The higher our desired confidence, the larger range we report. In general once can never report a single point value, because the probability that any given point is the true population mean is incredibly small. Let's see how our intervals tighten as we change sample size.
Step10: Visualizing Confidence Intervals
Here is some code to visualize a confidence interval on a graph. Feel free to play around with it.
Step11: Miscalibration and Violation of Assumptions
The computation of a standard deviation, standard error, and confidence interval all rely on certain assumptions. If these assumptions are violated then the 95% confidence interval will not necessarily contain the population parameter 95% of the time. We say that in this case the confidence interval is miscalibrated. Here is an example.
Example
Step12: It turns out that for larger sample sizes, you should see the sample mean asymptotically converge to zero. This is because the process is still centered around zero, but let's check if that's true. We'll vary the number of samples draw, and look for convergence as we increase sample size.
Step13: Definitely looks like there's some convergence, we can also check what the mean of the sample means is.
Step14: Pretty close to zero. We could also derive symbolically that the mean is zero, but let's assume that we've convinced ourselves with the simple empiral analysis. Now that we know the population mean, we can check the calibration of confidence intervals. First we'll write two helper functions which compute a naive interval for some input data, and check whether the interval contains the true mean, 0.
Step15: Now we'll run many trials, in each we'll sample some data, compute a confidence interval, and then check if the confidence interval contains the population mean. We'll keep a running tally, and we should expect to see 95% of the trials succeed if the intervals are calibrated correctly. | Python Code:
import numpy as np
import seaborn as sns
from scipy import stats
import matplotlib.pyplot as plt
# We'll set a seed here so our runs are consistent
np.random.seed(10)
# Let's define some 'true' population parameters, we'll pretend we don't know these.
POPULATION_MU = 64
POPULATION_SIGMA = 5
# Generate our sample by drawing from the population distribution
sample_size = 10
heights = np.random.normal(POPULATION_MU, POPULATION_SIGMA, sample_size)
print heights
mean_height = np.mean(heights)
print 'sample mean: ', mean_height
Explanation: Tutorial: Confidence Intervals
By Delaney Granizo-Mackenzie, Jeremiah Johnson, and Gideon Wulfsohn
Part of the Quantopian Lecture Series:
http://www.quantopian.com/lectures
http://github.com/quantopian/research_public
Notebook released under the Creative Commons Attribution 4.0 License.
Sample Mean vs. Population Mean
Sample means and population means are different. Generally, we want to know about a population mean, but we can only calculate a sample mean. We then want to use the sample mean to estimate the population mean. We use confidence intervals in an attempt to determine how accurately our sample mean estimates the population mean.
Confidence Interval
If I asked you to estimate the average height of a woman in the USA, you might do this by measuring 10 women and estimating that the mean of that sample was close to the population. Let's try that.
End of explanation
print('sample standard deviation: ', np.std(heights))
Explanation: Unfortunately simply reporting the sample mean doesn't do much for us, as we don't know how it relates to the population mean. To get a sense for how it might relate, we can look for how much variance there is in our sample. Higher variance indicates instability and uncertainty.
End of explanation
SE = np.std(heights) / np.sqrt(sample_size)
print('standard error: ', SE)
Explanation: This still doesn't do that much for us, to really get a sense of how our sample mean relates to the population mean we need to compute a standard error. The standard error is a measure of the variance of the sample mean.
IMPORTANT
Computing a standard error involves assuming that the way you sample is unbiased, and that the data are normal and independent. If these conditions are violated, your standard error will be wrong. There are ways of testing for this and correcting.
The formula for the standard error is:
$$SE = \frac{\sigma}{\sqrt{n}}$$
Where $\sigma$ is the sample standard deviation and $n$ is the number of samples.
End of explanation
stats.sem(heights, ddof=0)
Explanation: There is a function in scipy's stats library for calculating the standard error. Note that this function by default contains a degrees-of-freedom correction that is often not necessary (for large enough samples, it's effectively irrelevant). You can omit the correction by setting the parameter ddof to 0.
End of explanation
# Set up the x axis
x = np.linspace(-5,5,100)
# Here's the normal distribution
y = stats.norm.pdf(x,0,1)
plt.plot(x,y)
# Plot our bounds
plt.vlines(-1.96, 0, 1, colors='r', linestyles='dashed')
plt.vlines(1.96, 0, 1, colors='r', linestyles='dashed')
# Shade the area
fill_x = np.linspace(-1.96, 1.96, 500)
fill_y = stats.norm.pdf(fill_x, 0, 1)
plt.fill_between(fill_x, fill_y)
plt.xlabel('$\sigma$')
plt.ylabel('Normal PDF');
Explanation: Assuming our data are normally distributed, we can use the standard error to compute our confidence interval. To do this we first set our desired confidence level, say 95%, and then determine how many standard deviations contain 95% of the mass. It turns out that 95% of the mass lies between -1.96 and 1.96 on a standard normal distribution. When the samples are large enough (generally > 30 is taken as a threshold) the Central Limit Theorem applies and normality can be safely assumed; if sample sizes are smaller, a safer approach is to use a $t$-distribution with appropriately specified degrees of freedom. The actual way to compute the values is by using a cumulative distribution function (CDF). If you are not familiar with CDFs, inverse CDFs, and their companion PDFs, you can read about them here and here. Look here for information on the $t$-distribution. We can check the 95% number using one of the Python functions.
NOTE: Be careful when applying the Central Limit Theorem, however, as many datasets in finance are fundamentally non-normal and it is not safe to apply the theorem casually or without attention to subtlety.
We can visualize the 95% mass bounds here.
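As a quick check of that 1.96 figure, here is a minimal sketch using scipy's inverse CDF (assuming the imports above):
```python
# Sanity check: ~95% of the mass of a standard normal lies within +/- 1.96.
print(stats.norm.ppf(0.975))                         # inverse CDF: about 1.96
print(stats.norm.cdf(1.96) - stats.norm.cdf(-1.96))  # mass between the bounds: about 0.95
```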
End of explanation
np.random.seed(8309)
n = 100 # number of samples to take
samples = [np.random.normal(loc=0, scale=1, size=100) for _ in range(n)]
fig, ax = plt.subplots(figsize=(10, 7))
for i in np.arange(1, n, 1):
sample_mean = np.mean(samples[i]) # calculate sample mean
se = stats.sem(samples[i]) # calculate sample standard error
h = se*stats.t.ppf((1+0.95)/2, len(samples[i])-1) # calculate t; 2nd param is d.o.f.
sample_ci = [sample_mean - h, sample_mean + h]
if ((sample_ci[0] <= 0) and (0 <= sample_ci[1])):
plt.plot((sample_ci[0], sample_ci[1]), (i, i), color='blue', linewidth=1);
plt.plot(np.mean(samples[i]), i, 'bo');
else:
plt.plot((sample_ci[0], sample_ci[1]), (i, i), color='red', linewidth=1);
plt.plot(np.mean(samples[i]), i, 'ro');
plt.axvline(x=0, ymin=0, ymax=1, linestyle='--', label = 'Population Mean');
plt.legend(loc='best');
plt.title('100 95% Confidence Intervals for mean of 0');
Explanation: Here's the trick
Now, rather than reporting our sample mean without any sense of the probability of it being correct, we can compute an interval and be much more confident that the population mean lies in that interval. To do this we take our sample mean $\mu$ and report $\left(\mu-1.96 SE , \mu+1.96SE\right)$.
This works because assuming normality, that interval will contain the population mean 95% of the time.
SUBTLETY:
In any given case, the true value of the estimate and the bounds of the confidence interval are fixed. It is incorrect to say that "The national mean female height is between 63 and 65 inches with 95% probability," but unfortunately this is a very common misinterpretation. Rather, the 95% refers instead to the fact that over many computations of a 95% confidence interval, the true value will be in the interval in 95% of the cases (assuming correct calibration of the confidence interval, which we will discuss later). But in fact for a single sample and the single confidence interval computed from it, we have no way of assessing the probability that the interval contains the population mean. The visualization below demonstrates this.
In the code block below, there are two things to note. First, although the sample size is sufficiently large to assume normality, we're using a $t$-distribution, just to demonstrate how it is used. Second, the $t$-values needed (analogous to the $\pm1.96$ used above) are being calculated from the inverted cumulative density function, the ppf in scipy.stats. The $t$-distribution requires the extra parameter degrees of freedom (d.o.f), which is the size of the sample minus one.
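For intuition, a small sketch (assuming the imports above) showing how the $t$ critical value for a 95% interval depends on the degrees of freedom and approaches the normal value 1.96 for large samples:
```python
# t critical values for a 95% interval shrink toward 1.96 as the d.o.f. grows.
for df in [5, 10, 30, 100]:
    print(df, stats.t.ppf((1 + 0.95) / 2, df))
print('normal:', stats.norm.ppf((1 + 0.95) / 2))
```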
End of explanation
# standard error SE was already calculated
t_val = stats.t.ppf((1+0.95)/2, 9) # d.o.f. = 10 - 1
print('sample mean height:', mean_height)
print('t-value:', t_val)
print('standard error:', SE)
print('confidence interval:', (mean_height - t_val * SE, mean_height + t_val * SE))
Explanation: Further Reading
This is only a brief introduction, Wikipedia has excellent articles detailing these subjects in greater depth. Let's go back to our heights example. Since the sample size is small, we'll use a $t$-test.
End of explanation
print('99% confidence interval:', stats.t.interval(0.99, df=9, loc=mean_height, scale=SE))
print('95% confidence interval:', stats.t.interval(0.95, df=9, loc=mean_height, scale=SE))
print('80% confidence interval:', stats.t.interval(0.8, df=9, loc=mean_height, scale=SE))
Explanation: There is a built-in function in scipy.stats for computing the interval. Remember to specify the degrees of freedom.
End of explanation
print(stats.norm.interval(0.99, loc=mean_height, scale=SE))
print(stats.norm.interval(0.95, loc=mean_height, scale=SE))
print(stats.norm.interval(0.80, loc=mean_height, scale=SE))
Explanation: Note that as your confidence increases, the interval necessarily widens.
Assuming normality, there's also a built-in function that will compute our interval for us. This time you don't need to specify the degrees of freedom. Note that at a corresponding level of confidence, the interval calculated using the normal distribution is narrower than the interval calculated using the $t$-distribution.
End of explanation
np.random.seed(10)
sample_sizes = [10, 100, 1000]
for s in sample_sizes:
heights = np.random.normal(POPULATION_MU, POPULATION_SIGMA, s)
SE = np.std(heights) / np.sqrt(s)
    print(stats.norm.interval(0.95, loc=mean_height, scale=SE))
Explanation: What does this mean?
Confidence intervals allow us to set our desired confidence, and then report a range that will likely contain the population mean. The higher our desired confidence, the larger the range we report. In general one can never report a single point value, because the probability that any given point is the true population mean is incredibly small. Let's see how our intervals tighten as we change sample size.
End of explanation
sample_size = 100
heights = np.random.normal(POPULATION_MU, POPULATION_SIGMA, sample_size)
SE = np.std(heights) / np.sqrt(sample_size)
(l, u) = stats.norm.interval(0.95, loc=np.mean(heights), scale=SE)
print (l, u)
plt.hist(heights, bins=20)
plt.xlabel('Height')
plt.ylabel('Frequency')
# Just for plotting
y_height = 5
plt.plot([l, u], [y_height, y_height], '-', color='r', linewidth=4, label='Confidence Interval')
plt.plot(np.mean(heights), y_height, 'o', color='r', markersize=10);
Explanation: Visualizing Confidence Intervals
Here is some code to visualize a confidence interval on a graph. Feel free to play around with it.
End of explanation
def generate_autocorrelated_data(theta, mu, sigma, N):
# Initialize the array
X = np.zeros((N, 1))
for t in range(1, N):
# X_t = theta * X_{t-1} + epsilon
X[t] = theta * X[t-1] + np.random.normal(mu, sigma)
return X
X = generate_autocorrelated_data(0.5, 0, 1, 100)
plt.plot(X);
plt.xlabel('t');
plt.ylabel('X[t]');
Explanation: Miscalibration and Violation of Assumptions
The computation of a standard deviation, standard error, and confidence interval all rely on certain assumptions. If these assumptions are violated then the 95% confidence interval will not necessarily contain the population parameter 95% of the time. We say that in this case the confidence interval is miscalibrated. Here is an example.
Example: Autocorrelated Data
If your data generating process is autocorrelated, then estimates of standard deviation will be wrong. This is because autocorrelated processes tend to produce more extreme values than normally distributed processes. This is due to new values being dependent on previous values: series that are already far from the mean are likely to stay far from the mean. To check this we'll generate some autocorrelated data according to the following process.
$$X_t = \theta X_{t-1} + \epsilon$$
$$\epsilon \sim \mathcal{N}(0,1)$$
End of explanation
sample_means = np.zeros(200-1)
for i in range(1, 200):
X = generate_autocorrelated_data(0.5, 0, 1, i * 10)
sample_means[i-1] = np.mean(X)
plt.bar(range(1, 200), sample_means);
plt.xlabel('Sample Size');
plt.ylabel('Sample Mean');
Explanation: It turns out that for larger sample sizes, you should see the sample mean asymptotically converge to zero. This is because the process is still centered around zero, but let's check if that's true. We'll vary the number of samples drawn, and look for convergence as we increase sample size.
End of explanation
np.mean(sample_means)
Explanation: Definitely looks like there's some convergence, we can also check what the mean of the sample means is.
End of explanation
def compute_unadjusted_interval(X):
T = len(X)
# Compute mu and sigma MLE
mu = np.mean(X)
sigma = np.std(X)
SE = sigma / np.sqrt(T)
# Compute the bounds
return stats.norm.interval(0.95, loc=mu, scale=SE)
# We'll make a function that returns true when the computed bounds contain 0
def check_unadjusted_coverage(X):
l, u = compute_unadjusted_interval(X)
# Check to make sure l <= 0 <= u
if l <= 0 and u >= 0:
return True
else:
return False
Explanation: Pretty close to zero. We could also derive symbolically that the mean is zero, but let's assume that we've convinced ourselves with the simple empirical analysis. Now that we know the population mean, we can check the calibration of confidence intervals. First we'll write two helper functions which compute a naive interval for some input data, and check whether the interval contains the true mean, 0.
End of explanation
T = 100
trials = 500
times_correct = 0
for i in range(trials):
X = generate_autocorrelated_data(0.5, 0, 1, T)
if check_unadjusted_coverage(X):
times_correct += 1
print('Empirical Coverage: ', times_correct/float(trials))
print('Expected Coverage: ', 0.95)
Explanation: Now we'll run many trials, in each we'll sample some data, compute a confidence interval, and then check if the confidence interval contains the population mean. We'll keep a running tally, and we should expect to see 95% of the trials succeed if the intervals are calibrated correctly.
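As a complementary check (a sketch under the same setup, not part of the original lecture), one can compare the naive standard error computed from a single series with the spread of the sample mean actually observed across many independent simulations of the autocorrelated process; with positive autocorrelation the naive SE understates the true variability:
```python
# Compare the naive SE from one series with the empirical std of the sample mean
# across many independent simulations of the same process.
means = []
for _ in range(500):
    X = generate_autocorrelated_data(0.5, 0, 1, 100)
    means.append(np.mean(X))
print('empirical std of the sample mean:', np.std(means))
print('naive SE from a single series:   ',
      stats.sem(generate_autocorrelated_data(0.5, 0, 1, 100).flatten(), ddof=0))
```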
End of explanation |
11,807 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Matrix generation
Init symbols for sympy
Step1: Lame params
Step2: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
Step3: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
Step4: Christoffel symbols
Step5: Gradient of vector
$$
\left( \begin{array}{c}
\nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3
\end{array} \right)
= B \cdot
\left( \begin{array}{c}
u_1 \\ \frac{\partial u_1}{\partial \alpha_1} \\ \frac{\partial u_1}{\partial \alpha_2} \\ \frac{\partial u_1}{\partial \alpha_3} \\
u_2 \\ \frac{\partial u_2}{\partial \alpha_1} \\ \frac{\partial u_2}{\partial \alpha_2} \\ \frac{\partial u_2}{\partial \alpha_3} \\
u_3 \\ \frac{\partial u_3}{\partial \alpha_1} \\ \frac{\partial u_3}{\partial \alpha_2} \\ \frac{\partial u_3}{\partial \alpha_3}
\end{array} \right)
= B \cdot D \cdot
\left( \begin{array}{c}
u^1 \\ \frac{\partial u^1}{\partial \alpha_1} \\ \frac{\partial u^1}{\partial \alpha_2} \\ \frac{\partial u^1}{\partial \alpha_3} \\
u^2 \\ \frac{\partial u^2}{\partial \alpha_1} \\ \frac{\partial u^2}{\partial \alpha_2} \\ \frac{\partial u^2}{\partial \alpha_3} \\
u^3 \\ \frac{\partial u^3}{\partial \alpha_1} \\ \frac{\partial u^3}{\partial \alpha_2} \\ \frac{\partial u^3}{\partial \alpha_3}
\end{array} \right)
$$
Step6: Physical coordinates
$u_i=u_{[i]} H_i$
Step7: Strain tensor
$$
\left( \begin{array}{c}
\varepsilon_{11} \\ \varepsilon_{22} \\ \varepsilon_{33} \\
2\varepsilon_{12} \\ 2\varepsilon_{13} \\ 2\varepsilon_{23}
\end{array} \right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left( \begin{array}{c}
\nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3
\end{array} \right)
$$
Step8: Virtual work
Step9: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$$
\left( \begin{array}{c}
u_1 \\ \frac{\partial u_1}{\partial \alpha_1} \\ \frac{\partial u_1}{\partial \alpha_2} \\ \frac{\partial u_1}{\partial \alpha_3} \\
u_2 \\ \frac{\partial u_2}{\partial \alpha_1} \\ \frac{\partial u_2}{\partial \alpha_2} \\ \frac{\partial u_2}{\partial \alpha_3} \\
u_3 \\ \frac{\partial u_3}{\partial \alpha_1} \\ \frac{\partial u_3}{\partial \alpha_2} \\ \frac{\partial u_3}{\partial \alpha_3}
\end{array} \right)
= T \cdot
\left( \begin{array}{c}
u \\ \frac{\partial u}{\partial \alpha_1} \\ \gamma \\ \frac{\partial \gamma}{\partial \alpha_1} \\ w \\ \frac{\partial w}{\partial \alpha_1}
\end{array} \right)
$$
Step10: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$$
\left( \begin{array}{c}
u^1 \\ \frac{\partial u^1}{\partial \alpha_1} \\ \frac{\partial u^1}{\partial \alpha_2} \\ \frac{\partial u^1}{\partial \alpha_3} \\
u^2 \\ \frac{\partial u^2}{\partial \alpha_1} \\ \frac{\partial u^2}{\partial \alpha_2} \\ \frac{\partial u^2}{\partial \alpha_3} \\
u^3 \\ \frac{\partial u^3}{\partial \alpha_1} \\ \frac{\partial u^3}{\partial \alpha_2} \\ \frac{\partial u^3}{\partial \alpha_3}
\end{array} \right)
= L \cdot
\left( \begin{array}{c}
u_{10} \\ \frac{\partial u_{10}}{\partial \alpha_1} \\ u_{11} \\ \frac{\partial u_{11}}{\partial \alpha_1} \\ u_{12} \\ \frac{\partial u_{12}}{\partial \alpha_1} \\
u_{30} \\ \frac{\partial u_{30}}{\partial \alpha_1} \\ u_{31} \\ \frac{\partial u_{31}}{\partial \alpha_1} \\ u_{32} \\ \frac{\partial u_{32}}{\partial \alpha_1}
\end{array} \right)
$$
Step11: Mass matrix | Python Code:
from sympy import *
from geom_util import *
from sympy.vector import CoordSys3D
N = CoordSys3D('N')
alpha1, alpha2, alpha3 = symbols("alpha_1 alpha_2 alpha_3", real = True, positive=True)
init_printing()
%matplotlib inline
%reload_ext autoreload
%autoreload 2
%aimport geom_util
Explanation: Matrix generation
Init symbols for sympy
End of explanation
H1=symbols('H1')
H2=S(1)
H3=S(1)
H=[H1, H2, H3]
DIM=3
dH = zeros(DIM,DIM)
for i in range(DIM):
for j in range(DIM):
if (i == 0 and j != 1):
dH[i,j]=Symbol('H_{{{},{}}}'.format(i+1,j+1))
dH
Explanation: Lame params
End of explanation
G_up = getMetricTensorUpLame(H1, H2, H3)
Explanation: Metric tensor
${\displaystyle \hat{G}=\sum_{i,j} g^{ij}\vec{R}_i\vec{R}_j}$
End of explanation
G_down = getMetricTensorDownLame(H1, H2, H3)
Explanation: ${\displaystyle \hat{G}=\sum_{i,j} g_{ij}\vec{R}^i\vec{R}^j}$
End of explanation
DIM=3
G_down_diff = MutableDenseNDimArray.zeros(DIM, DIM, DIM)
for i in range(DIM):
for j in range(DIM):
for k in range(DIM):
G_down_diff[i,i,k]=2*H[i]*dH[i,k]
GK = getChristoffelSymbols2(G_up, G_down_diff, (alpha1, alpha2, alpha3))
GK
Explanation: Christoffel symbols
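For reference, a hedged note: the symbols returned by getChristoffelSymbols2 above are assumed to follow the standard definition of the Christoffel symbols of the second kind,
$$\Gamma^k_{ij} = \frac{1}{2}\, g^{kl}\left(\frac{\partial g_{jl}}{\partial \alpha_i} + \frac{\partial g_{il}}{\partial \alpha_j} - \frac{\partial g_{ij}}{\partial \alpha_l}\right)$$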
End of explanation
def row_index_to_i_j_grad(i_row):
return i_row // 3, i_row % 3
B = zeros(9, 12)
B[0,1] = S(1)
B[1,2] = S(1)
B[2,3] = S(1)
B[3,5] = S(1)
B[4,6] = S(1)
B[5,7] = S(1)
B[6,9] = S(1)
B[7,10] = S(1)
B[8,11] = S(1)
for row_index in range(9):
i,j=row_index_to_i_j_grad(row_index)
B[row_index, 0] = -GK[i,j,0]
B[row_index, 4] = -GK[i,j,1]
B[row_index, 8] = -GK[i,j,2]
B
Explanation: Gradient of vector
$$
\left( \begin{array}{c}
\nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3
\end{array} \right)
= B \cdot
\left( \begin{array}{c}
u_1 \\ \frac{\partial u_1}{\partial \alpha_1} \\ \frac{\partial u_1}{\partial \alpha_2} \\ \frac{\partial u_1}{\partial \alpha_3} \\
u_2 \\ \frac{\partial u_2}{\partial \alpha_1} \\ \frac{\partial u_2}{\partial \alpha_2} \\ \frac{\partial u_2}{\partial \alpha_3} \\
u_3 \\ \frac{\partial u_3}{\partial \alpha_1} \\ \frac{\partial u_3}{\partial \alpha_2} \\ \frac{\partial u_3}{\partial \alpha_3}
\end{array} \right)
= B \cdot D \cdot
\left( \begin{array}{c}
u^1 \\ \frac{\partial u^1}{\partial \alpha_1} \\ \frac{\partial u^1}{\partial \alpha_2} \\ \frac{\partial u^1}{\partial \alpha_3} \\
u^2 \\ \frac{\partial u^2}{\partial \alpha_1} \\ \frac{\partial u^2}{\partial \alpha_2} \\ \frac{\partial u^2}{\partial \alpha_3} \\
u^3 \\ \frac{\partial u^3}{\partial \alpha_1} \\ \frac{\partial u^3}{\partial \alpha_2} \\ \frac{\partial u^3}{\partial \alpha_3}
\end{array} \right)
$$
End of explanation
P=zeros(12,12)
P[0,0]=H[0]
P[1,0]=dH[0,0]
P[1,1]=H[0]
P[2,0]=dH[0,1]
P[2,2]=H[0]
P[3,0]=dH[0,2]
P[3,3]=H[0]
P[4,4]=H[1]
P[5,4]=dH[1,0]
P[5,5]=H[1]
P[6,4]=dH[1,1]
P[6,6]=H[1]
P[7,4]=dH[1,2]
P[7,7]=H[1]
P[8,8]=H[2]
P[9,8]=dH[2,0]
P[9,9]=H[2]
P[10,8]=dH[2,1]
P[10,10]=H[2]
P[11,8]=dH[2,2]
P[11,11]=H[2]
P=simplify(P)
P
B_P = zeros(9,9)
for i in range(3):
for j in range(3):
row_index = i*3+j
B_P[row_index, row_index] = 1/(H[i]*H[j])
Grad_U_P = simplify(B_P*B*P)
Grad_U_P
Explanation: Physical coordinates
$u_i=u_{[i]} H_i$
End of explanation
E=zeros(6,9)
E[0,0]=1
E[1,4]=1
E[2,8]=1
E[3,1]=1
E[3,3]=1
E[4,2]=1
E[4,6]=1
E[5,5]=1
E[5,7]=1
E
StrainL=simplify(E*Grad_U_P)
StrainL
def E_NonLinear(grad_u):
N = 3
du = zeros(N, N)
# print("===Deformations===")
for i in range(N):
for j in range(N):
index = i*N+j
du[j,i] = grad_u[index]
# print("========")
I = eye(3)
a_values = S(1)/S(2) * du * G_up
E_NL = zeros(6,9)
E_NL[0,0] = a_values[0,0]
E_NL[0,3] = a_values[0,1]
E_NL[0,6] = a_values[0,2]
E_NL[1,1] = a_values[1,0]
E_NL[1,4] = a_values[1,1]
E_NL[1,7] = a_values[1,2]
E_NL[2,2] = a_values[2,0]
E_NL[2,5] = a_values[2,1]
E_NL[2,8] = a_values[2,2]
E_NL[3,1] = 2*a_values[0,0]
E_NL[3,4] = 2*a_values[0,1]
E_NL[3,7] = 2*a_values[0,2]
E_NL[4,0] = 2*a_values[2,0]
E_NL[4,3] = 2*a_values[2,1]
E_NL[4,6] = 2*a_values[2,2]
E_NL[5,2] = 2*a_values[1,0]
E_NL[5,5] = 2*a_values[1,1]
E_NL[5,8] = 2*a_values[1,2]
return E_NL
%aimport geom_util
u=getUHat3DPlane(alpha1, alpha2, alpha3)
# u=getUHatU3Main(alpha1, alpha2, alpha3)
gradu=B*u
E_NL = E_NonLinear(gradu)*B
E_NL
%aimport geom_util
u=getUHatU3MainPlane(alpha1, alpha2, alpha3)
gradup=Grad_U_P*u
# e=E*gradup
# e
E_NLp = E_NonLinear(gradup)*gradup
simplify(E_NLp)
Explanation: Strain tensor
$$
\left( \begin{array}{c}
\varepsilon_{11} \\ \varepsilon_{22} \\ \varepsilon_{33} \\
2\varepsilon_{12} \\ 2\varepsilon_{13} \\ 2\varepsilon_{23}
\end{array} \right)
=
\left(E + E_{NL} \left( \nabla \vec{u} \right) \right) \cdot
\left( \begin{array}{c}
\nabla_1 u_1 \\ \nabla_2 u_1 \\ \nabla_3 u_1 \\
\nabla_1 u_2 \\ \nabla_2 u_2 \\ \nabla_3 u_2 \\
\nabla_1 u_3 \\ \nabla_2 u_3 \\ \nabla_3 u_3
\end{array} \right)
$$
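For context, a hedged reading of the code (not stated in the source): the matrices $E$ and $E_{NL}$ appear to encode the usual Green–Lagrange strain components written with covariant derivatives,
$$\varepsilon_{ij} = \frac{1}{2}\left(\nabla_i u_j + \nabla_j u_i\right) + \frac{1}{2}\, g^{kl}\, \nabla_i u_k\, \nabla_j u_l$$
with $E$ carrying the linear part and $E_{NL}(\nabla\vec{u})$ the quadratic part.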
End of explanation
%aimport geom_util
C_tensor = getIsotropicStiffnessTensor()
C = convertStiffnessTensorToMatrix(C_tensor)
C
StrainL.T*C*StrainL*H1
Explanation: Virtual work
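A hedged sketch of the assumed form behind the integrand StrainL.T*C*StrainL*H1: the virtual work of internal forces,
$$\delta W = \int_V \delta\boldsymbol{\varepsilon}^T \mathbf{C}\, \boldsymbol{\varepsilon}\, dV, \qquad dV = H_1 H_2 H_3\, d\alpha_1\, d\alpha_2\, d\alpha_3$$
with $H_2 = H_3 = 1$ here, which is why a single factor $H_1$ appears in the expression above.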
End of explanation
T=zeros(12,6)
T[0,0]=1
T[0,2]=alpha3
T[1,1]=1
T[1,3]=alpha3
T[3,2]=1
T[8,4]=1
T[9,5]=1
T
D_p_T = StrainL*T
simplify(D_p_T)
u = Function("u")
t = Function("theta")
w = Function("w")
u1=u(alpha1)+alpha3*t(alpha1)
u3=w(alpha1)
gu = zeros(12,1)
gu[0] = u1
gu[1] = u1.diff(alpha1)
gu[3] = u1.diff(alpha3)
gu[8] = u3
gu[9] = u3.diff(alpha1)
gradup=Grad_U_P*gu
# E_NLp = E_NonLinear(gradup)*gradup
# simplify(E_NLp)
# gradup=Grad_U_P*gu
# o20=(K*u(alpha1)-w(alpha1).diff(alpha1)+t(alpha1))/2
# o21=K*t(alpha1)
# O=1/2*o20*o20+alpha3*o20*o21-alpha3*K/2*o20*o20
# O=expand(O)
# O=collect(O,alpha3)
# simplify(O)
StrainNL = E_NonLinear(gradup)*gradup
StrainL*gu+simplify(StrainNL)
Explanation: Tymoshenko theory
$u_1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u\left( \alpha_1 \right)+\alpha_3\gamma \left( \alpha_1 \right) $
$u_2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u_3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=w\left( \alpha_1 \right) $
$$
\left( \begin{array}{c}
u_1 \\ \frac{\partial u_1}{\partial \alpha_1} \\ \frac{\partial u_1}{\partial \alpha_2} \\ \frac{\partial u_1}{\partial \alpha_3} \\
u_2 \\ \frac{\partial u_2}{\partial \alpha_1} \\ \frac{\partial u_2}{\partial \alpha_2} \\ \frac{\partial u_2}{\partial \alpha_3} \\
u_3 \\ \frac{\partial u_3}{\partial \alpha_1} \\ \frac{\partial u_3}{\partial \alpha_2} \\ \frac{\partial u_3}{\partial \alpha_3}
\end{array} \right)
= T \cdot
\left( \begin{array}{c}
u \\ \frac{\partial u}{\partial \alpha_1} \\ \gamma \\ \frac{\partial \gamma}{\partial \alpha_1} \\ w \\ \frac{\partial w}{\partial \alpha_1}
\end{array} \right)
$$
End of explanation
L=zeros(12,12)
h=Symbol('h')
p0=1/2-alpha3/h
p1=1/2+alpha3/h
p2=1-(2*alpha3/h)**2
L[0,0]=p0
L[0,2]=p1
L[0,4]=p2
L[1,1]=p0
L[1,3]=p1
L[1,5]=p2
L[3,0]=p0.diff(alpha3)
L[3,2]=p1.diff(alpha3)
L[3,4]=p2.diff(alpha3)
L[8,6]=p0
L[8,8]=p1
L[8,10]=p2
L[9,7]=p0
L[9,9]=p1
L[9,11]=p2
L[11,6]=p0.diff(alpha3)
L[11,8]=p1.diff(alpha3)
L[11,10]=p2.diff(alpha3)
L
D_p_L = StrainL*L
simplify(D_p_L)
p0_2=p0*p0
p01=p0*p1
p02=p0*p2
p1_2=p1*p1
p12=p1*p2
p2_2=p2*p2
p0_2i=integrate(p0_2, (alpha3, -h/2, h/2))
p01i=integrate(p01, (alpha3, -h/2, h/2))
p02i=integrate(p02, (alpha3, -h/2, h/2))
p1_2i=integrate(p1_2, (alpha3, -h/2, h/2))
p12i=integrate(p12, (alpha3, -h/2, h/2))
p2_2i=integrate(p2_2, (alpha3, -h/2, h/2))
# p0_2i = simplify(p0_2i)
# p01i = expand(simplify(p01i))
# p02i = expand(simplify(p02i))
# p1_2i = expand(simplify(p1_2i))
# p12i = expand(simplify(p12i))
# p2_2i = expand(simplify(p2_2i))
p0_2i
p01i
p02i
p1_2i
p12i
p2_2i
1/6
Ct=getOrthotropicStiffnessTensor()
C=convertStiffnessTensorToMatrix(Ct)
LC=zeros(6,9)
LC[0,0]=p0
LC[0,1]=p1
LC[0,2]=p2
LC[2,3]=p0
LC[2,4]=p1
LC[2,5]=p2
LC[4,6]=p0
LC[4,7]=p1
LC[4,8]=p2
e = LC.T*C*LC
integrate(e, (alpha3, -h/2, h/2))
Explanation: Square theory
$u^1 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{10}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{11}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{12}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$u^2 \left( \alpha_1, \alpha_2, \alpha_3 \right)=0 $
$u^3 \left( \alpha_1, \alpha_2, \alpha_3 \right)=u_{30}\left( \alpha_1 \right)p_0\left( \alpha_3 \right)+u_{31}\left( \alpha_1 \right)p_1\left( \alpha_3 \right)+u_{32}\left( \alpha_1 \right)p_2\left( \alpha_3 \right) $
$$
\left( \begin{array}{c}
u^1 \\ \frac{\partial u^1}{\partial \alpha_1} \\ \frac{\partial u^1}{\partial \alpha_2} \\ \frac{\partial u^1}{\partial \alpha_3} \\
u^2 \\ \frac{\partial u^2}{\partial \alpha_1} \\ \frac{\partial u^2}{\partial \alpha_2} \\ \frac{\partial u^2}{\partial \alpha_3} \\
u^3 \\ \frac{\partial u^3}{\partial \alpha_1} \\ \frac{\partial u^3}{\partial \alpha_2} \\ \frac{\partial u^3}{\partial \alpha_3}
\end{array} \right)
= L \cdot
\left( \begin{array}{c}
u_{10} \\ \frac{\partial u_{10}}{\partial \alpha_1} \\ u_{11} \\ \frac{\partial u_{11}}{\partial \alpha_1} \\ u_{12} \\ \frac{\partial u_{12}}{\partial \alpha_1} \\
u_{30} \\ \frac{\partial u_{30}}{\partial \alpha_1} \\ u_{31} \\ \frac{\partial u_{31}}{\partial \alpha_1} \\ u_{32} \\ \frac{\partial u_{32}}{\partial \alpha_1}
\end{array} \right)
$$
End of explanation
rho=Symbol('rho')
B_h=zeros(3,12)
B_h[0,0]=1
B_h[1,4]=1
B_h[2,8]=1
M=simplify(rho*P.T*B_h.T*G_up*B_h*P)
M
M_p = L.T*M*L
integrate(M_p, (alpha3, -h/2, h/2))
omega, t=symbols("\omega, t")
c=cos(omega*t)
c2=cos(omega*t)*cos(omega*t)
c3=cos(omega*t)*cos(omega*t)*cos(omega*t)
T=2*pi/omega
# omega*T/4
integrate(c, (t, 0, T/4))/T
Explanation: Mass matrix
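A hedged reading of the expression rho*P.T*B_h.T*G_up*B_h*P above (an interpretation, not taken from the source): it matches the consistent mass matrix that follows from the kinetic energy written with the metric,
$$T_{kin} = \frac{1}{2}\int_V \rho\, \dot{u}_i\, g^{ij}\, \dot{u}_j \, dV$$
assuming B_h*P maps the unknowns to the displacement components.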
End of explanation |
11,808 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework
Step1: Part 1
Step2: Part 3 | Python Code:
! curl https://raw.githubusercontent.com/mafudge/datasets/master/ist256/08-Lists/test-fudgemart-products.txt -o test-fudgemart-products.txt
! curl https://raw.githubusercontent.com/mafudge/datasets/master/ist256/08-Lists/fudgemart-products.txt -o fudgemart-products.txt
Explanation: Homework: The Fudgemart Products Catalog
The Problem
Fudgemart, a knockoff of a company with a similar name, has hired you to create a program to browse their product catalog.
Write an ipython interactive program that allows the user to select a product category from the drop-down and then displays all of the fudgemart products within that category. You can accomplish this any way you like and the only requirements are you must:
load each product from the fudgemart-products.txt file into a list.
build the list of product categories dynamically (you cannot hard-code the categories in)
print the product name and price for all products selected
use ipython interact to create a drop-down for the user interface.
FILE FORMAT:
the file fudgemart-products.txt has one row per product
each row is delimited by a | character.
there are three items in each row: category, product name, and price.
Example Row: Hardware|Ball Peen Hammer|15.99
Category = Hardware
Product = Ball Peen Hammer
Price = 15.99
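For illustration only (not the required solution), a minimal sketch of splitting one such row in Python:
```python
# Minimal sketch: parse one pipe-delimited row into its three fields.
row = "Hardware|Ball Peen Hammer|15.99"
category, product, price = row.strip().split('|')
print(category, product, float(price))
```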
HINTS:
Draw upon the lessons and examples in the lab and small group. We covered using interact with a dropdown, reading from files into lists, etc.
There is a sample file, test-fudgemart-products.txt which you can use to test your code and not have to deal with the number of rows in the actual file fudgemart-products.txt. Your code should work with either file. The test file has 3 products and 2 categories. Once it works with the test file, switch to the other file!
The unique challenge of this homework is creating the list of product categories. You can do this when you read the file or use the list of all products to create the categories.
Code to fetch data files
End of explanation
# Step 2: Write code here
Explanation: Part 1: Problem Analysis
Inputs:
TODO: Inputs
Outputs:
TODO: Outputs
Algorithm (Steps in Program):
```
TODO:Steps Here
```
Part 2: Code Solution
You may write your code in several cells, but place the complete, final working copy of your code solution within this single cell below. Only the code within this cell will be considered your solution. Any imports or user-defined functions should be copied into this cell.
End of explanation
# run this code to turn in your work!
from coursetools.submission import Submission
Submission().submit()
Explanation: Part 3: Questions
Explain the approach you used to build the product categories.
--== Double-Click and Write Your Answer Below This Line ==--
If you opened the fudgemart-products.txt and added a new product row at the end, would your program still run? Explain.
--== Double-Click and Write Your Answer Below This Line ==--
Did you write any user-defined functions? If so, why? If not, why not?
--== Double-Click and Write Your Answer Below This Line ==--
Part 4: Reflection
Reflect upon your experience completing this assignment. This should be a personal narrative, in your own voice, and cite specifics relevant to the activity as to help the grader understand how you arrived at the code you submitted. Things to consider touching upon: Elaborate on the process itself. Did your original problem analysis work as designed? How many iterations did you go through before you arrived at the solution? Where did you struggle along the way and how did you overcome it? What did you learn from completing the assignment? What do you need to work on to get better? What was most valuable and least valuable about this exercise? Do you have any suggestions for improvements?
To make a good reflection, you should journal your thoughts, questions and comments while you complete the exercise.
Keep your response to between 100 and 250 words.
--== Double-Click and Write Your Reflection Below Here ==--
End of explanation |
11,809 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Section3a - Dataset preparation
We will start by deciding which features we want to use in the machine learning prediction of the severity of the accident for each user.
We'll also do a final clean up of the features to remove bad entries.
The new dataframe with the target and wanted features will be entirely stored in a MySQL database to be used later on in the rest of the Section3.
The creation of the training and testing samples from the dataset stored in the MySQL database will be detailed at the end of this section. This last process creating these samples will be implemented in the file CreateTrainAndTestSamples.py to be called in the following part of Section3.
Dataset details
The target data is the severity column in the Users dataframe.
The features useful in the machine learning to predict the severity of the accident are
Step1: Target and Features from the Users dataframe
We start with the Users dataframe because each entry in our features dataframe will correspond to one road user involved in an accident and have the target.
Step2: We keep the original numbers of entries in this dataframe to know how much we lost overall during the preparation of the dataset.
Step3: A few different corrections are needed
Step4: Features from the Characteristics dataframe
Step5: The intersection type has values 0 which do not correspond to anything but they concern very few instances overall. We can drop them.
On the other hand some weather and collision type entries contain NaN. The categories 9 and 6 are for "other" respectively in weather and collision type. Since very few entries have NaN, we can put the NaN in this vague category.
Step6: Features from the Vehicles dataframe
Step7: Vehicle type is perfectly fine. The other 4 columns have a very small number of entries with NaNs and a large number of zeros representing the missing information or, in the case of the object-hit columns, the fact that no object, fixed or moving, was hit. So we can replace the NaNs with zeros.
Step8: Features from the Locations dataframe
Step9: For most columns we can again simply replace NaN by 0 which corresponds to an "unknown" category.
The columns "road width" and "installations" have respectively 35% and 90% of zeros so it is preferable to drop them.
Step10: Engineered features
From the existing features we can derive new features to improve the predicting power.
Weight differential
The relative size of the vehicles involved in an accident has a direct impact on the severity of the accident. A complementary piece of information would be the speed, from which we could deduce the momentum; however, we do not have any data on that.
When going through the dataset user by user, the vehicle information associated with each driver or passenger ('user type' = 1 or 2) corresponds to their own vehicle, therefore we have no information on the other vehicles in the accident. If the entry is for a pedestrian ('user type' = 3 or 4) then the associated vehicle is the vehicle that hit the pedestrian.
We will create a new column for the weight differential taking into account the two cases
Step11: Randomize entries
In order to facilitate the creation of test and training samples in the next section, we shuffle now the entries in the dataframe prior to storing them in the SQL database.
Step12: Remove irrelevant columns
We do not need the accident and vehicle IDs anymore since we have created the dataset and don't need to relate the rows with one another anymore, so we can remove them.
Step13: Correct type of categorical columns
Most of the categorical variables are encoded as integers however their columns in the dataframe have a float type.
This is potentially important to save disk space when we store the dataset but also to convert the categorical variables into binary columns later on.
Step14: Of all the columns containing floats, the age is the only one that can be stored as float.
Step15: Store features in database
We store the dataframe in the Features table of the database to avoid recreating the dataframe whenever we want to try a different machine learning algorithm.
First let's check how many events we lost while cleaning up the dataset.
Step16: Creation of the training and testing samples
This section will explain the code in CreateTrainAndTestSamples.py which will be used in the other parts of Section3 to create the training and testing samples out of the dataset stored in the MySQL database.
Step17: Imbalance of severity classes
The four classes of severity are very imbalanced, with class 2 (death) being significantly (and fortunately) smaller than the other classes
Step18: This could be a big problem for the training of the algorithms since they would predict Indemn and Lightly injured much more often and that would naturally increase the accuracy.
One solution is to create a training set that contains a balanced set of the 4 classes.
Step19: Create the training sample
Since we already shuffled the entries in the dataset prior to storing them in the SQL database, we can just use X entries to have a totally random sample for testing. However for the training sample we need to correct for the class imbalance first.
We will use part of the original dataset to create our training sample.
Step20: Now our training sample is totally balanced.
Convert categorical variables
In order to process the dataset through machine learning algorithms we must first convert each category into a binary variable. Pretty much all the columns are categories except for the age and the weight differential so we'll remove them from the list of columns that need to be converted.
Step21: Rescale the continuous variables
Many machine learning algorithms will perform better if all the variables have the same scale. At the moment the categorical variables take values 0 and 1. The weight differential range on the other hand is a few orders of magnitude larger.
Step22: Create the testing sample
We must convert the categorical variables and rescale the continuous variables using the same scaling factors as for the training sample. | Python Code:
import pandas as pd
import numpy as np
# Provides better color palettes
import seaborn as sns
from pandas import DataFrame,Series
import matplotlib as mpl
import matplotlib.pyplot as plt
# Command to display the plots in the iPython Notebook
%matplotlib inline
import matplotlib.patches as mpatches
mpl.style.use('seaborn-whitegrid')
plt.style.use('seaborn-talk')
# Extract the list of colors from this style for later use
cycl = mpl.rcParams['axes.prop_cycle']
colors = cycl.by_key()['color']
from CSVtoSQLconverter import load_sql_engine
sqlEngine = load_sql_engine()
Explanation: Section3a - Dataset preparation
We will start by deciding which features we want to use in the machine learning prediction of the severity of the accident for each user.
We'll also do a final clean up of the features to remove bad entries.
The new dataframe with the target and wanted features will be entirely stored in a MySQL database to be used later on in the rest of the Section3.
The creation of the training and testing samples from the dataset stored in the MySQL database will be detailed at the end of this section. This last process creating these samples will be implemented in the file CreateTrainAndTestSamples.py to be called in the following part of Section3.
Dataset details
The target data is the severity column in the Users dataframe.
The features useful in the machine learning to predict the severity of the accident are:
- From Users dataframe
- Location in vehicle
- User type
- Sexe
- Age
- Journey type
- Safety gear type
- Safety gear worn
- Pedestrian location (Too many entries set to "not recorded" => not used)
- Pedestrian action (Too many entries set to "not recorded" => not used)
- From characteristics dataframe
- Luminosity
- In city
- Intersect type
- Weather
- Collision type
- From Locations dataframe
- Road type
- Traffic mode
- Nb Lanes
- Road profil
- Road surface
- Road width (Too many entries set to "not recorded" => not used)
- Installations (Too many entries set to "not recorded" => not used)
- Location
- From Vehicles dataframe
- Vehicle type
- Fixed object hit
- Moving obj hit
- Impact location
- Maneuver
- Engineered variables
- Max weight differential (Between the vehicle of the user and the heaviest vehicle in the accident, or the weight of the pedestrian and the weight of the vehicle that hit them)
End of explanation
users_df = pd.read_sql_query('SELECT * FROM safer_roads.users',
sqlEngine)
users_df.head()
Explanation: Target and Features from the Users dataframe
We start with the Users dataframe because each entry in our features dataframe will correspond to one road user involved in an accident and have the target.
End of explanation
original_nb_entries = users_df.shape[0]
pd.options.display.float_format = '{:20,.2f}'.format
def check_columns(dataf,columns):
zeroes_sr = (dataf[columns] == 0).astype(int).sum(axis=0)
neg_sr = (dataf[columns] < 0).astype(int).sum(axis=0)
nans_sr = dataf[columns].isnull().sum()
nbent = dataf.shape[0]
rows = []
rows.append(DataFrame(zeroes_sr,columns=['nb zeroes']))
rows.append(DataFrame((100*zeroes_sr/nbent),columns=['% zeroes']))
rows.append(DataFrame(neg_sr,columns=['nb neg. vals']))
rows.append(DataFrame((100*neg_sr/nbent),columns=['% neg. vals']))
rows.append(DataFrame(nans_sr,columns=['nb nans']))
rows.append(DataFrame((100*nans_sr/nbent),columns=['% nans']))
checks_df = pd.concat(rows,axis=1)
print(' - Total number of entries: {}\n'.format(dataf.shape[0]))
print('List of NaN, zero and negative entries:\n{}\n\n'.format(checks_df))
check_columns(users_df, ['location in vehicle', 'user type', 'age', 'sex',
'journey type', 'safety gear worn','safety gear type',
'pedestrian action', 'pedestrian location','severity'])
Explanation: We keep the original numbers of entries in this dataframe to know how much we lost overall during the preparation of the dataset.
End of explanation
# 1. Replace NaN with zeros
users_df['location in vehicle'].fillna(0, inplace=True)
# 2. Remove rows with NaN in age column
users_df.dropna(subset=['age'], inplace=True)
# 3. Replace NaN with "other" or "unknown" categories
users_df['journey type'].fillna(9, inplace=True)
users_df['safety gear type'].fillna(9, inplace=True)
users_df['safety gear worn'].fillna(3, inplace=True)
users_df.replace(to_replace={'journey type':{0:9}}, inplace=True)
check_columns(users_df, ['location in vehicle', 'user type', 'age', 'sex',
'journey type', 'safety gear worn','safety gear type',
'severity'])
feat_target_df = users_df[['accident id','vehicle id','severity','location in vehicle', 'user type',
'age', 'sex', 'journey type', 'safety gear worn','safety gear type']]
feat_target_df.head()
Explanation: A few different corrections are needed:
1. For "location in vehicle" we can replace the NaN with zeros as a consistent place holder to specify that this wasn't recorded.
1. The entries with NaN values in the "age" column can be safely removed since they represent very few entries. The zeroes in that column should be babies less than one year old.
1. The "journey type", "safety gear worn", and "safety gear type" have a category "unknown" or "other" which we can use to replace the NaN entries. We can also use this category for the entries with zero in journey type.
1. The "pedestrian action" and "pedestrian location" are mostly filled with zeroes indicating that they were not recorded therefore we won't use them as features.
End of explanation
charact_df = pd.read_sql_query('SELECT * FROM safer_roads.characteristics',
sqlEngine)
charact_df.head()
wanted_cols = ['luminosity', 'in city', 'intersect type', 'weather', 'collision type']
check_columns(charact_df, wanted_cols)
Explanation: Features from the Characteristics dataframe
End of explanation
charact_df['weather'].fillna(9,inplace=True)
charact_df['collision type'].fillna(6,inplace=True)
charact_df = charact_df[charact_df['intersect type'] != 0]
check_columns(charact_df, wanted_cols)
wanted_cols.append('accident id')
feat_target_df = feat_target_df.merge(charact_df[wanted_cols],
on=['accident id'],how='inner')
print(' -> Number of entries in the dataset: {}\n'.format(feat_target_df.shape[0]))
feat_target_df.head()
Explanation: The intersection type has values 0 which do not correspond to anything but they concern very few instances overall. We can drop them.
On the other hand some weather and collision type entries contain NaN. The categories 9 and 6 are for "other" respectively in weather and collision type. Since very few entries have NaN, we can put the NaN in this vague category.
End of explanation
vehicles_df = pd.read_sql_query('SELECT * FROM safer_roads.vehicles',
sqlEngine)
vehicles_df.head()
wanted_cols = ['vehicle type', 'fixed obj hit', 'moving obj hit',
'impact location', 'maneuver']
check_columns(vehicles_df, wanted_cols)
Explanation: Features from the Vehicles dataframe
End of explanation
for col in wanted_cols[1:]:
vehicles_df[col].fillna(0,inplace=True)
wanted_cols.extend(['accident id','vehicle id'])
feat_target_df = feat_target_df.merge(vehicles_df[wanted_cols],
on=['accident id','vehicle id'],how='inner')
feat_target_df.head()
Explanation: Vehicle type is perfectly fine. The other 4 columns have a very small number of entries with NaNs and a large number of zeros representing the missing information or, in the case of the object-hit columns, the fact that no object, fixed or moving, was hit. So we can replace the NaNs with zeros.
End of explanation
locations_df = pd.read_sql_query('SELECT * FROM safer_roads.locations',
sqlEngine)
locations_df.head()
wanted_cols = ['road type', 'traffic mode', 'nb lanes',
'road profil', 'road alignment','road surface',
'road width', 'installations', 'location']
check_columns(locations_df, wanted_cols)
Explanation: Features from the Locations dataframe
End of explanation
wanted_cols.remove('road width')
wanted_cols.remove('installations')
for col in wanted_cols[1:]:
locations_df[col].fillna(0,inplace=True)
wanted_cols.extend(['accident id'])
feat_target_df = feat_target_df.merge(locations_df[wanted_cols],
on=['accident id'],how='inner')
feat_target_df.head()
Explanation: For most columns we can again simply replace NaN by 0 which corresponds to an "unknown" category.
The columns "road width" and "installations" have respectively 35% and 90% of zeros so it is preferable to drop them.
End of explanation
from Mapper import Vehicle_Weights
# First we map all the vehicle types to the average weight
feat_target_df['weight diff'] = feat_target_df['vehicle type'].map(Vehicle_Weights)
# Then calculate the differential for drivers and passengers
mask = feat_target_df['user type'].isin([1,2])
feat_target_df.loc[mask,'weight diff'] = feat_target_df.groupby('accident id')['weight diff']\
.transform(lambda x: x - x.max())
Explanation: Engineered features
From the existing features we can derive new features to improve the predicting power.
Weight differential
The relative size of the vehicles involved in an accident has a direct impact on the severity of the accident. A complementary piece of information would be the speed, from which we could deduce the momentum; however, we do not have any data on that.
When going through the dataset user by user, the vehicle information associated with each driver or passenger ('user type' = 1 or 2) corresponds to their own vehicle, therefore we have no information on the other vehicles in the accident. If the entry is for a pedestrian ('user type' = 3 or 4) then the associated vehicle is the vehicle that hit the pedestrian.
We will create a new column for the weight differential taking into account the two cases:
- for passengers and drivers the difference will be between their vehicle and the heaviest vehicle involved in the accident
- for pedestrians we will simply take the weight of the vehicle that hit them
The mapping from 'vehicle type' to a crude estimate of the average vehicle weight in kilograms is stored in the Mapper.py script.
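To make the groupby/transform step above concrete, here is a small hypothetical example (the accident IDs and weights are made up for illustration):
```python
# Hypothetical illustration of the 'weight diff' logic on a toy frame.
toy = DataFrame({'accident id': [1, 1, 2, 2],
                 'weight diff': [1300.0, 19000.0, 1300.0, 1500.0]})
# Within each accident, subtract the heaviest vehicle weight: the heaviest vehicle
# gets 0, lighter vehicles get negative values.
print(toy.groupby('accident id')['weight diff'].transform(lambda x: x - x.max()))
```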
End of explanation
feat_target_df.head()
feat_target_df = feat_target_df.reindex(np.random.permutation(feat_target_df.index))
feat_target_df.head()
Explanation: Randomize entries
In order to facilitate the creation of test and training samples in the next section, we shuffle now the entries in the dataframe prior to storing them in the SQL database.
End of explanation
feat_target_df.drop(['accident id','vehicle id'],axis=1,inplace=True)
feat_target_df.head()
Explanation: Remove irrelevant columns
We do not need the accident and vehicle IDs anymore since we have created the dataset and don't need to relate the rows with one another anymore, so we can remove them.
End of explanation
# List the columns stored as floats
float_col = feat_target_df.loc[:,feat_target_df.dtypes == type(1.0)].columns
float_col
Explanation: Correct type of categorical columns
Most of the categorical variables are encoded as integers however their columns in the dataframe have a float type.
This is potentially important to save disk space when we store the dataset but also to convert the categorical variables into binary columns later on.
End of explanation
float_col = float_col.drop('age')
for col in float_col:
feat_target_df[col] = feat_target_df[col].astype(int)
Explanation: Of all the columns containing floats, the age is the only one that can be stored as float.
End of explanation
nb_diff = feat_target_df.shape[0] - original_nb_entries
print(' We lost {} events during clean up, which represents {:.2f}% of the data.'.format(
nb_diff,(100.*nb_diff/original_nb_entries)))
# chunksize is needed for this big dataframe, otherwise the transfer will fail
# (unless you change your settings in MariaDB)
feat_target_df.to_sql(name='ml_dataset', con=sqlEngine, if_exists = 'replace',
index=False, chunksize = 100)
Explanation: Store features in database
We store the dataframe in the Features table of the database to avoid recreating the dataframe whenever we want to try a different machine learning algorithm.
First let's check how many events we lost while cleaning up the dataset.
End of explanation
# mldata_df = pd.read_sql_query('SELECT * FROM ml_dataset',sqlEngine)
mldata_df = feat_target_df
mldata_df.head()
Explanation: Creation of the training and testing samples
This section will explain the code in CreateTrainAndTestSamples.py which will be used in the other parts of Section3 to create the training and testing samples out of the dataset stored in the MySQL database.
End of explanation
def plot_severity(df):
severity_sr = df['severity'].map({1:'Indemn',2:'Dead',3:'Hospitalized injured',4:'Lightly injured'})
sns.countplot(x='severity', data=DataFrame(severity_sr),
order=['Indemn','Lightly injured','Hospitalized injured','Dead']);
plot_severity(mldata_df)
Explanation: Imbalance of severity classes
The four classes of severity are very imbalanced, with class 2 (death) being significantly (and fortunately) smaller than the other classes:
End of explanation
RatioDead = 100. * mldata_df[mldata_df['severity'] == 2].shape[0] / mldata_df.shape[0]
print('The number of road users who died in their accident represents about {:.1f}% of the total number of recorded users'.format(RatioDead))
Explanation: This could be a big problem for the training of the algorithms since they would predict Indemn and Lightly injured much more often and that would naturally increase the accuracy.
One solution is to create a training set that contains a balanced set of the 4 classes.
End of explanation
raw_training_df = mldata_df.head(100000)
n_sev2 = raw_training_df[raw_training_df.severity==2].shape[0]
print("We have {} entries with severity 2. We need the same amount for the other classes.".format(n_sev2))
list_severities = []
for sev_id in range(1,5):
list_severities.append(raw_training_df[raw_training_df.severity==sev_id].head(n_sev2))
training_df = pd.concat(list_severities)
plot_severity(training_df)
Explanation: Create the training sample
Since we already shuffled the entries in the dataset prior to storing them in the SQL database, we can just use X entries to have a totally random sample for testing. However for the training sample we need to correct for the class imbalance first.
We will use part of the original dataset to create our training sample.
End of explanation
all_col = training_df.columns
categ_col = all_col.drop(['severity','age','weight diff'])
training_df = pd.get_dummies(training_df,prefix=categ_col, columns=categ_col)
training_df.head()
Explanation: Now our training sample is totally balanced.
Convert categorical variables
In order to process the dataset through machine learning algorithms we must first convert each category into a binary variable. Pretty much all the columns are categories except for the age and the weight differential so we'll remove them from the list of columns that need to be converted.
End of explanation
training_df[['age','weight diff']].describe().loc[['min','max']]
max_age = training_df['age'].max()
training_df['age'] = training_df['age'] / max_age
max_weight_diff = training_df['weight diff'].max()
min_weight_diff = training_df['weight diff'].min()
training_df['weight diff'] = (training_df['weight diff'] - min_weight_diff) / (max_weight_diff - min_weight_diff)
training_df[['age','weight diff']].describe().loc[['min','max']]
Explanation: Rescale the continuous variables
Many machine learning algorithms will perform better if all the variables have the same scale. At the moment the categorical variables take values 0 and 1. The weight differential range on the other hand is a few orders of magnitude larger.
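The manual min-max scaling in the cell above could equivalently be done with scikit-learn (a sketch, assuming scikit-learn is available in the environment; it is an alternative to, not an addition on top of, the manual scaling):
```python
# Equivalent min-max scaling with scikit-learn (sketch).
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
training_df[['age', 'weight diff']] = scaler.fit_transform(training_df[['age', 'weight diff']])
```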
End of explanation
testing_df = mldata_df.head(120000).tail(20000)
all_col = testing_df.columns
categ_col = all_col.drop(['severity','age','weight diff'])
testing_df = pd.get_dummies(testing_df,prefix=categ_col, columns=categ_col)
testing_df.head()
testing_df['age'] = testing_df['age'] / max_age
testing_df['weight diff'] = (testing_df['weight diff'] - min_weight_diff) / (max_weight_diff - min_weight_diff)
testing_df[['age','weight diff']].describe().loc[['min','max']]
Explanation: Create the testing sample
We must convert the categorical variables and rescale the continuous variables using the same scaling factors as for the training sample.
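One practical caveat not covered above: pd.get_dummies on the test slice can produce a different set of columns than on the training slice if some categories are missing from it. A common fix (a sketch) is to align the test columns with the training ones:
```python
# Align the dummy-encoded test columns with the training columns (sketch);
# categories absent from the test slice become all-zero columns.
testing_df = testing_df.reindex(columns=training_df.columns, fill_value=0)
```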
End of explanation |
11,810 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Let's try the Cloud Natural Language API!
The Cloud Natural Language API gives developers natural language understanding technologies such as entity analysis, sentiment analysis, content classification, and syntax analysis.
We use the API Discovery Service to discover the Cloud Natural Language API. The Cloud Natural Language REST API specification is explained here.
Step2: Define the input to be sent to the Cloud Natural Language API.
Step3: Entity analysis
Entity analysis inspects the given text for known entities (proper nouns such as public figures and landmarks) and returns information about those entities. The REST API specification for entity analysis is explained here.
Step4: Sentiment analysis
Sentiment analysis inspects the given text and identifies the prevailing emotional attitude behind it. Specifically, it determines whether the writer's attitude is positive, negative, or neutral. The REST API specification for sentiment analysis is explained here.
Step5: Content classification (English only)
Content classification analyzes a document and returns a list of content categories that apply to the text found in the document. To classify the content of a document, call the classifyText method.
Step6: Syntax analysis
Syntax analysis breaks the given text into a series of sentences and tokens (usually words) and provides linguistic information about those tokens. Most Natural Language API methods analyze what the given text is about, whereas the analyzeSyntax method inspects the structure of the language itself.
Step7: Exercises
1. Try extracting public figures and landmarks with entity analysis.
2. Try running sentiment analysis on a variety of sentences.
Let's try the Cloud Translation API!
The Cloud Translation API dynamically translates text between thousands of language pairs. With the Cloud Translation API, you can programmatically integrate Google Translate into your websites and applications.
We use the API Discovery Service to discover the Cloud Translation API. The Cloud Translation REST API specification is explained here.
Step8: Define the input to be sent to the Cloud Translation API.
Step9: Translating text
A translations request returns the input text translated into the specified language. The input text can be plain text or HTML. The Cloud Translation API does not translate any HTML tags in the input; it translates only the text between the tags. The output retains the (untranslated) HTML tags, inserted into the translated text as well as the differences between the source and target languages allow. The order of HTML tags in the output may differ from the input because translation changes the word order.
Step10: Language detection
A detections request identifies the language of the input text. This can be used, for example, to automatically translate text whose language is unknown.
import getpass
APIKEY = getpass.getpass()
Explanation: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/gcp-getting-started-lab-jp/blob/master/machine_learning/cloud_ai_building_blocks/language_ja.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
Prerequisites
Create a GCP project.
Enable billing.
Create an API Key.
Enable the Cloud Natural Language API and the Cloud Translation API.
Enter your Google Cloud API credentials
To call the Google Cloud APIs through their REST interfaces, we use an API Key. Copy your API Key from the Google Cloud Console.
End of explanation
from googleapiclient.discovery import build
nl_service = build('language', 'v1beta2', developerKey=APIKEY)
Explanation: Let's try the Cloud Natural Language API!
The Cloud Natural Language API gives developers natural language understanding technologies such as entity analysis, sentiment analysis, content classification, and syntax analysis.
We use the API Discovery Service to discover the Cloud Natural Language API. The Cloud Natural Language REST API specification is explained here.
End of explanation
import textwrap
source_language = "en" #@param ["en", "ja"] {type: "string"}
source_sentence = "Classifying Opinion and Editorials can be time-consuming and difficult work for any data science team, but Cloud Natural Language was able to instantly identify clear topics with a high-level of confidence. This tool has saved me weeks, if not months, of work to achieve a level of accuracy that may not have been possible with our in-house resources." #@param {type:"string"}
document_type = "PLAIN_TEXT" #@param["PLAIN_TEXT", "HTML"] {type: "string"}
encoding_type = "UTF8" #@param["UTF8", "UTF16", "UTF32"] {type: "string"}
textwrap.wrap(source_sentence)
Explanation: Define the inputs for the Cloud Natural Language API.
End of explanation
response = nl_service.documents().analyzeEntities(
body={
'document': {
'type': document_type,
'content': source_sentence,
'language': source_language,
},
'encodingType': encoding_type,
}
).execute()
# Below code is extracting only proper nouns from a response message. If you
# have interest on what information is exactly contained in a response message,
# please try to explore it :D
for entity in response['entities']:
for mention in entity['mentions']:
if mention['type'] == 'PROPER':
print(mention, entity['metadata'])
Explanation: Entity analysis
Entity analysis inspects the given text for known entities (proper nouns such as public figures and landmarks) and returns information about those entities. The REST API specification for entity analysis is documented here.
End of explanation
response = nl_service.documents().analyzeSentiment(
body={
'document': {
'type': document_type,
'content': source_sentence,
'language': source_language,
},
'encodingType': encoding_type,
}
).execute()
# This shows you document-level sentiment.
response['documentSentiment']
# This shows you sentence-level sentiment.
for sentence in response['sentences']:
print(sentence['sentiment'], sentence['text']['content'])
Explanation: Sentiment analysis
Sentiment analysis inspects the given text and determines the prevailing emotional attitude behind it, i.e. whether the writer's attitude is positive, negative, or neutral. The REST API specification for sentiment analysis is documented here.
End of explanation
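# Illustrative extra (not part of the original notebook): turn the document-level
# sentiment returned above into a rough positive/negative/neutral label. The 0.25
# cut-off is an arbitrary assumption chosen only for demonstration purposes.
doc_sentiment = response['documentSentiment']
if doc_sentiment['score'] > 0.25:
    overall_label = 'positive'
elif doc_sentiment['score'] < -0.25:
    overall_label = 'negative'
else:
    overall_label = 'neutral'
print(overall_label, doc_sentiment['score'], doc_sentiment['magnitude'])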
response = nl_service.documents().classifyText(
body={
'document': {
'type': document_type,
'content': source_sentence,
'language': source_language,
},
}
).execute()
response['categories']
Explanation: Content classification (English only)
Content classification analyzes a document and returns a list of content categories that apply to the text found in the document. To classify the content of a document, call the classifyText method.
End of explanation
response = nl_service.documents().analyzeSyntax(
body={
'document': {
'type': document_type,
'content': source_sentence,
'language': source_language,
},
'encodingType': encoding_type,
}
).execute()
# This shows you output of syntax analysis. To leverage this output, you may
# need domain knowledge on natural language analysis.
response['tokens']
Explanation: Syntax analysis
Syntax analysis breaks the given text into a series of sentences and tokens (usually words) and provides linguistic information about those tokens. Most Natural Language API methods analyze what a given text is about, whereas the analyzeSyntax method inspects the structure of the language itself.
End of explanation
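# Illustrative extra (not from the original notebook): print just the text and the
# part-of-speech tag of each token returned by the analyzeSyntax call above.
for token in response['tokens']:
    print(token['text']['content'], token['partOfSpeech']['tag'])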
from googleapiclient.discovery import build
translate_service = build('translate', 'v2', developerKey=APIKEY)
Explanation: Exercises
1. Use entity analysis to extract public figures and landmarks.
2. Run sentiment analysis on a variety of sentences.
Let's try the Cloud Translation API!
The Cloud Translation API dynamically translates text between thousands of language pairs. With the Cloud Translation API you can programmatically integrate Google Translate into your website or application.
Use the API Discovery Service to discover the Cloud Translation API. The Cloud Translation REST API specification is documented here.
End of explanation
import textwrap
source_language = "en" #@param ["en", "ja"] {type: "string"}
target_language = "ja" #@param ["en", "ja"] {type: "string"}
source_sentence = "Classifying Opinion and Editorials can be time-consuming and difficult work for any data science team, but Cloud Natural Language was able to instantly identify clear topics with a high-level of confidence. This tool has saved me weeks, if not months, of work to achieve a level of accuracy that may not have been possible with our in-house resources." #@param {type:"string"}
textwrap.wrap(source_sentence, width=50)
Explanation: Define the inputs for the Cloud Translation API.
End of explanation
# Note that you can change Translation model by changing 'model' parameter.
response = translate_service.translations().list(
source=source_language,
target=target_language,
q=source_sentence,
model='nmt',
format='html').execute()
text = response['translations'][0]['translatedText']
textwrap.wrap(text)
Explanation: Translating text
A translations request returns the input text translated into the specified language. The input can be plain text or HTML. The Cloud Translation API does not translate any HTML tags in the input, only the text between the tags; the output keeps the (untranslated) HTML tags, inserted at suitable positions in the translated text as far as the differences between source and target language allow, so the order of the tags in the output may differ from the input because translation changes word order.
End of explanation
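# Illustrative extra (an added sketch, not from the original notebook): translate the
# same sentence into several target languages with the same translations request.
for lang in ['ja', 'de', 'fr']:
    r = translate_service.translations().list(
        source=source_language, target=lang, q=source_sentence, format='text').execute()
    print(lang, r['translations'][0]['translatedText'][:60])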
response = translate_service.detections().list(
q=source_sentence
).execute()
response['detections']
Explanation: Language detection
A detections request identifies the language of the input text. This is useful, for example, when you want to automatically translate text whose language is unknown.
End of explanation |
11,811 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook shows how to use the output from VASP DFPT calculation and the phonopy interface to plot the phonon bandstructure and density of states.
Requires
Step1: Set the structure
Step2: Result from VASP DFPT calculations using the supercell structure
Step3: Initialize phonopy and set the force constants obtained from VASP
Step4: Define the paths for plotting the bandstructure and set them in phonopy
Step5: Set the mesh in reciprocal space and plot DOS | Python Code:
import os
import numpy as np
import pymatgen as pmg
from pymatgen.io.vasp.outputs import Vasprun
from phonopy import Phonopy
from phonopy.structure.atoms import Atoms as PhonopyAtoms
%matplotlib inline
Explanation: This notebook shows how to use the output from VASP DFPT calculation and the phonopy interface to plot the phonon bandstructure and density of states.
Requires: phonopy package (pip install phonopy)
Author: Kiran Mathew
End of explanation
Si_primitive = PhonopyAtoms(symbols=['Si'] * 2,
scaled_positions=[(0, 0, 0), (0.75, 0.5, 0.75)],
cell=[[3.867422 ,0.000000, 0.000000],
[1.933711, 3.349287, 0.000000],
[-0.000000, -2.232856, 3.157737]])
# supercell size
scell = [[2,0,0],[0,2,0],[0,0,2]]
Explanation: Set the structure
End of explanation
vrun = Vasprun(os.path.join(os.path.dirname(pmg.__file__), "..", 'test_files', "vasprun.xml.dfpt.phonon"))
Explanation: Result from VASP DFPT calculations using the supercell structure
End of explanation
phonon = Phonopy(Si_primitive, scell)
# negative sign to ensure consistency with phonopy convention
phonon.set_force_constants(-vrun.force_constants)
Explanation: Initialize phonopy and set the force constants obtained from VASP
End of explanation
bands = []
# path 1
q_start = np.array([0.5, 0.5, 0.0])
q_end = np.array([0.0, 0.0, 0.0])
band = []
for i in range(51):
band.append(q_start + (q_end - q_start) / 50 * i)
bands.append(band)
# path 2
q_start = np.array([0.0, 0.0, 0.0])
q_end = np.array([0.5, 0.0, 0.0])
band = []
for i in range(51):
band.append(q_start + (q_end - q_start) / 50 * i)
bands.append(band)
phonon.set_band_structure(bands)
phonon.plot_band_structure().show()
Explanation: Define the paths for plotting the bandstructure and set them in phonopy
End of explanation
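# Optional helper (an illustrative sketch, not part of the original notebook): the two
# loops above can be factored into a small function that builds a linearly
# interpolated q-point path between two points.
def make_qpath(q_start, q_end, npoints=51):
    q_start, q_end = np.array(q_start), np.array(q_end)
    return [q_start + (q_end - q_start) / (npoints - 1) * i for i in range(npoints)]
# e.g. bands = [make_qpath([0.5, 0.5, 0.0], [0.0, 0.0, 0.0]),
#               make_qpath([0.0, 0.0, 0.0], [0.5, 0.0, 0.0])]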
mesh = [31, 31, 31]
phonon.set_mesh(mesh)
phonon.set_total_DOS()
phonon.plot_total_DOS().show()
Explanation: Set the mesh in reciprocal space and plot DOS
End of explanation |
11,812 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Display Exercise 1
Imports
Put any imports needed to display rich output in the following cell
Step1: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure to set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
Step2: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate. | Python Code:
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the import statements
Explanation: Display Exercise 1
Imports
Put any imports needed to display rich output in the following cell:
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this to grade the image display
Explanation: Basic rich display
Find a Physics related image on the internet and display it in this notebook using the Image object.
Load it using the url argument to Image (don't upload the image to this server).
Make sure to set the embed flag so the image is embedded in the notebook data.
Set the width and height to 600px.
End of explanation
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this here to grade the quark table
Explanation: Use the HTML object to display HTML in the notebook that reproduces the table of Quarks on this page. This will require you to learn about how to create HTML tables and then pass that to the HTML object for display. Don't worry about styling and formatting the table, but you should use LaTeX where appropriate.
End of explanation |
11,813 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Вопросы
Назовите несколько видов баз данных и опишите, в чем их суть.
Что такое транзакция в терминах баз данных?
Что делает следующий запрос?
SQL
SELECT * FROM employees, managers WHERE employees.salary > 100000 AND employees.manager_id = managers.id
Что такое миграция, если своими словами?
В чем преимущества NoSQL баз перед реляционными, если грубо?
Для чего нужен Elasticsearch?
Фортран и процессоры
SSE, SSE2, SSE3 ...
Векторные операции
Суровые дяди-ученые с 80 годов пишут библиотеки, которые реализуют основные операции линейной алгебры
BLAS, LAPACK, CUDA
Матричные данные
Matlab (Octave), R, Julia, Wolfram, ..., Numpy
Numpy
http
Step1: Вопрос
Step2: Упражнение
Прочитать файл USlocalopendataportals.csv в Pandas.
Сколько различных значений в колонке Location?
Сколько порталов принадлежит государству?
Выбрать все записи только для Location, начинающихся с “New York” и “Washington D.C.”.
Какой сайт встречается в этих данных дважды?
Как это показать кодом?
“Починить” колонку Population и посчитать среднее население для каждой локации.
Визуализация
Matplotlib (http
Step3: Упражнения с Matplotlib
По датафрейму, построенному в последнем пункте предыдущего задания, построить график с горизонтальными столбиками
Взять случайную выборку (скажем, 10) по локациям и населению из прошлых упражнений. С помощью объекта pyplot построить круговую диаграмму по ним.
Подсказка - http | Python Code:
import numpy as np
np.zeros(10)  # a vector of 10 zeros
np.arange(9).reshape(3,3)  # a 3x3 matrix holding the numbers 0 through 8
m = np.eye(3)  # identity matrix
m.shape  # dimensions of the matrix
a = np.random.random((3,3,3))  # a 3x3x3 three-dimensional array with random values
# A numpy array is multi-dimensional and lets you take or modify a slice along each dimension
a[2, 1:3, 0:3:2]  # indexing and slicing each dimension (indices kept within the 3x3x3 shape)
Explanation: Questions
Name a few kinds of databases and describe what each one is about.
What is a transaction in database terms?
What does the following query do?
SQL
SELECT * FROM employees, managers WHERE employees.salary > 100000 AND employees.manager_id = managers.id
What is a migration, in your own words?
Roughly speaking, what are the advantages of NoSQL databases over relational ones?
What is Elasticsearch for?
Fortran and CPUs
SSE, SSE2, SSE3 ...
Vector operations
Since the 1980s, serious scientists have been writing libraries that implement the core operations of linear algebra
BLAS, LAPACK, CUDA
Matrix-oriented tools
Matlab (Octave), R, Julia, Wolfram, ..., Numpy
Numpy
http://www.numpy.org/
pip install numpy, or just grab Anaconda
http://www.labri.fr/perso/nrougier/teaching/numpy.100/
https://cs231n.github.io/python-numpy-tutorial/
Numpy examples
End of explanation
import pandas as pd
# чтение CSV (крутые параметры: names, chunksize, dtype)
df = pd.read_csv("USlocalopendataportals.csv")
df.columns # названия колонок
df.head(15) # просмотр верхних 15 строчек
df["column"].apply(lambda c: c / 100.0) # применение функции к колонке
df["column"].str.%строковая функция% # работа со строковыми колонками
df["column"].astype(np.float32) # привести колонку к нужному типу
# а как сделать через apply?
df.groupby("column").mean() # операция агрегации
# выбор нескольких колонок
df[["Column 1", "Column 2"]]
# создать новую колонку
df["Сolumn"] = a # массив подходящего размера
# переименование колонки
df.rename(
columns={"Oldname": "Newname"},
inplace=True
)
# объединение нескольких условий выборки
df[df.col1 > 10 & df.col2 < 100] # можно еще | для OR
Explanation: Question: how would you build a “checkerboard” of zeros and ones?
Exercises
Suppose we have two 2-D matrices of shapes 5x2 and 3x2 (fill them with random numbers). Compute the product of these matrices. Do we need to transpose the second one?
Compute the Pearson correlation coefficient between the first two columns of the first matrix. Which numpy function helps here?
Pandas
http://pandas.pydata.org/
pip install pandas
the core concepts are the Dataframe and the Series
Perks
A familiar interface for R users (for old-school analysts)
Seamless integration with Numpy and Matplotlib
Easy to add new columns (“features”)
Convenient tools for filling gaps in the data
End of explanation
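# Illustrative answers to the numpy questions above (an added sketch, not part of the
# original lecture notes).
board = np.indices((8, 8)).sum(axis=0) % 2          # 8x8 "checkerboard" of zeros and ones
A = np.random.random((5, 2))
B = np.random.random((3, 2))
product = A.dot(B.T)                                # transpose B so the shapes (5,2)x(2,3) align
pearson = np.corrcoef(A[:, 0], A[:, 1])[0, 1]       # Pearson correlation of the first two columns
print(board.shape, product.shape, pearson)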
# интеграция в IPython
%matplotlib inline
# основной объект
from matplotlib import pyplot as plt
# столбиковая диаграмма прямо из Pandas
df["Column"].plot(kind="bar")
Explanation: Exercise
Read the file USlocalopendataportals.csv into Pandas.
How many distinct values are there in the Location column?
How many portals are owned by the government?
Select only the records whose Location starts with “New York” or “Washington D.C.”.
Which website appears twice in these data?
How would you show that with code?
“Fix” the Population column and compute the average population for each location.
Visualization
Matplotlib (http://matplotlib.org/)
Seaborn (https://www.stanford.edu/~mwaskom/software/seaborn/ )
ggplot (http://ggplot.yhathq.com/ )
D3.js (https://d3js.org/ )
Bokeh (http://bokeh.pydata.org/en/latest/ )
Plotly (https://plot.ly/ )
Pygal (http://pygal.org/ )
Grafana (https://grafana.com/ )
Kibana (https://www.elastic.co/products/kibana )
End of explanation
# http://scikit-learn.org/stable/auto_examples/linear_model/plot_ols.html
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets, linear_model
from sklearn.metrics import mean_squared_error, r2_score
# Load the diabetes dataset
diabetes = datasets.load_diabetes()
# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis, 2]
# Split the data into training/testing sets
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]
# Create linear regression object
regr = linear_model.LinearRegression()
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
# Make predictions using the testing set
diabetes_y_pred = regr.predict(diabetes_X_test)
# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean squared error
print("Mean squared error: %.2f"
% mean_squared_error(diabetes_y_test, diabetes_y_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(diabetes_y_test, diabetes_y_pred))
# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, diabetes_y_pred, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
Explanation: Matplotlib exercises
Using the dataframe built in the last step of the previous exercise, draw a chart with horizontal bars.
Take a random sample (say, 10 rows) of the locations and populations from the previous exercises. Using the pyplot object, draw a pie chart from them.
Hint - http://bit.ly/1eLirnL
Scikit-Learn
http://scikit-learn.org/stable/index.html
An extensive library of Machine Learning algorithms
http://scikit-learn.org/stable/modules/clustering.html#k-means
Linear regression
End of explanation |
11,814 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's look at a seemingly impossible example where 80% of the hidden, true positive labels have been flipped to negative! Also, let 15% of the negative labels be flipped to positive!
Feel free to adjust these noise rates. But remember --> frac_pos2neg + frac_neg2pos < 1, i.e. $\rho_1 + \rho_0 < 1$
Step1: Comparing models using a logistic regression classifier.
Step2: In the above example, you see that for very simple 2D Gaussians, Rank Pruning performs similarly to Nat13.
Now let's look at a slightly more realistic scenario with 100-Dimensional data and a more complex classifier.
Below you see that Rank Pruning greatly outperforms other models.
Comparing models using a CNN classifier.
Note, this particular CNN's architecture is for MNIST / CIFAR image detection and may not be appropriate for this synthetic dataset. A simple, fully connected regular deep neural network is likely suitable. We only use it here for the purpose of showing that Rank Pruning works for any probabilistic classifier, as long as it has clf.predict(), clf.predict_proba(), and clf.fit() defined.
This section requires keras and tensorflow packages installed. See git repo for instructions in README. | Python Code:
# Choose mislabeling noise rates.
frac_pos2neg = 0.8 # rh1, P(s=0|y=1) in literature
frac_neg2pos = 0.15 # rh0, P(s=1|y=0) in literature
# Combine data into training examples and labels
data = neg.append(pos)
X = data[["x1","x2"]].values
y = data["label"].values
# Noisy P̃Ñ learning: instead of target y, we have s containing mislabeled examples.
# First, we flip positives, then negatives, then combine.
# We assume labels are flipped by some noise process uniformly randomly within each class.
s = y * (np.cumsum(y) <= (1 - frac_pos2neg) * sum(y))
s_only_neg_mislabeled = 1 - (1 - y) * (np.cumsum(1 - y) <= (1 - frac_neg2pos) * sum(1 - y))
s[y==0] = s_only_neg_mislabeled[y==0]
# Create testing dataset
neg_test = multivariate_normal(mean=[2,2], cov=[[10,-1.5],[-1.5,5]], size=2000)
pos_test = multivariate_normal(mean=[5,5], cov=[[1.5,1.3],[1.3,4]], size=1000)
X_test = np.concatenate((neg_test, pos_test))
y_test = np.concatenate((np.zeros(len(neg_test)), np.ones(len(pos_test))))
# Create and fit Rank Pruning object using any clf
# of your choice as long as it has predict_proba() defined
rp = RankPruning(clf = LogisticRegression())
# rp.fit(X, s, positive_lb_threshold=1-frac_pos2neg, negative_ub_threshold=frac_neg2pos)
rp.fit(X, s)
actual_py1 = sum(y) / float(len(y))
actual_ps1 = sum(s) / float(len(s))
actual_pi1 = frac_neg2pos * (1 - actual_py1) / float(actual_ps1)
actual_pi0 = frac_pos2neg * actual_py1 / (1 - actual_ps1)
print("What are rho1, rho0, pi1, and pi0?")
print("----------------------------------")
print("rho1 (frac_pos2neg) is the fraction of positive examples mislabeled as negative examples.")
print("rho0 (frac_neg2pos) is the fraction of negative examples mislabeled as positive examples.")
print("pi1 is the fraction of mislabeled examples in observed noisy P.")
print("pi0 is the fraction of mislabeled examples in observed noisy N.")
print()
print("Given (rho1, pi1), (rho1, rho), (rho0, pi0), or (pi0, pi1) the other two are known.")
print()
print("Using Rank Pruning, we estimate rho1, rh0, pi1, and pi0:")
print("--------------------------------------------------------------")
print("Estimated rho1, P(s = 0 | y = 1):", round(rp.rh1, 2), "\t| Actual:", round(frac_pos2neg, 2))
print("Estimated rho0, P(s = 1 | y = 0):", round(rp.rh0, 2), "\t| Actual:", round(frac_neg2pos, 2))
print("Estimated pi1, P(y = 0 | s = 1):", round(rp.pi1, 2), "\t| Actual:", round(actual_pi1, 2))
print("Estimated pi0, P(y = 1 | s = 0):", round(rp.pi0, 2), "\t| Actual:", round(actual_pi0, 2))
print("Estimated py1, P(y = 1):", round(rp.py1, 2), "\t\t| Actual:", round(actual_py1, 2))
print("Actual k1 (Number of items to remove from P̃):", actual_pi1 * sum(s))
print("Acutal k0 (Number of items to remove from Ñ):", actual_pi0 * (len(s) - sum(s)))
Explanation: Let's look at a seemingly impossible example where 80% of the hidden, true positive labels have been flipped to negative! Also, let 15% of the negative labels be flipped to positive!
Feel free to adjust these noise rates. But remember --> frac_pos2neg + frac_neg2pos < 1, i.e. $\rho_1 + \rho_0 < 1$
End of explanation
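# Quick consistency check (an added illustration, not from the original notebook):
# given (pi0, pi1) and the observed label frequencies, the noise rates come back out.
rh1_from_pi = actual_pi0 * (1 - actual_ps1) / actual_py1
rh0_from_pi = actual_pi1 * actual_ps1 / (1 - actual_py1)
print(round(rh1_from_pi, 2), round(rh0_from_pi, 2))  # should match 0.8 and 0.15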
# For shorter notation use rh1 and rh0 for noise rates.
clf = LogisticRegression()
rh1 = frac_pos2neg
rh0 = frac_neg2pos
models = {
"Rank Pruning" : RankPruning(clf = clf),
"Baseline" : other_pnlearning_methods.BaselineNoisyPN(clf),
"Rank Pruning (noise rates given)": RankPruning(rh1, rh0, clf),
"Elk08 (noise rates given)": other_pnlearning_methods.Elk08(e1 = 1 - rh1, clf = clf),
"Liu16 (noise rates given)": other_pnlearning_methods.Liu16(rh1, rh0, clf),
"Nat13 (noise rates given)": other_pnlearning_methods.Nat13(rh1, rh0, clf),
}
for key in models.keys():
model = models[key]
model.fit(X, s)
pred = model.predict(X_test)
pred_proba = model.predict_proba(X_test) # Produces P(y=1|x)
print("\n%s Model Performance:\n==============================\n" % key)
print("Accuracy:", acc(y_test, pred))
print("Precision:", prfs(y_test, pred)[0])
print("Recall:", prfs(y_test, pred)[1])
print("F1 score:", prfs(y_test, pred)[2])
Explanation: Comparing models using a logistic regression classifier.
End of explanation
num_features = 100
# Create training dataset - this synthetic dataset is not necessarily
# appropriate for the CNN. This is for demonstrative purposes.
# A fully connected regular neural network is more appropriate.
neg = multivariate_normal(mean=[0]*num_features, cov=np.eye(num_features), size=5000)
pos = multivariate_normal(mean=[0.5]*num_features, cov=np.eye(num_features), size=4000)
X = np.concatenate((neg, pos))
y = np.concatenate((np.zeros(len(neg)), np.ones(len(pos))))
# Again, s is the noisy labels, we flip y randomly using noise rates.
s = y * (np.cumsum(y) <= (1 - frac_pos2neg) * sum(y))
s_only_neg_mislabeled = 1 - (1 - y) * (np.cumsum(1 - y) <= (1 - frac_neg2pos) * sum(1 - y))
s[y==0] = s_only_neg_mislabeled[y==0]
# Create testing dataset
neg_test = multivariate_normal(mean=[0]*num_features, cov=np.eye(num_features), size=1000)
pos_test = multivariate_normal(mean=[0.4]*num_features, cov=np.eye(num_features), size=800)
X_test = np.concatenate((neg_test, pos_test))
y_test = np.concatenate((np.zeros(len(neg_test)), np.ones(len(pos_test))))
from classifier_cnn import CNN
clf = CNN(img_shape = (num_features/10, num_features/10), epochs = 1)
rh1 = frac_pos2neg
rh0 = frac_neg2pos
models = {
"Rank Pruning" : RankPruning(clf = clf),
"Baseline" : other_pnlearning_methods.BaselineNoisyPN(clf),
"Rank Pruning (noise rates given)": RankPruning(rh1, rh0, clf),
"Elk08 (noise rates given)": other_pnlearning_methods.Elk08(e1 = 1 - rh1, clf = clf),
"Liu16 (noise rates given)": other_pnlearning_methods.Liu16(rh1, rh0, clf),
"Nat13 (noise rates given)": other_pnlearning_methods.Nat13(rh1, rh0, clf),
}
print("Train all models first. Results will print at end.")
preds = {}
for key in models.keys():
print("Training model: ", key)
model = models[key]
model.fit(X, s)
pred = model.predict(X_test)
pred_proba = model.predict_proba(X_test) # Produces P(y=1|x)
preds[key] = pred
print("Comparing models using a CNN classifier.")
for key in models.keys():
pred = preds[key]
print("\n%s Model Performance:\n==============================\n" % key)
print("Accuracy:", acc(y_test, pred))
print("Precision:", prfs(y_test, pred)[0])
print("Recall:", prfs(y_test, pred)[1])
print("F1 score:", prfs(y_test, pred)[2])
Explanation: In the above example, you see that for very simple 2D Gaussians, Rank Pruning performs similarly to Nat13.
Now let's look at a slightly more realistic scenario with 100-Dimensional data and a more complex classifier.
Below you see that Rank Pruning greatly outperforms other models.
Comparing models using a CNN classifier.
Note, this particular CNN's architecture is for MNIST / CIFAR image detection and may not be appropriate for this synthetic dataset. A simple, fully connected regular deep neural network is likely suitable. We only use it here for the purpose of showing that Rank Pruning works for any probabilistic classifier, as long as it has clf.predict(), clf.predict_proba(), and clf.fit() defined.
This section requires keras and tensorflow packages installed. See git repo for instructions in README.
End of explanation |
11,815 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lago SDK Example - one VM one Network
Step2: Create a LagoInitFile, normally this file should be saved to the disk. Here we will use a temporary file instead. Our environment includes one CentOS 7.3 VM with one network.
Step3: Now we will initialize the environment by using the init file. Our workdir will be created automatically if it does not exists. If this is the first time you are running Lago, it might take a while as it will download the CentOS 7.3 template. You can monitor its progress by watching the log file we configured in /tmp/lago.log.
Step4: When the method returns, the environment can be started
Step5: Check which VMs are available and get some meta data
Step6: Executing commands in the VM can be done with ssh method
Step7: Lets stop the environment, here we will use the destroy method, however you may also use stop and start if you would like to turn the environment off. | Python Code:
import logging
import tempfile
from textwrap import dedent
from lago import sdk
Explanation: Lago SDK Example - one VM one Network
End of explanation
with tempfile.NamedTemporaryFile(delete=False) as init_file:
init_file.write(dedent(
domains:
vm-01:
memory: 1024
nics:
- net: net-01
disks:
- template_name: el7.3-base
type: template
name: root
dev: sda
format: qcow2
nets:
net-01:
type: nat
dhcp:
start: 100
end: 254
))
Explanation: Create a LagoInitFile. Normally this file would be saved to disk, but here we will use a temporary file instead. Our environment includes one CentOS 7.3 VM with one network.
End of explanation
env = sdk.init(config=init_file.name,
workdir='/tmp/lago_sdk_simple_example',
loglevel=logging.DEBUG,
log_fname='/tmp/lago.log')
Explanation: Now we will initialize the environment by using the init file. Our workdir will be created automatically if it does not exist. If this is the first time you are running Lago, it might take a while, as it will download the CentOS 7.3 template. You can monitor its progress by watching the log file we configured in /tmp/lago.log.
End of explanation
env.start()
Explanation: When the method returns, the environment can be started:
End of explanation
vms = env.get_vms()
print vms
vm = vms['vm-01']
vm.distro()
vm.ip()
Explanation: Check which VMs are available and get some meta data:
End of explanation
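# Illustrative extra (not part of the original example): loop over every VM in the
# environment and print a little metadata for each one.
for name, machine in vms.items():
    print('{0}: ip={1}, distro={2}'.format(name, machine.ip(), machine.distro()))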
res = vm.ssh(['hostname', '-f'])
res
Explanation: Executing commands in the VM can be done with ssh method:
End of explanation
env.destroy()
Explanation: Let's stop the environment. Here we will use the destroy method; however, you may also use stop and start if you would like to turn the environment off.
End of explanation |
11,816 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute MNE-dSPM inverse solution on single epochs
Compute dSPM inverse solution on single trial epochs restricted
to a brain label.
Step1: View activation time-series to illustrate the benefit of aligning/flipping
Step2: Viewing single trial dSPM and average dSPM for unflipped pooling over label
Compare to (1) Inverse (dSPM) then average, (2) Evoked then dSPM | Python Code:
# Author: Alexandre Gramfort <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
from mne.minimum_norm import apply_inverse_epochs, read_inverse_operator
from mne.minimum_norm import apply_inverse
print(__doc__)
data_path = sample.data_path()
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
fname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
fname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
event_id, tmin, tmax = 1, -0.2, 0.5
# Using the same inverse operator when inspecting single trials Vs. evoked
snr = 3.0 # Standard assumption for average data but using it for single trial
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
# Load data
inverse_operator = read_inverse_operator(fname_inv)
label = mne.read_label(fname_label)
raw = mne.io.read_raw_fif(fname_raw)
events = mne.read_events(fname_event)
# Set up pick list
include = []
# Add a bad channel
raw.info['bads'] += ['EEG 053'] # bads + 1 more
# pick MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=include, exclude='bads')
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,
eog=150e-6))
# Get evoked data (averaging across trials in sensor space)
evoked = epochs.average()
# Compute inverse solution and stcs for each epoch
# Use the same inverse operator as with evoked data (i.e., set nave)
# If you use a different nave, dSPM just scales by a factor sqrt(nave)
stcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method, label,
pick_ori="normal", nave=evoked.nave)
stc_evoked = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori="normal")
stc_evoked_label = stc_evoked.in_label(label)
# Mean across trials but not across vertices in label
mean_stc = sum(stcs) / len(stcs)
# compute sign flip to avoid signal cancellation when averaging signed values
flip = mne.label_sign_flip(label, inverse_operator['src'])
label_mean = np.mean(mean_stc.data, axis=0)
label_mean_flip = np.mean(flip[:, np.newaxis] * mean_stc.data, axis=0)
# Get inverse solution by inverting evoked data
stc_evoked = apply_inverse(evoked, inverse_operator, lambda2, method,
pick_ori="normal")
# apply_inverse() does whole brain, so sub-select label of interest
stc_evoked_label = stc_evoked.in_label(label)
# Average over label (not caring to align polarities here)
label_mean_evoked = np.mean(stc_evoked_label.data, axis=0)
Explanation: Compute MNE-dSPM inverse solution on single epochs
Compute dSPM inverse solution on single trial epochs restricted
to a brain label.
End of explanation
times = 1e3 * stcs[0].times # times in ms
plt.figure()
h0 = plt.plot(times, mean_stc.data.T, 'k')
h1, = plt.plot(times, label_mean, 'r', linewidth=3)
h2, = plt.plot(times, label_mean_flip, 'g', linewidth=3)
plt.legend((h0[0], h1, h2), ('all dipoles in label', 'mean',
'mean with sign flip'))
plt.xlabel('time (ms)')
plt.ylabel('dSPM value')
plt.show()
Explanation: View activation time-series to illustrate the benefit of aligning/flipping
End of explanation
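# Small illustrative aside (not in the original example): averaging sources with
# opposite orientations cancels the signal, which is exactly what the sign flip avoids.
_t = np.linspace(0, 1, 200)
_sources = np.vstack([np.sin(2 * np.pi * 5 * _t), -np.sin(2 * np.pi * 5 * _t)])
print(np.abs(_sources.mean(axis=0)).max())           # ~0: the naive mean cancels
_toy_flip = np.array([1, -1])                         # toy analogue of label_sign_flip
print(np.abs((_toy_flip[:, np.newaxis] * _sources).mean(axis=0)).max())  # amplitude preserved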
# Single trial
plt.figure()
for k, stc_trial in enumerate(stcs):
plt.plot(times, np.mean(stc_trial.data, axis=0).T, 'k--',
label='Single Trials' if k == 0 else '_nolegend_',
alpha=0.5)
# Single trial inverse then average.. making linewidth large to not be masked
plt.plot(times, label_mean, 'b', linewidth=6,
label='dSPM first, then average')
# Evoked and then inverse
plt.plot(times, label_mean_evoked, 'r', linewidth=2,
label='Average first, then dSPM')
plt.xlabel('time (ms)')
plt.ylabel('dSPM value')
plt.legend()
plt.show()
Explanation: Viewing single trial dSPM and average dSPM for unflipped pooling over label
Compare to (1) Inverse (dSPM) then average, (2) Evoked then dSPM
End of explanation |
11,817 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Create some text
Step2: Apply regex | Python Code:
# Load regex package
import re
Explanation: Title: Match Email Addresses
Slug: match_email_addresses
Summary: Match Email Addresses
Date: 2016-05-01 12:00
Category: Regex
Tags: Basics
Authors: Chris Albon
Based on: StackOverflow
Preliminaries
End of explanation
# Create a variable containing a text string
text = 'My email is [email protected], thanks! No, I am at [email protected].'
Explanation: Create some text
End of explanation
# Find all email addresses
re.findall(r'[a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9]+', text)
# Explanation:
# This regex has three parts
# [a-zA-Z0-9_.+-]+ Matches a word (the username) of any length
# @[a-zA-Z0-9-]+ Matches a word (the domain name) of any length
# \.[a-zA-Z0-9-.]+ Matches a word (the TLD) of any length
Explanation: Apply regex
End of explanation |
11,818 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summarize the reviews
The idea in this solution is to provide a new feature to the customer which will reduce the need to go through several reviews in order to evaluate a product. In order to achieve that, we will attempt to extract the most predictive words or sentences from the ratings and present them in a nice format (e.g. wordcloud).
Implementation steps of a proof of concept
Extract the summaries and split them to words
Keep only the data with ranks 1, 2 -labeled as 0- and 5 -labeled as 1.
Generate tf-idf vector features from the words
Train a binary logistic regression model which predicts the rankings from the vector features
Using this model evaluate each word by generating the features for it as if it were a whole summary
Order the words by the probability generated by the model to be in the '0' or '1' category
Select the words with highest probability to be '1' as the positive ones
Select the words with highest probability to be '0' as the negative ones
Pick a random set of products and print the top 10 words with highest probabilities (max of positive and negative) on a wordcloud
Loading and preparing the data
Step1: Splitting data and balancing skewness
Step2: Benchmark
Step3: Learning pipeline
Step4: Testing the model accuracy
Step5: Using model to extract the most predictive words
Step6: Summarize single product - picks the best and worst | Python Code:
all_reviews = (spark
.read
.json('./data/raw_data/reviews_Baby_5.json.gz',)
.na
.fill({ 'reviewerName': 'Unknown' }))
from pyspark.sql.functions import col, expr, udf, trim
from pyspark.sql.types import IntegerType
import re
remove_punctuation = udf(lambda line: re.sub('[^A-Za-z\s]', '', line))
make_binary = udf(lambda rating: 0 if rating in [1, 2] else 1, IntegerType())
reviews = (all_reviews
.filter(col('overall').isin([1, 2, 5]))
.withColumn('label', make_binary(col('overall')))
.select(col('label').cast('int'), remove_punctuation('summary').alias('summary'))
.filter(trim(col('summary')) != ''))
Explanation: Summarize the reviews
The idea in this solution is to provide a new feature to the customer which will reduce the need to go through several reviews in order to evaluate a product. In order to achieve that, we will attempt to extract the most predictive words or sentences from the ratings and present them in a nice format (e.g. wordcloud).
Implementation steps of a proof of concept
Extract the summaries and split them to words
Keep only the data with ranks 1, 2 -labeled as 0- and 5 -labeled as 1.
Generate tf-idf vector features from the words
Train a binary logistic regression model which predicts the rankings from the vector features
Using this model evaluate each word by generating the features for it as if it were a whole summary
Order the words by the probability generated by the model to be in the '0' or '1' category
Select the words with highest probability to be '1' as the positive ones
Select the words with highest probability to be '0' as the negative ones
Pick a random set of products and print the top 10 words with highest probabilities (max of positive and negative) on a wordcloud
Loading and preparing the data
End of explanation
train, test = reviews.randomSplit([.8, .2], seed=5436L)
def multiply_dataset(dataset, n):
return dataset if n <= 1 else dataset.union(multiply_dataset(dataset, n - 1))
reviews_good = train.filter('label == 1')
reviews_bad = train.filter('label == 0')
reviews_bad_multiplied = multiply_dataset(reviews_bad, reviews_good.count() / reviews_bad.count())
train_reviews = reviews_bad_multiplied.union(reviews_good)
Explanation: Splitting data and balancing skewness
End of explanation
accuracy = reviews_good.count() / float(train_reviews.count())
print('Always predicting 5 stars accuracy: {0}'.format(accuracy))
Explanation: Benchmark: predict by distribution
End of explanation
from pyspark.ml.feature import Tokenizer, HashingTF, IDF, StopWordsRemover
from pyspark.ml.pipeline import Pipeline
from pyspark.ml.classification import LogisticRegression
tokenizer = Tokenizer(inputCol='summary', outputCol='words')
pipeline = Pipeline(stages=[
tokenizer,
    StopWordsRemover(inputCol='words', outputCol='filtered_words'),
HashingTF(inputCol='filtered_words', outputCol='rawFeatures', numFeatures=120000),
IDF(inputCol='rawFeatures', outputCol='features'),
LogisticRegression(regParam=.3, elasticNetParam=.01)
])
Explanation: Learning pipeline
End of explanation
model = pipeline.fit(train_reviews)
from pyspark.ml.evaluation import BinaryClassificationEvaluator
prediction = model.transform(test)
BinaryClassificationEvaluator().evaluate(prediction)
Explanation: Testing the model accuracy
End of explanation
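# Extra check (an illustrative addition, not part of the original notebook): besides the
# area under the ROC curve above, a plain accuracy can be read off the prediction frame.
correct = prediction.filter(col('label') == col('prediction')).count()
print('Accuracy on the test set: {0}'.format(correct / float(prediction.count())))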
from pyspark.sql.functions import explode
import pyspark.sql.functions as F
from pyspark.sql.types import FloatType
words = (tokenizer
.transform(reviews)
.select(explode(col('words')).alias('summary')))
predictors = (model
.transform(words)
.select(col('summary').alias('word'), 'probability'))
first = udf(lambda x: x[0].item(), FloatType())
second = udf(lambda x: x[1].item(), FloatType())
predictive_words = (predictors
.select(
'word',
second(col('probability')).alias('positive'),
first(col('probability')).alias('negative'))
.groupBy('word')
.agg(
F.max('positive').alias('positive'),
F.max('negative').alias('negative')))
positive_predictive_words = (predictive_words
.select(col('word').alias('positive_word'), col('positive').alias('pos_prob'))
.sort('pos_prob', ascending=False))
negative_predictive_words = (predictive_words
.select(col('word').alias('negative_word'), col('negative').alias('neg_prob'))
.sort('neg_prob', ascending=False))
import pandas as pd
pd.concat([
positive_predictive_words.toPandas().head(n=20),
negative_predictive_words.toPandas().head(n=20) ],
axis=1)
Explanation: Using model to extract the most predictive words
End of explanation
full_model = pipeline.fit(reviews)
highly_reviewed_products = (all_reviews
.groupBy('asin')
.agg(F.count('asin').alias('count'), F.avg('overall').alias('avg_rating'))
.filter('count > 25'))
best_product = highly_reviewed_products.sort('avg_rating', ascending=False).take(1)[0][0]
worst_product = highly_reviewed_products.sort('avg_rating').take(1)[0][0]
def most_contributing_summaries(product, total_reviews, ranking_model):
reviews = total_reviews.filter(col('asin') == product).select('summary', 'overall')
udf_max = udf(lambda p: max(p.tolist()), FloatType())
summary_ranks = (ranking_model
.transform(reviews)
.select(
'summary',
second(col('probability')).alias('pos_prob')))
pos_summaries = { row[0]: row[1] for row in summary_ranks.sort('pos_prob', ascending=False).take(10) }
neg_summaries = { row[0]: row[1] for row in summary_ranks.sort('pos_prob').take(10) }
return pos_summaries, neg_summaries
from wordcloud import WordCloud
import matplotlib.pyplot as plt
def present_product(product, total_reviews, ranking_model):
pos_summaries, neg_summaries = most_contributing_summaries(product, total_reviews, ranking_model)
pos_wordcloud = WordCloud(background_color='white', max_words=20).fit_words(pos_summaries)
neg_wordcloud = WordCloud(background_color='white', max_words=20).fit_words(neg_summaries)
fig = plt.figure(figsize=(15, 15))
ax = fig.add_subplot(1,2,1)
ax.set_title('Positive summaries')
ax.imshow(pos_wordcloud, interpolation='bilinear')
ax.axis('off')
ax = fig.add_subplot(1,2,2)
ax.set_title('Negative summaries')
ax.imshow(neg_wordcloud, interpolation='bilinear')
ax.axis('off')
plt.show()
present_product(best_product, all_reviews, full_model)
present_product(worst_product, all_reviews, full_model)
Explanation: Summarize single product - picks the best and worst
End of explanation |
11,819 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Started some more runs with extra MLP layers, with varying levels of Dropout in each. Casually looking at the traces when they were running, it looked like the model with lower dropout converged faster, but the second with higher dropout reached a better final validation score.
Step1: Lower Dropout
This model had two MLP layers, each with dropout set to 0.8.
Step2: It's definitely overfitting.
Higher Dropout
The same model, with dropout set to 0.5 as we've been doing so far
Step3: It takes longer to reach a slightly lower validation score, but does not overfit.
How slow is dropout?
If we look at the difference in time to pass a validation score over the range we can see how much longer it takes the model using higher dropout. | Python Code:
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import holoviews as hl
%load_ext holoviews.ipython
import sklearn.metrics
Explanation: Started some more runs with extra MLP layers, with varying levels of Dropout in each. Casually looking at the traces when they were running, it looked like the model with lower dropout converged faster, but the second with higher dropout reached a better final validation score.
End of explanation
m = pylearn2.utils.serial.load(
"/disk/scratch/neuroglycerin/models/8aug_extra_layers0p8_recent.pkl")
nll_channels = [c for c in m.monitor.channels.keys() if 'nll' in c]
def make_curves(model, *args):
curves = None
for c in args:
channel = model.monitor.channels[c]
c = c[0].upper() + c[1:]
if not curves:
curves = hl.Curve(zip(channel.example_record,channel.val_record),group=c)
else:
curves += hl.Curve(zip(channel.example_record,channel.val_record),group=c)
return curves
make_curves(m,*nll_channels)
Explanation: Lower Dropout
This model had two MLP layers, each with dropout set to 0.8.
End of explanation
mh = pylearn2.utils.serial.load(
"/disk/scratch/neuroglycerin/models/8aug_extra_layers0p5_recent.pkl")
make_curves(mh,*nll_channels)
Explanation: It's definitely overfitting.
Higher Dropout
The same model, with dropout set to 0.5 as we've been doing so far:
End of explanation
cl = m.monitor.channels['valid_y_nll']
ch = mh.monitor.channels['valid_y_nll']
compare = []
for t,v in zip(cl.example_record,cl.val_record):
for t2,v2 in zip(ch.example_record,ch.val_record):
if v2 < v:
compare.append((float(v),np.max([t2-t,0])))
break
plt.plot(*zip(*compare))
plt.xlabel("valid_y_nll")
plt.ylabel("time difference")
Explanation: It takes longer to reach a slightly lower validation score, but does not overfit.
How slow is dropout?
If we look at the difference in time to pass a validation score over the range we can see how much longer it takes the model using higher dropout.
End of explanation |
11,820 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Note
Step1: Now I'll simulate it
Step2: The simulation was set up with a three-bit threshold having a value of three.
The counter has the same number of bits as the threshold, so it will cycle
over the values 0, 1, 2, $\ldots$, 6, 7, 0, 1, $\ldots$.
From the waveforms, you can see the PWM output is on for three out of every eight clock cycles.
One characteristic of this PWM is the total pulse duration (both on and off portions)
is restricted to being a power-of-two clock cycles.
The next PWM will remove that limitation.
A Less-Simple PWM
We can make the PWM more general with allowable intervals that are not powers of 2 by adding another comparator.
This comparator watches the counter value and rolls it back to zero once it reaches a given value.
The comparator is implemented with a small addition to the sequential logic of the simple PWM as follows
Step3: Now test the PWM with a non-power of 2 interval
Step4: The simulation shows the PWM pulse duration is five clock cycles with the output being high for three cycles and
low for the other two.
But there's a problem if the threshold is changed while the PWM is operating.
This can cause glitches under the wrong conditions.
To demonstrate this, a PWM with a pulse duration of ten cycles will have its threshold changed
from three to eight during the middle of a pulse by the simulation test bench shown below
Step5: At time $t = 20$, a pulse begins.
The PWM output is high for three clock cycles until $t = 26$ and then goes low.
At $t = 29$, the threshold increases from 3 to 8, exceeding the current counter value.
This makes the PWM output go high again and it stays there until the counter reaches the new threshold.
So there is a glitch from $t = 29$ to $t = 36$.
It's usually the case that every new problem can be fixed by adding a bit more hardware.
This case is no different.
A Glitch-less PWM
The glitch in the last PWM could have been avoided if we didn't allow the thresold to change
willy-nilly during a pulse.
We can prevent this by adding a register that stores the threshold and doesn't allow it
to change until the current pulse ends and a new one begins.
Step6: Now we can test it using the previous test bench
Step7: See? No more glitches!
As before, the threshold changes at $t = 29$ but the threshold register doesn't change until $t = 40$
when the pulse ends.
Which is Better?
So which one of these PWMs is "better"?
The metric I'll use here is the amount of FPGA resources each one uses.
First, I'll synthesize and compile the simple PWM with an eight-bit threshold
Step8: Looking at the stats, the simple PWM uses eight D flip-flops (3 DFF and 5 CARRY,DFF) which
is expected when using an eight-bit threshold.
It also uses sixteen carry circuits (10 CARRY, 5 CARRY,DFF and 1 CARRY PASS) since the counter
and the comparator are both eight bits wide and each bit needs a carry circuit.
So this all looks reasonable.
In total, the simple PWM uses 32 logic cells.
For the PWM with non power-of-two pulse duration, I'll use the same eight-bit threshold but
set the duration to 227
Step9: Note that the number of D flip-flops and carry circuits has stayed the same, but the
total number of logic cells consumed has risen to 36.
This PWM performs an additional equality comparison of the counter and the pulse duration input,
both of which are eight bits wide for a total of sixteen bits.
This computation can be done with four 4-input LUTs (plus another LUT to combine the outputs), so an increase of
four logic cells is reasonable.
Finally, here are the stats for the glitchless PWM
Step10: The glitchless PWM adds another eight-bit register to store the threshold, so the total
number of flip-flops has risen to sixteen (11 DFF and 5 CARRY,DFF).
Because some of these flip-flops can be combined into logic cells which weren't using
their DFFs, the total number of logic cells consumed only rises by five to 41.
So the final tally is
Step11: You might ask why delta is changed from -1 to +1 when ramp_o is one instead of waiting until it is zero.
The reason is that the new value of delta doesn't take effect until the next clock cycle.
Therefore, if ramp_o was allowed to hit 0 before making the change,
the current -1 value in delta would decrement ramp_o to -1.
This would correspond to a positive value of 255 (maximum
intensity for the LED) if ramp_o was eight bits wide.
Changing delta one cycle early prevents this mishap.
The same logic applies at the other end of the ramp as well (i.e., flip delta when the
counter is just below its maximum value).
When the bitstream is loaded, the FPGA also clears all its registers.
This means delta and ramp_o will both be zero with the result that nothing will happen.
Therefore, if delta is ever seen to be zero, then delta and ramp_o will both
be set to values that will get the triangle ramp going.
The previous two paragraphs illustrate an important principle
Step12: Now I'll simulate it using a six-bit ramp counter
Step13: The simulation doesn't show us much, but there are two items worth mentioning
Step14: Next, I have to assign some pins for the clock input and the LED output (LED D1)
Step15: Finally, I can compile the Verilog code and pin assignments into a bitstream and download it
into the iCEstick | Python Code:
from pygmyhdl import *
@chunk
def pwm_simple(clk_i, pwm_o, threshold):
'''
Inputs:
clk_i: PWM changes state on the rising edge of this clock input.
threshold: Bit-length determines counter width and value determines when output goes low.
Outputs:
pwm_o: PWM output starts and stays high until counter > threshold and then output goes low.
'''
cnt = Bus(len(threshold), name='cnt') # Create a counter with the same number of bits as the threshold.
# Here's the sequential logic for incrementing the counter. We've seen this before!
@seq_logic(clk_i.posedge)
def cntr_logic():
cnt.next = cnt + 1
# Combinational logic that drives the PWM output high when the counter is less than the threshold.
@comb_logic
def output_logic():
pwm_o.next = cnt < threshold # cnt<threshold evaluates to either True (1) or False (0).
Explanation: Note: If you're reading this as a static HTML page, you can also get it as an
executable Jupyter notebook here.
PWM
Pulse width modulators
(PWMs) output a repetitive waveform that is high for a set percentage
of the interval and low for the remainder.
One of their uses is to generate a quasi-analog signal using only a digital output pin.
This makes them useful for doing things like varying the brightness of an LED
by adjusting the amount of time the signal is high and the LED is on.
If you've used PWMs before on a microcontroller, you know what a headache it is to set all
the control bits to select the clock source, pulse durations, and so forth.
Surprisingly, PWMs are actually a bit easier with FPGAs.
Let's take a look.
A Simple PWM
A very simple PWM consists of a counter and a comparator.
The counter is incremented by a clock signal and its value is compared to a threshold.
When the counter is less than the threshold, the PWM output is high, otherwise it's low.
So the higher the threshold, the longer the output is on.
This on-off pulsing repeats every time the
counter rolls over and begins again at zero.
<img alt="PWM block diagram." src="pwm_block_diag.png" width="600" />
Here's the MyHDL code for a simple PWM:
End of explanation
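# Quick sanity check (an added illustration, not from the original article): for an
# N-bit counter, the duty cycle of this simple PWM is threshold / 2**N.
n_bits, threshold_value = 3, 3   # the values used in the simulation below
print('duty cycle = {0:.1f}%'.format(100.0 * threshold_value / 2 ** n_bits))  # 37.5%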
initialize()
# Create signals and attach them to the PWM.
clk = Wire(name='clk')
pwm = Wire(name='pwm')
threshold = Bus(3, init_val=3) # Use a 3-bit threshold with a value of 3.
pwm_simple(clk, pwm, threshold)
# Pulse the clock and look at the PWM output.
clk_sim(clk, num_cycles=24)
show_waveforms(start_time=13, tock=True)
Explanation: Now I'll simulate it:
End of explanation
@chunk
def pwm_less_simple(clk_i, pwm_o, threshold, duration):
'''
Inputs:
clk_i: PWM changes state on the rising edge of this clock input.
threshold: Determines when output goes low.
duration: The length of the total pulse duration as determined by the counter.
Outputs:
pwm_o: PWM output starts and stays high until counter > threshold and then output goes low.
'''
# The log2 of the pulse duration determines the number of bits needed
# in the counter. The log2 value is rounded up to the next integer value.
import math
length = math.ceil(math.log(duration, 2))
cnt = Bus(length, name='cnt')
# Augment the counter with a comparator to adjust the pulse duration.
@seq_logic(clk_i.posedge)
def cntr_logic():
cnt.next = cnt + 1
# Reset the counter to zero once it reaches one less than the desired duration.
# So if the duration is 3, the counter will count 0, 1, 2, 0, 1, 2...
if cnt == duration-1:
cnt.next = 0
@comb_logic
def output_logic():
pwm_o.next = cnt < threshold
Explanation: The simulation was set up with a three-bit threshold having a value of three.
The counter has the same number of bits as the threshold, so it will cycle
over the values 0, 1, 2, $\ldots$, 6, 7, 0, 1, $\ldots$.
From the waveforms, you can see the PWM output is on for three out of every eight clock cycles.
One characteristic of this PWM is the total pulse duration (both on and off portions)
is restricted to being a power-of-two clock cycles.
The next PWM will remove that limitation.
A Less-Simple PWM
We can make the PWM more general with allowable intervals that are not powers of 2 by adding another comparator.
This comparator watches the counter value and rolls it back to zero once it reaches a given value.
The comparator is implemented with a small addition to the sequential logic of the simple PWM as follows:
End of explanation
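# Aside (an added note, not from the original article): the counter width chosen inside
# pwm_less_simple is just the number of bits needed to count up to the requested duration.
import math
for d in (5, 10, 227):
    print('{0} -> {1} bits'.format(d, int(math.ceil(math.log(d, 2)))))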
initialize()
clk = Wire(name='clk')
pwm = Wire(name='pwm')
pwm_less_simple(clk, pwm, threshold=3, duration=5)
clk_sim(clk, num_cycles=15)
show_waveforms()
Explanation: Now test the PWM with a non-power of 2 interval:
End of explanation
initialize()
clk = Wire(name='clk')
pwm = Wire(name='pwm')
threshold = Bus(4, name='threshold')
pwm_less_simple(clk, pwm, threshold, 10)
def test_bench(num_cycles):
clk.next = 0
threshold.next = 3 # Start with threshold of 3.
yield delay(1)
for cycle in range(num_cycles):
clk.next = 0
# Raise the threshold to 8 after 15 cycles.
if cycle >= 14:
threshold.next = 8
yield delay(1)
clk.next = 1
yield delay(1)
# Simulate for 20 clocks and show a specific section of the waveforms.
simulate(test_bench(20))
show_waveforms(tick=True, start_time=19)
Explanation: The simulation shows the PWM pulse duration is five clock cycles with the output being high for three cycles and
low for the other two.
But there's a problem if the threshold is changed while the PWM is operating.
This can cause glitches under the wrong conditions.
To demonstrate this, a PWM with a pulse duration of ten cycles will have its threshold changed
from three to eight during the middle of a pulse by the simulation test bench shown below:
End of explanation
@chunk
def pwm_glitchless(clk_i, pwm_o, threshold, interval):
import math
length = math.ceil(math.log(interval, 2))
cnt = Bus(length)
threshold_r = Bus(length, name='threshold_r') # Create a register to hold the threshold value.
@seq_logic(clk_i.posedge)
def cntr_logic():
cnt.next = cnt + 1
if cnt == interval-1:
cnt.next = 0
threshold_r.next = threshold # The threshold only changes at the end of a pulse.
@comb_logic
def output_logic():
pwm_o.next = cnt < threshold_r
Explanation: At time $t = 20$, a pulse begins.
The PWM output is high for three clock cycles until $t = 26$ and then goes low.
At $t = 29$, the threshold increases from 3 to 8, exceeding the current counter value.
This makes the PWM output go high again and it stays there until the counter reaches the new threshold.
So there is a glitch from $t = 29$ to $t = 36$.
It's usually the case that every new problem can be fixed by adding a bit more hardware.
This case is no different.
A Glitch-less PWM
The glitch in the last PWM could have been avoided if we didn't allow the threshold to change
willy-nilly during a pulse.
We can prevent this by adding a register that stores the threshold and doesn't allow it
to change until the current pulse ends and a new one begins.
End of explanation
initialize()
clk = Wire(name='clk')
pwm = Wire(name='pwm')
threshold = Bus(4, name='threshold')
pwm_glitchless(clk, pwm, threshold, 10)
simulate(test_bench(22))
show_waveforms(tick=True, start_time=19)
Explanation: Now we can test it using the previous test bench:
End of explanation
threshold = Bus(8)
toVerilog(pwm_simple, clk, pwm, threshold)
!yosys -q -p "synth_ice40 -blif pwm_simple.blif" pwm_simple.v
!arachne-pnr -d 1k pwm_simple.blif -o pwm_simple.asc
Explanation: See? No more glitches!
As before, the threshold changes at $t = 29$ but the threshold register doesn't change until $t = 40$
when the pulse ends.
Which is Better?
So which one of these PWMs is "better"?
The metric I'll use here is the amount of FPGA resources each one uses.
First, I'll synthesize and compile the simple PWM with an eight-bit threshold:
End of explanation
toVerilog(pwm_less_simple, clk, pwm, threshold, 227)
!yosys -q -p "synth_ice40 -blif pwm_less_simple.blif" pwm_less_simple.v
!arachne-pnr -d 1k pwm_less_simple.blif -o pwm_less_simple.asc
Explanation: Looking at the stats, the simple PWM uses eight D flip-flops (3 DFF and 5 CARRY,DFF) which
is expected when using an eight-bit threshold.
It also uses sixteen carry circuits (10 CARRY, 5 CARRY,DFF and 1 CARRY PASS) since the counter
and the comparator are both eight bits wide and each bit needs a carry circuit.
So this all looks reasonable.
In total, the simple PWM uses 32 logic cells.
For the PWM with non power-of-two pulse duration, I'll use the same eight-bit threshold but
set the duration to 227:
End of explanation
toVerilog(pwm_glitchless, clk, pwm, threshold, 227)
!yosys -q -p "synth_ice40 -blif pwm_glitchless.blif" pwm_glitchless.v
!arachne-pnr -d 1k pwm_glitchless.blif -o pwm_glitchless.asc
Explanation: Note that the number of D flip-flops and carry circuits has stayed the same, but the
total number of logic cells consumed has risen to 36.
This PWM performs an additional equality comparison of the counter and the pulse duration input,
both of which are eight bits wide for a total of sixteen bits.
This computation can be done with four 4-input LUTs (plus another LUT to combine the outputs), so an increase of
four logic cells is reasonable.
Finally, here are the stats for the glitchless PWM:
End of explanation
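As a sanity check on the LUT arithmetic above, here is a rough, hedged estimator (plain Python, not a synthesis model) that assumes 4-input LUTs, two compared bit-pairs per LUT, and a simple AND tree to combine the partial results:

```python
import math

def equality_comparator_luts(bits, lut_inputs=4):
    # Each LUT compares lut_inputs//2 bit positions of the two operands.
    stage = math.ceil(bits / (lut_inputs // 2))
    total = stage
    # Combine the partial equality flags with an AND tree of LUTs.
    while stage > 1:
        stage = math.ceil(stage / lut_inputs)
        total += stage
    return total

print(equality_comparator_luts(8))  # 8-bit compare -> 4 LUTs + 1 to combine = 5
```

This matches the four-plus-one LUT increase described above; real results will vary with how yosys packs the logic.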
@chunk
def ramp(clk_i, ramp_o):
'''
Inputs:
clk_i: Clock input.
Outputs:
ramp_o: Multi-bit amplitude of ramp.
'''
# Delta is the increment (+1) or decrement (-1) for the counter.
delta = Bus(len(ramp_o))
@seq_logic(clk_i.posedge)
def logic():
# Add delta to the current ramp value to get the next ramp value.
ramp_o.next = ramp_o + delta
# When the ramp reaches the bottom, set delta to +1 to start back up the ramp.
if ramp_o == 1:
delta.next = 1
# When the ramp reaches the top, set delta to -1 to start back down the ramp.
elif ramp_o == ramp_o.max-2:
delta.next = -1
# After configuring the FPGA, the delta register is set to zero.
# Set it to +1 and set the ramp value to +1 to get things going.
elif delta == 0:
delta.next = 1
ramp_o.next = 1
Explanation: The glitchless PWM adds another eight-bit register to store the threshold, so the total
number of flip-flops has risen to sixteen (11 DFF and 5 CARRY,DFF).
Because some of these flip-flops can be combined into logic cells which weren't using
their DFFs, the total number of logic cells consumed only rises by five to 41.
So the final tally is:
| PWM Type | LCs | DFFs | Carrys |
|:-----------|:----:|:-----:|:-------:|
| Simple | 32 | 8 | 16 |
| Non $2^N$ | 36 | 8 | 16 |
| Glitchless | 41 | 16 | 16 |
Resource usage is only one metric that affects your choice of a PWM.
For some non-precision applications, such as varying the intensity of an LED's brightness,
a simple PWM is fine and will save space in the FPGA.
For more demanding applications, like motor control, you might opt for the glitchless PWM
and take the hit on the number of LCs consumed.
Demo Time!
It would be a shame to go through all this work and then never do anything fun with it!
Let's make a demo that gradually brightens and darkens an LED on the iCEstick board
(instead of snapping it on and off like our previous examples).
The basic idea is to generate a triangular ramp using a counter that repetitively increments from 0 to $N$ and
then decrements back to 0.
Then connect the upper bits of the counter to the threshold input of a simple PWM and connect an LED
to the output.
As the counter ramps up and down, the threshold will increase and decrease and the
LED intensity will wax and wane.
<img alt="Triangular ramp, threshold ramp, and PWM output." src="ramped_pwm.png" width="800 px" />
Here's the sequential logic for the ramp generator:
End of explanation
@chunk
def wax_wane(clk_i, led_o, length):
rampout = Bus(length, name='ramp') # Create the triangle ramp counter register.
ramp(clk_i, rampout) # Generate the ramp.
pwm_simple(clk_i, led_o, rampout.o[length:length-4]) # Use the upper 4 ramp bits to drive the PWM threshold
Explanation: You might ask why delta is changed from -1 to +1 when ramp_o is one instead of waiting until it is zero.
The reason is that the new value of delta doesn't take effect until the next clock cycle.
Therefore, if ramp_o was allowed to hit 0 before making the change,
the current -1 value in delta would decrement ramp_o to -1.
This would correspond to a positive value of 255 (maximum
intensity for the LED) if ramp_o was eight bits wide.
Changing delta one cycle early prevents this mishap.
The same logic applies at the other end of the ramp as well (i.e., flip delta when the
counter is just below its maximum value).
When the bitstream is loaded, the FPGA also clears all its registers.
This means delta and ramp_o will both be zero with the result that nothing will happen.
Therefore, if delta is ever seen to be zero, then delta and ramp_o will both
be set to values that will get the triangle ramp going.
The previous two paragraphs illustrate an important principle: logic can be a tricky thing.
At times, it appears to defy logic.
But it could never do that.
Because it's logic.
With the ramp generator done, I can combine it with a simple PWM to complete the LED wax-wane demo:
End of explanation
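To make the registered-update reasoning above concrete, here is a small plain-Python trace (a behavioral sketch, not MyHDL) in which the next values of ramp and delta are computed from the current values and applied together, just as the clocked logic does:

```python
def trace_ramp(flip_at, width=8, cycles=5):
    mask = (1 << width) - 1
    ramp, delta = 2, -1                     # currently heading down toward the bottom
    for _ in range(cycles):
        nxt_ramp = (ramp + delta) & mask    # registered update: uses the current delta
        nxt_delta = 1 if ramp == flip_at else delta
        ramp, delta = nxt_ramp, nxt_delta   # both change on the same clock edge
        print(ramp, delta)

trace_ramp(flip_at=1)   # flip early: ramp goes 1, 0, 1, 2, ... and never wraps
trace_ramp(flip_at=0)   # flip at zero: ramp goes 1, 0, 255, ... (momentary full brightness)
```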
initialize()
clk = Wire(name='clk')
led = Wire(name='led')
wax_wane(clk, led, 6) # Set ramp counter to 6 bits: 0, 1, 2, ..., 61, 62, 63, 62, 61, ..., 2, 1, 0, ...
clk_sim(clk, num_cycles=180)
t = 110 # Look in the middle of the simulation to see if anything is happening.
show_waveforms(tick=True, start_time=t, stop_time=t+40)
Explanation: Now I'll simulate it using a six-bit ramp counter:
End of explanation
toVerilog(wax_wane, clk, led, 23)
Explanation: The simulation doesn't show us much, but there are two items worth mentioning:
1. The triangle ramp increments to its maximum value (0x3F) and then starts back down.
2. The PWM output appears to be outputting its maximum level (it's on for 15 of its 16 cycles).
At this point, I could do more simulation in an attempt to get more information.
But for a non-critical application like this demo, I'll just go build it and see what happens!
First, I'll generate the Verilog from the MyHDL code.
The ramp counter has to be wide enough to ensure I can see the LED wax-wane cycle.
If I set the counter width to 23 bits, it will take $2^{23}$ cycles of the 12 MHz clock to go from zero to its
maximum value. Then it will take the same number of cycles to go back to zero.
This translates into $2^{24} \textrm{ / 12,000,000 Hz} = $ 1.4 seconds.
That seems good.
End of explanation
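The arithmetic above generalizes easily; here is a quick check of the wax-wane period for a few counter widths, assuming the 12 MHz iCEstick clock:

```python
CLK_HZ = 12000000.0
for width in (21, 22, 23, 24):
    period = 2 ** (width + 1) / CLK_HZ   # up-ramp plus down-ramp
    print('%d-bit ramp: %.2f seconds per brighten/darken cycle' % (width, period))
```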
with open('wax_wane.pcf', 'w') as pcf:
pcf.write(
'''
set_io clk_i 21
set_io led_o 99
'''
)
Explanation: Next, I have to assign some pins for the clock input and the LED output (LED D1):
End of explanation
!yosys -q -p "synth_ice40 -blif wax_wane.blif" wax_wane.v
!arachne-pnr -q -d 1k -p wax_wane.pcf wax_wane.blif -o wax_wane.asc
!icepack wax_wane.asc wax_wane.bin
!iceprog wax_wane.bin
Explanation: Finally, I can compile the Verilog code and pin assignments into a bitstream and download it
into the iCEstick:
End of explanation |
11,821 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple example of using Facebook's Prophet
We will use public road traffic data to train the model and forecast future road traffic.
Main intention is to show how simple Prophet usage is, forecast itself is very secondary.
First set up the requirements
Step1: Sample road traffic data in CSV format can be downloaded here (for the sake of this example, dataset is included in the repository).
Load the data parsing dates as required. We also rename columns to ds and y in accordance with the convention used by Prophet.
Step2: Split data into two sets - one for training and one for testing our model
Step3: All that is required to do the forecast
Step4: We can also query our model to show us trend, weekly and yearly components of the forecast | Python Code:
from fbprophet import Prophet
import pandas as pd
%matplotlib notebook
import matplotlib
Explanation: Simple example of using Facebook's Prophet
We will use public road traffic data to train the model and forecast future road traffic.
Main intention is to show how simple Prophet usage is, forecast itself is very secondary.
First set up the requirements:
End of explanation
date_parse = lambda date: pd.to_datetime(date, format='%Y-%m-%d')  # pd.datetime is deprecated; use pd.to_datetime
time_series = pd.read_csv("solarhringsumferd-a-talningarsto.csv", header=0, names=['ds', 'y'], usecols=[0, 1],
parse_dates=[0], date_parser=date_parse)
Explanation: Sample road traffic data in CSV format can be downloaded here (for the sake of this example, dataset is included in the repository).
Load the data parsing dates as required. We also rename columns to ds and y in accordance with the convention used by Prophet.
End of explanation
training = time_series[time_series['ds'] < '2009-01-01']
testing = time_series[time_series['ds'] > '2009-01-01']
testing.columns = ['ds', 'ytest']
training.plot(x='ds');
Explanation: Split data into two sets - one for training and one for testing our model:
End of explanation
# Train the model.
model = Prophet()
model.fit(training)
# Define period to make forecast for.
future = model.make_future_dataframe(periods=365*2)
# Perform prediction for the defined period.
predicted_full = model.predict(future)
# We only plot date and predicted value.
# Full prediction contains much more data, like confidence intervals, for example.
predicted = predicted_full[['ds', 'yhat']]
# Plot training, testing and predicted values together.
combined = training.merge(testing, on='ds', how='outer')
combined = combined.merge(predicted, on='ds', how='outer')
combined.columns = ['Date', 'Training', 'Testing', 'Predicted']
# Only show the "intersting part" - no point in looking at the past.
combined[combined['Date'] > '2008-01-01'].plot(x='Date');
Explanation: All that is required to do the forecast:
End of explanation
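Forecast quality is secondary here, but since a testing set was held out, a quick score is easy to add. This is a minimal sketch that reuses the column names from the frames defined above:

```python
scored = testing.merge(predicted, on='ds', how='inner')
mae = (scored['ytest'] - scored['yhat']).abs().mean()
print('Mean absolute error over the test period:', mae)
```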
model.plot_components(predicted_full);
Explanation: We can also query our model to show us trend, weekly and yearly components of the forecast:
End of explanation |
11,822 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder on MNIST dataset
Learning Objective
1. Build an autoencoder architecture (consisting of an encoder and decoder) in Keras
2. Define the loss using the reconstructive error
3. Define a training step for the autoencoder using tf.GradientTape()
4. Train the autoencoder on the MNIST dataset
Introduction
This notebook demonstrates how to build and train a convolutional autoencoder.
Autoencoders consist of two models
Step1: Next, we'll define some of the environment variables we'll use in this notebook. Note that we are setting the EMBED_DIM to be 64. This is the dimension of the latent space for our autoencoder.
Step2: Load and prepare the dataset
For this notebook, we will use the MNIST dataset to train the autoencoder. The encoder will map the handwritten digits into the latent space, to force a lower dimensional representation and the decoder will then map the encoding back.
Step3: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
Step4: Create the encoder and decoder models
Both our encoder and decoder models will be defined using the Keras Sequential API.
The Encoder
The encoder uses tf.keras.layers.Conv2D layers to map the image into a lower-dimensional latent space. We will start with an image of size 28x28x1 and then use convolution layers to map into a final Dense layer.
Exercise. Complete the code below to create the CNN-based encoder model. Your model should have input_shape to be 28x28x1 and end with a final Dense layer the size of embed_dim.
Step5: The Decoder
The decoder uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from the latent space. We will start with a Dense layer with the same input shape as embed_dim, then upsample several times until you reach the desired image size of 28x28x1.
Exercise. Complete the code below to create the decoder model. Start with a Dense layer that takes as input a tensor of size embed_dim. Use tf.keras.layers.Conv2DTranspose over multiple layers to upsample so that the final layer has shape 28x28x1 (the shape of our original MNIST digits).
Hint
Step6: Finally, we stitch the encoder and decoder models together to create our autoencoder.
Step7: Using .summary() we can have a high-level summary of the full autoencoder model as well as the individual encoder and decoder. Note how the shapes of the tensors mirror each other as data is passed through the encoder and then the decoder.
Step8: Next, we define the loss for our autoencoder model. The loss we will use is the reconstruction error. This loss is similar to the MSE loss we've commonly use for regression. Here we are applying this error pixel-wise to compare the original MNIST image and the image reconstructed from the decoder.
Step9: Optimizer for the autoencoder
Next we define the optimizer for model, specifying the learning rate.
Step10: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
Step11: Define the training loop
Next, we define the training loop for training our autoencoder. The train step will use tf.GradientTape() to keep track of gradient steps through training.
Exercise.
Complete the code below to define the training loop for our autoencoder. Notice the use of tf.function below. This annotation causes the function train_step to be "compiled". The train_step function takes as input a batch of images and passes them through the ae_model. The gradient is then computed on the loss against the ae_model output and the original image. In the code below, you should
- define ae_gradients. This is the gradient of the autoencoder loss with respect to the variables of the ae_model.
- create the gradient_variables by assigning each ae_gradient computed above to it's respective training variable.
- apply the gradient step using the optimizer
Step12: We use the train_step function above to define training of our autoencoder. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
Step13: Generate and save images.
We'll use a small helper function to generate images and save them.
Step14: Let's see how our model performs before any training. We'll take as input the first 16 digits of the MNIST test set. Right now they just look like random noise.
Step15: Train the model
Call the train() method defined above to train the autoencoder model.
We'll print the resulting images as training progresses. At the beginning of the training, the decoded images look like random noise. As training progresses, the model outputs will look increasingly better. After about 50 epochs, they resemble MNIST digits. This may take about one or two minutes / epoch
Step16: Create a GIF
Lastly, we'll create a gif that shows the progression of our produced images through training. | Python Code:
from __future__ import absolute_import, division, print_function
import glob
import imageio
import os
import PIL
import time
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers
from IPython import display
Explanation: Convolutional Autoencoder on MNIST dataset
Learning Objective
1. Build an autoencoder architecture (consisting of an encoder and decoder) in Keras
2. Define the loss using the reconstructive error
3. Define a training step for the autoencoder using tf.GradientTape()
4. Train the autoencoder on the MNIST dataset
Introduction
This notebook demonstrates how to build and train a convolutional autoencoder.
Autoencoders consist of two models: an encoder and a decoder.
<img src="../assets/autoencoder2.png" width="600">
In this notebook we'll build an autoencoder to recreate MNIST digits. The following animation shows a series of images produced by the autoencoder as it was trained; they increasingly resemble handwritten digits as the model learns to reconstruct its inputs.
<img src="../assets/autoencoder.gif">
Import TensorFlow and other libraries
End of explanation
np.random.seed(1)
tf.random.set_seed(1)
BATCH_SIZE = 128
BUFFER_SIZE = 60000
EPOCHS = 60
LR = 1e-2
EMBED_DIM = 64 # intermediate_dim
Explanation: Next, we'll define some of the environment variables we'll use in this notebook. Note that we are setting the EMBED_DIM to be 64. This is the dimension of the latent space for our autoencoder.
End of explanation
(train_images, _), (test_images, _) = tf.keras.datasets.mnist.load_data()
train_images = train_images.reshape(train_images.shape[0], 28, 28, 1).astype('float32')
train_images = (train_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
Explanation: Load and prepare the dataset
For this notebook, we will use the MNIST dataset to train the autoencoder. The encoder will map the handwritten digits into the latent space, to force a lower dimensional representation and the decoder will then map the encoding back.
End of explanation
# Batch and shuffle the data
train_dataset = tf.data.Dataset.from_tensor_slices(train_images)
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
train_dataset = train_dataset.prefetch(BATCH_SIZE*4)
test_images = test_images.reshape(test_images.shape[0], 28, 28, 1).astype('float32')
test_images = (test_images - 127.5) / 127.5 # Normalize the images to [-1, 1]
Explanation: Next, we define our input pipeline using tf.data. The pipeline below reads in train_images as tensor slices and then shuffles and batches the examples for training.
End of explanation
#TODO 1.
def make_encoder(embed_dim):
model = tf.keras.Sequential(name="encoder")
model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same',
input_shape=[28, 28, 1]))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same'))
model.add(layers.LeakyReLU())
model.add(layers.Dropout(0.3))
model.add(layers.Flatten())
model.add(layers.Dense(embed_dim))
assert model.output_shape == (None, embed_dim)
return model
Explanation: Create the encoder and decoder models
Both our encoder and decoder models will be defined using the Keras Sequential API.
The Encoder
The encoder uses tf.keras.layers.Conv2D layers to map the image into a lower-dimensional latent space. We will start with an image of size 28x28x1 and then use convolution layers to map into a final Dense layer.
Exercise. Complete the code below to create the CNN-based encoder model. Your model should have input_shape to be 28x28x1 and end with a final Dense layer the size of embed_dim.
End of explanation
#TODO 1.
def make_decoder(embed_dim):
model = tf.keras.Sequential(name="decoder")
model.add(layers.Dense(embed_dim, use_bias=False,
input_shape=(embed_dim,)))
model.add(layers.Dense(6272, use_bias=False,
input_shape=(embed_dim,)))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Reshape((7, 7, 128)))
model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1),
padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(64, (5, 5), strides=(2, 2),
padding='same', use_bias=False))
model.add(layers.BatchNormalization())
model.add(layers.LeakyReLU())
model.add(layers.Conv2DTranspose(1, (5, 5), strides=(2, 2), padding='same',
use_bias=False, activation='tanh'))
assert model.output_shape == (None, 28, 28, 1)
return model
Explanation: The Decoder
The decoder uses tf.keras.layers.Conv2DTranspose (upsampling) layers to produce an image from the latent space. We will start with a Dense layer with the same input shape as embed_dim, then upsample several times until you reach the desired image size of 28x28x1.
Exercise. Complete the code below to create the decoder model. Start with a Dense layer that takes as input a tensor of size embed_dim. Use tf.keras.layers.Conv2DTranspose over multiple layers to upsample so that the final layer has shape 28x28x1 (the shape of our original MNIST digits).
Hint: Experiment with using BatchNormalization or different activation functions like LeakyReLU.
End of explanation
ae_model = tf.keras.models.Sequential([make_encoder(EMBED_DIM), make_decoder(EMBED_DIM)])
Explanation: Finally, we stitch the encoder and decoder models together to create our autoencoder.
End of explanation
ae_model.summary()
make_encoder(EMBED_DIM).summary()
make_decoder(EMBED_DIM).summary()
Explanation: Using .summary() we can have a high-level summary of the full autoencoder model as well as the individual encoder and decoder. Note how the shapes of the tensors mirror each other as data is passed through the encoder and then the decoder.
End of explanation
#TODO 2.
def loss(model, original):
reconstruction_error = tf.reduce_mean(
tf.square(tf.subtract(model(original), original)))
return reconstruction_error
Explanation: Next, we define the loss for our autoencoder model. The loss we will use is the reconstruction error. This loss is similar to the MSE loss we've commonly use for regression. Here we are applying this error pixel-wise to compare the original MNIST image and the image reconstructed from the decoder.
End of explanation
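Written out, the reconstruction error computed above is just the mean squared error between each input image $x$ and its reconstruction $\hat{x} = \mathrm{decoder}(\mathrm{encoder}(x))$, averaged over pixels (and over the batch in the code):

$$\mathcal{L}(x) = \frac{1}{H W}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(\hat{x}_{ij} - x_{ij}\right)^{2}$$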
optimizer = tf.keras.optimizers.SGD(lr=LR)
Explanation: Optimizer for the autoencoder
Next we define the optimizer for model, specifying the learning rate.
End of explanation
checkpoint_dir = "./ae_training_checkpoints"
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=ae_model)
Explanation: Save checkpoints
This notebook also demonstrates how to save and restore models, which can be helpful in case a long running training task is interrupted.
End of explanation
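Only the saving side is exercised below; restoring the most recent checkpoint is a short sketch using the same objects (guarded so it does nothing if no checkpoint has been written yet):

```python
latest = tf.train.latest_checkpoint(checkpoint_dir)
if latest is not None:
    checkpoint.restore(latest)
```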
#TODO 3.
@tf.function
def train_step(images):
    # Record the forward pass so the tape can differentiate the loss.
    with tf.GradientTape() as tape:
        reconstruction_loss = loss(ae_model, images)
    # Compute gradients outside the tape context, then apply them.
    ae_gradients = tape.gradient(reconstruction_loss,
                                 ae_model.trainable_variables)
    gradient_variables = zip(ae_gradients, ae_model.trainable_variables)
    optimizer.apply_gradients(gradient_variables)
Explanation: Define the training loop
Next, we define the training loop for training our autoencoder. The train step will use tf.GradientTape() to keep track of gradient steps through training.
Exercise.
Complete the code below to define the training loop for our autoencoder. Notice the use of tf.function below. This annotation causes the function train_step to be "compiled". The train_step function takes as input a batch of images and passes them through the ae_model. The gradient is then computed on the loss against the ae_model output and the original image. In the code below, you should
- define ae_gradients. This is the gradient of the autoencoder loss with respect to the variables of the ae_model.
- create the gradient_variables by assigning each ae_gradient computed above to it's respective training variable.
- apply the gradient step using the optimizer
End of explanation
def train(dataset, epochs):
for epoch in range(epochs):
start = time.time()
for image_batch in dataset:
train_step(image_batch)
# Produce images for the GIF as we go
display.clear_output(wait=True)
generate_and_save_images(ae_model,
epoch + 1,
test_images[:16, :, :, :])
# Save the model every 5 epochs
if (epoch + 1) % 5 == 0:
checkpoint.save(file_prefix=checkpoint_prefix)
print ('Time for epoch {} is {} sec'.format(
epoch + 1, time.time()-start))
# Generate after the final epoch
display.clear_output(wait=True)
generate_and_save_images(ae_model,
epochs,
test_images[:16, :, :, :])
Explanation: We use the train_step function above to define training of our autoencoder. Note here, the train function takes as argument the tf.data dataset and the number of epochs for training.
End of explanation
def generate_and_save_images(model, epoch, test_input):
# Notice `training` is set to False.
# This is so all layers run in inference mode (batchnorm).
predictions = model(test_input, training=False)
fig = plt.figure(figsize=(4,4))
for i in range(predictions.shape[0]):
plt.subplot(4, 4, i+1)
pixels = predictions[i, :, :] * 127.5 + 127.5
pixels = np.array(pixels, dtype='float')
pixels = pixels.reshape((28,28))
plt.imshow(pixels, cmap='gray')
plt.axis('off')
plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
plt.show()
Explanation: Generate and save images.
We'll use a small helper function to generate images and save them.
End of explanation
generate_and_save_images(ae_model, 4, test_images[:16, :, :, :])
Explanation: Let's see how our model performs before any training. We'll take as input the first 16 digits of the MNIST test set. Right now they just look like random noise.
End of explanation
#TODO 4.
train(train_dataset, EPOCHS)
Explanation: Train the model
Call the train() method defined above to train the autoencoder model.
We'll print the resulting images as training progresses. At the beginning of the training, the decoded images look like random noise. As training progresses, the model outputs will look increasingly better. After about 50 epochs, they resemble MNIST digits. This may take about one or two minutes / epoch
End of explanation
# Display a single image using the epoch number
def display_image(epoch_no):
    # Path matches the one used in generate_and_save_images above.
    return PIL.Image.open('image_at_epoch_{:04d}.png'.format(epoch_no))
display_image(EPOCHS)
anim_file = 'autoencoder.gif'
with imageio.get_writer(anim_file, mode='I') as writer:
    filenames = glob.glob('image_at_epoch_*.png')  # match the save path used above
    filenames = sorted(filenames)
    last = -1
    for i, filename in enumerate(filenames):
        frame = 2*(i**0.5)
        if round(frame) > round(last):
            last = frame
        else:
            continue
        image = imageio.imread(filename)
        writer.append_data(image)
    image = imageio.imread(filename)
    writer.append_data(image)
import IPython
if IPython.version_info > (6,2,0,''):
display.Image(filename=anim_file)
Explanation: Create a GIF
Lastly, we'll create a gif that shows the progression of our produced images through training.
End of explanation |
11,823 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Summary
Do I sleep less on nights when I play Ultimate frisbee?
Step1: Lots of variation here. Can I relate this to Ultimate frisbee? Spring hat league started on April 14 and played once a week on Fridays until May 19. Meanwhile, I started playing summer club league on May 2. We play twice a week on Tuesdays and Thursdays with our last game on August 17.
Step2: Monday = 0, Sunday = 6. Saturday, Sunday, and Wednesday stand out as days where I have fewer Very Active minutes, but there is no obvious evidence that Tuesday and Thursday are days where I am running around chasing plastic for 1-2 hours. I suspect that part of the challenge here is that I ride my bike to work every day. It's 15 - 20 minutes each way, so if that time on the bike goes in to the "Very Active" bin according to Fitbit, then it will be mixed in with ultimate frisbee minutes. I might be able to filter out bike rides by looking at the start time of each activity. However, I will need to go back to the Fitbit API to extract that information. | Python Code:
import pandas as pd
import os
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
import nba_py
sns.set_context('poster')
import plotly.offline as py
import plotly.graph_objs as go
py.init_notebook_mode(connected=True)
data_path = os.path.join(os.getcwd(), os.pardir, 'data', 'interim', 'sleep_data.csv')
df_sleep = pd.read_csv(data_path, index_col='shifted_datetime', parse_dates=True)
df_sleep.index += pd.Timedelta(hours=12)
sleep_day = df_sleep.resample('1D').sum().fillna(0)
data_path = os.path.join(os.getcwd(), os.pardir, 'data', 'interim', 'activity_data.csv')
df_activity = pd.read_csv(data_path, index_col='datetime', parse_dates=True)
df_activity.columns
toplot = df_activity['minutesVeryActive']
data = []
data.append(
go.Scatter(
x=toplot.index,
y=toplot.values,
name='Minutes Very Active'
)
)
layout = go.Layout(
title="Daily Very Active Minutes",
yaxis=dict(
title='Minutes'
),
)
fig = {
'data': data,
'layout': layout,
}
py.iplot(fig, filename='DailyVeryActiveMinutes')
Explanation: Summary
Do I sleep less on nights when I play Ultimate frisbee?
End of explanation
dayofweek = df_activity.index.dayofweek
index_summerleague = df_activity.index >= '2017-05-02'
df_activity_summer = df_activity[index_summerleague]
summer_dayofweek = df_activity_summer.index.dayofweek
df_activity_summer['dayofweek'] = summer_dayofweek
df_activity_summer.groupby('dayofweek').mean()
Explanation: Lots of variation here. Can I relate this to Ultimate frisbee? Spring hat league started on April 14 and played once a week on Fridays until May 19. Meanwhile, I started playing summer club league on May 2. We play twice a week on Tuesdays and Thursdays with our last game on August 17.
End of explanation
df_activity_summer.groupby('dayofweek').std()
Explanation: Monday = 0, Sunday = 6. Saturday, Sunday, and Wednesday stand out as days where I have fewer Very Active minutes, but there is no obvious evidence that Tuesday and Thursday are days where I am running around chasing plastic for 1-2 hours. I suspect that part of the challenge here is that I ride my bike to work every day. It's 15 - 20 minutes each way, so if that time on the bike goes in to the "Very Active" bin according to Fitbit, then it will be mixed in with ultimate frisbee minutes. I might be able to filter out bike rides by looking at the start time of each activity. However, I will need to go back to the Fitbit API to extract that information.
End of explanation |
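A hedged sketch of that filtering idea, assuming a hypothetical per-activity frame (say df_activities with a startTime column) pulled later from the Fitbit activity-list endpoint — the column name and the commute windows are assumptions, not data that exists in this notebook yet:

```python
def drop_commute_rides(df_activities):
    # Hypothetical: 'startTime' would come from the Fitbit activity-list API.
    start = pd.to_datetime(df_activities['startTime'])
    is_weekday = start.dt.dayofweek < 5
    # Assumed commute windows: morning and evening rides to/from work.
    is_commute_hour = start.dt.hour.isin([8, 9, 17, 18])
    return df_activities[~(is_weekday & is_commute_hour)]
```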
11,824 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Creating my own version of the dogs_cats_redux notebook in order to make my own entry into the Kaggle competition.
My dir structure is similar, but not exactly the same
Step1: Note
Step2: Create validation set and sample
ONLY DO THIS ONCE.
Step3: Rearrange image files into their respective directories
ONLY DO THIS ONCE.
Step4: Finetuning and Training
OKAY, ITERATE HERE
Step5: if you are training, stay here. if you are loading & creating submission skip down from here.
Step6: ```
Results of ft1.h5
0 val_loss
Step7: Validate Predictions
Step8: Generate Predictions
Step9: Submit Predictions to Kaggle! | Python Code:
#Verify we are in the lesson1 directory
%pwd
%matplotlib inline
import os, sys
sys.path.insert(1, os.path.join(sys.path[0], '../utils'))
from utils import *
from vgg16 import Vgg16
from PIL import Image
from keras.preprocessing import image
from sklearn.metrics import confusion_matrix
Explanation: Creating my own version of the dogs_cats_redux notebook in order to make my own entry into the Kaggle competition.
My dir structure is similar, but not exactly the same:
utils
dogscats (not lesson1)
data
(no extra redux subdir)
train
test
End of explanation
current_dir = os.getcwd()
LESSON_HOME_DIR = current_dir
DATA_HOME_DIR = current_dir+'/data'
Explanation: Note: had to comment out vgg16bn in utils.py (whatever that is)
End of explanation
from shutil import copyfile
#Create directories
%cd $DATA_HOME_DIR
# did this once
#%mkdir valid
#%mkdir results
#%mkdir -p sample/train
#%mkdir -p sample/test
#%mkdir -p sample/valid
#%mkdir -p sample/results
#%mkdir -p test/unknown
%cd $DATA_HOME_DIR/train
# create validation set by renaming 2000
g = glob('*.jpg')
shuf = np.random.permutation(g)
NUM_IMAGES=len(g)
NUM_VALID = 2000 # roughly 10% of NUM_IMAGES, fixed at 2000 here
NUM_TRAIN = NUM_IMAGES-NUM_VALID
print("total=%d train=%d valid=%d"%(NUM_IMAGES,NUM_TRAIN,NUM_VALID))
for i in range(NUM_VALID):
os.rename(shuf[i], DATA_HOME_DIR+'/valid/' + shuf[i])
# copy a small sample
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(200): copyfile(shuf[i], DATA_HOME_DIR+'/sample/train/' + shuf[i])
%cd $DATA_HOME_DIR/valid
g = glob('*.jpg')
shuf = np.random.permutation(g)
for i in range(50): copyfile(shuf[i], DATA_HOME_DIR+'/sample/valid/' + shuf[i])
!ls {DATA_HOME_DIR}/train/ |wc -l
!ls {DATA_HOME_DIR}/valid/ |wc -l
!ls {DATA_HOME_DIR}/sample/train/ |wc -l
!ls {DATA_HOME_DIR}/sample/valid/ |wc -l
Explanation: Create validation set and sample
ONLY DO THIS ONCE.
End of explanation
#Divide cat/dog images into separate directories
%cd $DATA_HOME_DIR/sample/train
%mkdir cats
%mkdir dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
%cd $DATA_HOME_DIR/sample/valid
%mkdir cats
%mkdir dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
%cd $DATA_HOME_DIR/valid
%mkdir cats
%mkdir dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
%cd $DATA_HOME_DIR/train
%mkdir cats
%mkdir dogs
%mv cat.*.jpg cats/
%mv dog.*.jpg dogs/
# Create single 'unknown' class for test set
%cd $DATA_HOME_DIR/test
%mv *.jpg unknown/
!ls {DATA_HOME_DIR}/test
Explanation: Rearrange image files into their respective directories
ONLY DO THIS ONCE.
End of explanation
%cd $DATA_HOME_DIR
#Set path to sample/ path if desired
path = DATA_HOME_DIR + '/' #'/sample/'
test_path = DATA_HOME_DIR + '/test/' #We use all the test data
results_path=DATA_HOME_DIR + '/results/'
train_path=path + '/train/'
valid_path=path + '/valid/'
vgg = Vgg16()
#Set constants. You can experiment with no_of_epochs to improve the model
batch_size=64
no_of_epochs=2
#Finetune the model
batches = vgg.get_batches(train_path, batch_size=batch_size)
val_batches = vgg.get_batches(valid_path, batch_size=batch_size*2)
vgg.finetune(batches)
#Not sure if we set this for all fits
vgg.model.optimizer.lr = 0.01
#Notice we are passing in the validation dataset to the fit() method
#For each epoch we test our model against the validation set
latest_weights_filename = None
#vgg.model.load_weights('/home/rallen/Documents/PracticalDL4C/courses/deeplearning1/nbs/data/dogscats/models/first.h5')
#vgg.model.load_weights(results_path+'ft1.h5')
latest_weights_filename='ft24.h5'
vgg.model.load_weights(results_path+latest_weights_filename)
Explanation: Finetuning and Training
OKAY, ITERATE HERE
End of explanation
# if you have run some epochs already...
epoch_offset=12 # trying again from ft1
for epoch in range(no_of_epochs):
print "Running epoch: %d" % (epoch + epoch_offset)
vgg.fit(batches, val_batches, nb_epoch=1)
latest_weights_filename = 'ft%d.h5' % (epoch + epoch_offset)
vgg.model.save_weights(results_path+latest_weights_filename)
print "Completed %s fit operations" % no_of_epochs
Explanation: if you are training, stay here. if you are loading & creating submission skip down from here.
End of explanation
# only if you have to
latest_weights_filename='ft1.h5'
vgg.model.load_weights(results_path+latest_weights_filename)
Explanation: ```
Results of ft1.h5
0 val_loss: 0.2122 val_acc: 0.9830
1 val_loss: 0.1841 val_acc: 0.9855
[[987 7]
[ 20 986]]
--
2 val_loss: 0.2659 val_acc: 0.9830
3 val_loss: 0.2254 val_acc: 0.9850
4 val_loss: 0.2072 val_acc: 0.9845
[[975 19]
[ 11 995]]
Results of first0.h5
0 val_loss: 0.2425 val_acc: 0.9830
[[987 7]
[ 27 979]]
```
End of explanation
val_batches, probs = vgg.test(valid_path, batch_size = batch_size)
filenames = val_batches.filenames
expected_labels = val_batches.classes #0 or 1
#Round our predictions to 0/1 to generate labels
our_predictions = probs[:,0]
our_labels = np.round(1-our_predictions)
cm = confusion_matrix(expected_labels, our_labels)
plot_confusion_matrix(cm, val_batches.class_indices)
#Helper function to plot images by index in the validation set
#Plots is a helper function in utils.py
def plots_idx(idx, titles=None):
plots([image.load_img(valid_path + filenames[i]) for i in idx], titles=titles)
#Number of images to view for each visualization task
n_view = 4
#1. A few correct labels at random
correct = np.where(our_labels==expected_labels)[0]
print "Found %d correct labels" % len(correct)
idx = permutation(correct)[:n_view]
plots_idx(idx, our_predictions[idx])
#2. A few incorrect labels at random
incorrect = np.where(our_labels!=expected_labels)[0]
print "Found %d incorrect labels" % len(incorrect)
idx = permutation(incorrect)[:n_view]
plots_idx(idx, our_predictions[idx])
#3a. The images we were most confident were cats, and are actually cats
correct_cats = np.where((our_labels==0) & (our_labels==expected_labels))[0]
print "Found %d confident correct cats labels" % len(correct_cats)
most_correct_cats = np.argsort(our_predictions[correct_cats])[::-1][:n_view]
plots_idx(correct_cats[most_correct_cats], our_predictions[correct_cats][most_correct_cats])
#3b. The images we most confident were dogs, and are actually dogs
correct_dogs = np.where((our_labels==1) & (our_labels==expected_labels))[0]
print "Found %d confident correct dogs labels" % len(correct_dogs)
most_correct_dogs = np.argsort(our_predictions[correct_dogs])[:n_view]
plots_idx(correct_dogs[most_correct_dogs], our_predictions[correct_dogs][most_correct_dogs])
#4a. The images we were most confident were cats, but are actually dogs
incorrect_cats = np.where((our_labels==0) & (our_labels!=expected_labels))[0]
print "Found %d incorrect cats" % len(incorrect_cats)
if len(incorrect_cats):
most_incorrect_cats = np.argsort(our_predictions[incorrect_cats])[::-1][:n_view]
plots_idx(incorrect_cats[most_incorrect_cats], our_predictions[incorrect_cats][most_incorrect_cats])
#4b. The images we were most confident were dogs, but are actually cats
incorrect_dogs = np.where((our_labels==1) & (our_labels!=expected_labels))[0]
print "Found %d incorrect dogs" % len(incorrect_dogs)
if len(incorrect_dogs):
most_incorrect_dogs = np.argsort(our_predictions[incorrect_dogs])[:n_view]
plots_idx(incorrect_dogs[most_incorrect_dogs], our_predictions[incorrect_dogs][most_incorrect_dogs])
#5. The most uncertain labels (ie those with probability closest to 0.5).
most_uncertain = np.argsort(np.abs(our_predictions-0.5))
plots_idx(most_uncertain[:n_view], our_predictions[most_uncertain])
Explanation: Validate Predictions
End of explanation
batches, preds = vgg.test(test_path, batch_size = batch_size*2)
# Error allocating 3347316736 bytes of device memory (out of memory).
# got this error when batch-size = 128
# I see this pop up to 6GB memory with batch_size = 64 & this takes some time...
#For every image, vgg.test() generates two probabilities
#based on how we've ordered the cats/dogs directories.
#It looks like column one is cats and column two is dogs
print preds[:5]
filenames = batches.filenames
print filenames[:5]
#You can verify the column ordering by viewing some images
Image.open(test_path + filenames[1])
#Save our test results arrays so we can use them again later
save_array(results_path + 'test_preds.dat', preds)
save_array(results_path + 'filenames.dat', filenames)
Explanation: Generate Predictions
End of explanation
#Load our test predictions from file
preds = load_array(results_path + 'test_preds.dat')
filenames = load_array(results_path + 'filenames.dat')
#Grab the dog prediction column
isdog = preds[:,1]
print "Raw Predictions: " + str(isdog[:5])
print "Mid Predictions: " + str(isdog[(isdog < .6) & (isdog > .4)])
print "Edge Predictions: " + str(isdog[(isdog == 1) | (isdog == 0)])
#play it safe, round down our edge predictions
#isdog = isdog.clip(min=0.05, max=0.95)
#isdog = isdog.clip(min=0.02, max=0.98)
isdog = isdog.clip(min=0.01, max=0.99)
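# Why clip? The competition is scored with log loss, which punishes confident
# mistakes: a wrong answer at p=1.0 gets clipped to ~1e-15 and costs about
# -log(1e-15) ~ 34.5, a wrong answer at p=0.99 costs -log(0.01) ~ 4.6, while a
# correct answer at p=0.99 costs only -log(0.99) ~ 0.01.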
#Extract imageIds from the filenames in our test/unknown directory
filenames = batches.filenames
ids = np.array([int(f[8:f.find('.')]) for f in filenames])
subm = np.stack([ids,isdog], axis=1)
subm[:5]
%cd $DATA_HOME_DIR
submission_file_name = 'submission4.csv'
np.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='')
from IPython.display import FileLink
%cd $LESSON_HOME_DIR
FileLink('data/'+submission_file_name)
Explanation: Submit Predictions to Kaggle!
End of explanation |
11,825 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
3D Plots, in Python
We first load in the toolboxes for numerical python and plotting.
Step1: Plotting a simple surface.
Let's plot the hyperbola $z = x^2 - y^2$.
The 3D plotting commands expect arrays as entries, so we create a mesh grid from linear variables $x$ and $y$, resulting in arrays $X$ and $Y$. We then compute $Z$ as a grid (array).
Step2: I don't know how to simply plot in matplotlib.
Instead, we have three steps
- create a figure
- indicate that the figure will be in 3D
- then send the plot_surface command to the figure axis.
Step3: Wireframe plots
Use the wireframe command. Note we can adjust the separation between lines using the stride parameters.
Step4: Subplots
To make two plots, side-by-side, you make one figure and add two subplots. (I'm reusing the object label "ax" in the code here.)
Step5: Parameterized surfaces
A parameterized surface expresses the spatial variables $x,y,z$ as functions of two independent parameters, say $u$ and $v$.
Here we plot a sphere. Use the usual spherical coordinates.
$$x = \cos(u)\sin(v) $$
$$y = \sin(u)\sin(v) $$
$$z = \cos(v) $$
with appropriate ranges for $u$ and $v$. We set up the array variables as follows
Step6: Outer product for speed
Python provides an outer product, which makes it easy to multiply the $u$ vector by the $v$ vector to create the 2D array of grid values. This is sometimes useful for speed, so you may see it in other people's code when they really need the speed. Here is an example.
Step7: A donut
Let's plot a torus. The idea is to start with a circle
$$ x_0 = \cos(u) $$
$$ y_0 = \sin(u) $$
$$ z_0 = 0$$
then add a little circle perpendicular to it
$$ (x_0,y_0,0)\cos(v) + (0,0,1)\sin(v) = (\cos(u)\cos(v), \sin(u)\cos(v), \sin(v)).$$
Add them, with a scaling. | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
Explanation: 3D Plots, in Python
We first load in the toolboxes for numerical python and plotting.
End of explanation
# Make data
x = np.linspace(-2, 2, 100)
y = np.linspace(-2, 2, 100)
X, Y = np.meshgrid(x, y)
Z = X**2 - Y**2
Explanation: Plotting a simple surface.
Let's plot the hyperbola $z = x^2 - y^2$.
The 3D plotting commands expect arrays as entries, so we create a mesh grid from linear variables $x$ and $y$, resulting in arrays $X$ and $Y$. We then compute $Z$ as a grid (array).
End of explanation
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_surface(X, Y, Z, color='b')
Explanation: I don't know how to simply plot in matplotlib.
Instead, we have three steps
- create a figure
- indicate that the figure will be in 3D
- then send the plot_surface command to the figure axis.
End of explanation
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
Explanation: Wireframe plots
Use the wireframe command. Note we can adjust the separation between lines using the stride parameters.
End of explanation
fig = plt.figure(figsize=plt.figaspect(0.3))
ax = fig.add_subplot(121, projection='3d')
ax.plot_surface(X, Y, Z, color='b')
ax = fig.add_subplot(122, projection='3d')
ax.plot_wireframe(X, Y, Z, rstride=10, cstride=10)
Explanation: Subplots
To make two plots, side-by-side, you make one figure and add two subplots. (I'm reusing the object label "ax" in the code here.)
End of explanation
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
U,V = np.meshgrid(u,v)
X = np.cos(U) * np.sin(V)
Y = np.sin(U) * np.sin(V)
Z = np.cos(V)
fig = plt.figure()
ax = plt.axes(projection='3d')
# Plot the surface
ax.plot_surface(X, Y, Z, color='b')
Explanation: Parameterized surfaces
A parameterized surfaces expresses the spatial variable $x,y,z$ as a function of two independent parameters, say $u$ and $v$.
Here we plot a sphere. Use the usual spherical coordinates.
$$x = \cos(u)\sin(v) $$
$$y = \sin(u)\sin(v) $$
$$z = \cos(v) $$
with appropriate ranges for $u$ and $v$. We set up the array variables as follows:
End of explanation
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, np.pi, 100)
X = np.outer(np.cos(u), np.sin(v))
Y = np.outer(np.sin(u), np.sin(v))
Z = np.outer(np.ones(np.size(u)), np.cos(v))
fig = plt.figure()
ax = plt.axes(projection='3d')
# Plot the surface
ax.plot_surface(X, Y, Z, color='b')
Explanation: Outer product for speed
Python provides an outer product, which makes it easy to multiply the $u$ vector by the $v$ vector to create the 2D array of grid values. This is sometimes useful for speed, so you may see it in other people's code when they really need the speed. Here is an example.
End of explanation
# Make data
u = np.linspace(0, 2 * np.pi, 100)
v = np.linspace(0, 2 * np.pi, 100)
U,V = np.meshgrid(u,v)
R = 10
r = 4
X = R * np.cos(U) + r*np.cos(U)*np.cos(V)
Y = R * np.sin(U) + r*np.sin(U)*np.cos(V)
Z = r * np.sin(V)
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.set_xlim([-(R+r), (R+r)])
ax.set_ylim([-(R+r), (R+r)])
ax.set_zlim([-(R+r), (R+r)])
ax.plot_surface(X, Y, Z, color='c')
Explanation: A donut
Let's plot a torus. The idea is to start with a circle
$$ x_0 = \cos(u) $$
$$ y_0 = \sin(u) $$
$$ z_0 = 0$$
then add a little circle perpendicular to it
$$ (x_0,y_0,0)\cos(v) + (0,0,1)\sin(v) = (\cos(u)\cos(v), \sin(u)\cos(v), \sin(v)).$$
Add them, with a scaling.
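With major radius $R$ and minor radius $r$ (as used in the code above), the combined parameterization works out to
$$ (x, y, z) = \bigl((R + r\cos v)\cos u,\; (R + r\cos v)\sin u,\; r\sin v \bigr). $$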
End of explanation |
11,826 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:28
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
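For illustration, a filled-in call might look like the line below; the name and email are hypothetical placeholders and should be replaced with the real document author's details.
# Hypothetical example only - substitute the actual author's name and email
DOC.set_author("Jane Doe", "[email protected]")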
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
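As a purely illustrative sketch (not a statement about this particular model), an ENUM property of cardinality 1.1 is filled in by passing one of the listed valid choices verbatim to DOC.set_value:
# Hypothetical example - choose the entry that actually applies to the model
DOC.set_value("OGCM")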
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
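A BOOLEAN property takes a Python True or False rather than a quoted string; the value below is a placeholder for illustration, not a claim about this configuration.
# Hypothetical example - set according to the model's actual bathymetry treatment
DOC.set_value(True)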
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? if so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specify order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specify coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient, (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for the sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation |
11,827 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
SYDE 556/750
Step1: Some sort of mapping between neural activity and a state in the world
my location
head tilt
image
remembered location
Intuitively, we call this "representation"
In neuroscience, people talk about the 'neural code'
To formalize this notion, the NEF uses information theory (or coding theory)
Critical obvious difference between this and ANNs
Step2: Rectified Linear Neuron
Step3: Leaky integrate-and-fire neuron
$ a = {1 \over {\tau_{ref}-\tau_{RC}ln(1-{1 \over J})}}$
Step4: Response functions
These are called "response functions"
How much neural firing changes with change in current
Similar for many classes of cells (e.g. pyramidal cells - most of cortex)
This is the $G_i$ function in the NEF
Step5: For mapping #1, the NEF uses a linear map
Step6: But that's not how people normally plot it
It might not make sense to sample every possible x
Instead they might do some subset
For example, what if we just plot the points around the unit circle?
Step7: That starts looking a lot more like the real data.
Notation
Encoding
$a_i = G_i[\alpha_i x \cdot e_i + J^{bias}_i]$
Decoding
$\hat{x} = \sum_i a_i d_i$
The textbook uses $\phi$ for $d$ and $\tilde \phi$ for $e$
We're switching to $d$ (for decoder) and $e$ (for encoder)
Decoder
But where do we get $d_i$ from?
$\hat{x}=\sum a_i d_i$
Find the optimal $d_i$
How?
Math
Solving for $d$
Minimize the average error over all $x$, i.e.,
$ E = \frac{1}{2}\int_{-1}^1 (x-\hat{x})^2 \; dx $
Substitute for $\hat{x}$
Step8: What happens to the error with more or fewer neurons?
Noise
Neurons aren't perfect
Axonal jitter
Neurotransmitter vesicle release failure (~80%)
Amount of neurotransmitter per vesicle
Thermal noise
Ion channel noise (# of channels open and closed)
Network effects
More information
Step9: What if we just increase the number of neurons? Will it help?
Taking noise into account
Include noise while solving for decoders
Introduce noise term $\eta$
$
\begin{align}
\hat{x} &= \sum_i(a_i+\eta)d_i \
E &= {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 \;dx d\eta\
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i(a_i+\eta)d_i\right)^2 \;dx d\eta\
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i - \sum_i \eta_i d_i \right)^2 \;dx d\eta
\end{align}
$
- Assume noise is gaussian, independent, mean zero, and has the same variance for each neuron
- $\eta = \mathcal{N}(0, \sigma)$
- All the noise cross-terms disappear (independent)
$
\begin{align}
E &= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sum_{i,j} d_i d_j \langle \eta_i \eta_j \rangle_\eta \
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sum_{i} d_i d_i \langle \eta_i \eta_i \rangle_\eta
\end{align}
$
Since the average of $\eta_i \eta_i$ noise is its variance (since the mean is zero), $\sigma^2$, we get
$
\begin{align}
E = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sigma^2 \sum_i d_i^2
\end{align}
$
The practical result is that, when computing the decoder, we get
$
\begin{align}
\Gamma_{ij} = \sum_x a_i a_j / S + \sigma^2 \delta_{ij}
\end{align}
$
Where $\delta_{ij}$ is the Kronecker delta
Step10: Number of neurons
What happens to the error with more neurons?
Note that the error has two parts
Step11: How good is the representation?
Step12: Possible questions
How many neurons do we need for a particular level of accuracy?
What happens with different firing rates?
What happens with different distributions of x-intercepts?
Example 2 | Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('KE952yueVLA', width=720, height=400, loop=1, autoplay=0)
from IPython.display import YouTubeVideo
YouTubeVideo('lfNVv0A8QvI', width=720, height=400, loop=1, autoplay=0)
Explanation: SYDE 556/750: Simulating Neurobiological Systems
Accompanying Readings: Chapter 2
NEF Principle 1 - Representation
Activity of neurons change over time
<img src="files/lecture2/spikes.jpg" width="800">
This probably means something
Sometimes it seems pretty clear what it means
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('hxdPdKbqm_I', width=720, height=400, loop=1, autoplay=0)
Explanation: Some sort of mapping between neural activity and a state in the world
my location
head tilt
image
remembered location
Intuitively, we call this "representation"
In neuroscience, people talk about the 'neural code'
To formalize this notion, the NEF uses information theory (or coding theory)
Critical obvious difference between this and ANNs:
Time is an intrinsic part of the representation
We start by ignoring this, but it is a theoretical debt we have to repay
Representation formalism
Value being represented: $x$
Neural activity: $a$
Neuron index: $i$
Encoding and decoding
Have to define both to define a code
Lossless code (e.g. Morse Code):
encoding: $a = f(x)$
decoding: $x = f^{-1}(a)$
Lossy code:
encoding: $a = f(x)$
decoding: $\hat{x} = g(a) \approx x$
Distributed representation
Not just one neuron per $x$ value (or per $x$)
Many different $a$ values for a single $x$
Encoding: $a_i = f_i(x)$
Decoding: $\hat{x} = g(a_0, a_1, a_2, a_3, ...)$
Example: binary representation
Encoding (nonlinear):
$$
a_i = \begin{cases}
1 &\mbox{if } x \mod {2^{i}} \geq 2^{i-1} \\
0 &\mbox{otherwise}
\end{cases}
$$
Decoding (linear):
$$
\hat{x} = \sum_i a_i 2^{i-1}
$$
Suppose: $x = 13$
Encoding:
$a_1 = 1$, $a_2 = 0$, $a_3 = 1$, $a_4 = 1$
Decoding:
$\hat{x} = 1{\cdot}1+0{\cdot}2+1{\cdot}4+1{\cdot}8 = 13$
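The same toy example in a few lines of Python (a quick sketch; bit $i$ carries weight $2^{i-1}$):
x = 13
a = [1 if x % 2**i >= 2**(i-1) else 0 for i in range(1, 5)]      # encode: [1, 0, 1, 1]
xhat = sum(a_i * 2**(i-1) for i, a_i in enumerate(a, start=1))   # decode: 13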
Linear decoding
Write decoder as $\hat{x} = \sum_ia_i d_i$
Linear decoding is nice and simple
Works fine with non-linear encoding (!)
The NEF uses linear decoding, but what about the encoding?
Neuron encoding
$a_i = f_i(x)$
What do we know about neurons?
<img src="files/lecture1/NeuronStructure.jpg">
Firing rate goes up as total input current goes up
$a_i = G_i(J)$
What is $G_i$?
depends on how detailed a neuron model we want.
End of explanation
# Rectified linear neuron
%pylab inline
import numpy
import nengo
n = nengo.neurons.RectifiedLinear()
J = numpy.linspace(-1,10,100)
plot(J, n.rates(J, gain=30, bias=-25))
xlabel('J (current)')
ylabel('$a$ (Hz)');
Explanation: Rectified Linear Neuron
End of explanation
#assume %pylab inline has been run
# Leaky integrate and fire
import numpy
import nengo
n = nengo.neurons.LIFRate(tau_rc=0.02, tau_ref=0.002) #n is a Nengo LIF neuron, these are defaults
J = numpy.linspace(-1,10,100)
plot(J, n.rates(J, gain=1, bias=-2))
xlabel('J (current)')
ylabel('$a$ (Hz)');
Explanation: Leaky integrate-and-fire neuron
$ a = {1 \over {\tau_{ref}-\tau_{RC}\ln\left(1-{1 \over J}\right)}}$
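Evaluating the rate equation directly reproduces what the nengo cell above computes (a minimal sketch using the same default time constants; valid for $J > 1$):
import numpy
tau_ref, tau_rc = 0.002, 0.02
J = 2.0
a = 1.0 / (tau_ref - tau_rc * numpy.log(1 - 1.0/J))
print(a)   # roughly 63 spikes per second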
End of explanation
#assume this has been run
#%pylab inline
import numpy
import nengo
n = nengo.neurons.LIFRate() #n is a Nengo LIF neuron
x = numpy.linspace(-100,0,100)
plot(x, n.rates(x, gain=1, bias=50), 'b') # x*1+50
plot(x, n.rates(x, gain=0.1, bias=10), 'r') # x*0.1+10
plot(x, n.rates(x, gain=0.5, bias=5), 'g') # x*0.5+5
plot(x, n.rates(x, gain=0.1, bias=4), 'c') # x*0.1+4
xlabel('x')
ylabel('a');
Explanation: Response functions
These are called "response functions"
How much neural firing changes with change in current
Similar for many classes of cells (e.g. pyramidal cells - most of cortex)
This is the $G_i$ function in the NEF: it can be pretty much anything
Tuning Curves
Neurons seem to be sensitive to particular values of $x$
How are neurons 'tuned' to a representation? or...
What's the mapping between $x$ and $a$?
Recall 'place cells', and 'edge detectors'
Sometimes they are fairly straight forward:
<img src="files/lecture2/tuning_curve_auditory.gif">
But not often:
<img src="files/lecture2/tuning_curve.jpg">
<img src="files/lecture2/orientation_tuning.png">
Is there a general form?
Tuning curves (cont.)
The NEF suggests that there is...
Something generic and simple
That covers all the above cases (and more)
Let's start with the simpler case:
<img src="files/lecture2/tuning_curve_auditory.gif">
Note that the experimenters are graphing $a$, as a function of $x$
$x$ is much easier to measure than $J$
So, there are two mappings of interest:
$x$->$J$
$J$->$a$ (response function)
Together these give the tuning curve
$x$ is the volume of the sound in this case
Any ideas?
End of explanation
#assume this has been run
#%pylab inline
import numpy
import nengo
n = nengo.neurons.LIFRate()
e = numpy.array([1.0, 1.0])
e = e/numpy.linalg.norm(e)
a = numpy.linspace(-1,1,50)
b = numpy.linspace(-1,1,50)
X,Y = numpy.meshgrid(a, b)
from mpl_toolkits.mplot3d.axes3d import Axes3D
fig = figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
p = ax.plot_surface(X, Y, n.rates((X*e[0]+Y*e[1]), gain=1, bias=1.5),
linewidth=0, cstride=1, rstride=1, cmap=pylab.cm.jet)
Explanation: For mapping #1, the NEF uses a linear map:
$ J = \alpha x + J^{bias} $
But what about type (c) in this graph?
<img src="files/lecture2/tuning_curve.jpg">
Easy enough:
$ J = - \alpha x + J^{bias} $
But what about type(b)? Or these ones?
<img src="files/lecture2/orientation_tuning.png">
There's usually some $x$ which gives a maximum firing rate
...and thus a maximum $J$
Firing rate (and $J$) decrease as you get farther from the preferred $x$ value
So something like $J = \alpha [sim(x, x_{pref})] + J^{bias}$
What sort of similarity measure?
Let's think about $x$ for a moment
$x$ can be anything... scalar, vector, etc.
Does thinking of it as a vector help?
The Encoding Equation (i.e. Tuning Curves)
Here is the general form we use for everything (it has both 'mappings' in it)
$a_i = G_i[\alpha_i x \cdot e_i + J_i^{bias}] $
$\alpha$ is a gain term (constrained to always be positive)
$J^{bias}$ is a constant bias term
$e$ is the encoder, or the preferred direction vector
$G$ is the neuron model
$i$ indexes the neuron
To simplify life, we always assume $e$ is of unit length
Otherwise we could combine $\alpha$ and $e$
In the 1D case, $e$ is either +1 or -1
In higher dimensions, what happens?
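Before answering that, here is the encoding equation written out for a single neuron in 2D (a sketch with illustrative parameter values; the resulting current $J$ is what then gets passed through $G$, e.g. n.rates, as in the next cell):
import numpy
alpha, J_bias = 1.0, 1.5
e = numpy.array([1.0, 1.0]) / numpy.sqrt(2)   # unit-length preferred direction vector
x = numpy.array([0.3, -0.2])                  # the 2D value being encoded
J = alpha * numpy.dot(x, e) + J_bias          # input current for this neuron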
End of explanation
import nengo
import numpy
n = nengo.neurons.LIFRate()
theta = numpy.linspace(0, 2*numpy.pi, 100)
x = numpy.array([numpy.cos(theta), numpy.sin(theta)])
plot(x[0],x[1])
axis('equal')
e = numpy.array([-1.0, 1.0])
e = e/numpy.linalg.norm(e)
plot([0,e[0]], [0,e[1]],'r')
gain = 1
bias = 2.5
figure()
plot(theta, n.rates(numpy.dot(x.T, e), gain=gain, bias=bias))
plot([numpy.arctan2(e[1],e[0])],0,'rv')
xlabel('angle')
ylabel('firing rate')
xlim(0, 2*numpy.pi);
Explanation: But that's not how people normally plot it
It might not make sense to sample every possible x
Instead they might do some subset
For example, what if we just plot the points around the unit circle?
End of explanation
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 10
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates=Uniform(100,200)) #Defaults to LIF neurons,
#with random gains and biases for
#neurons between 100-200hz over -1,1
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqL2(reg=0)) #reg=0 means ignore noise
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
xhat = numpy.dot(A, d)
pyplot.plot(x, A)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
figure()
plot(x, xhat-x)
xlabel('$x$')
ylabel('$\hat{x}-x$')
xlim(-1, 1)
print('RMSE', np.sqrt(np.average((x-xhat)**2)))
Explanation: That starts looking a lot more like the real data.
Notation
Encoding
$a_i = G_i[\alpha_i x \cdot e_i + J^{bias}_i]$
Decoding
$\hat{x} = \sum_i a_i d_i$
The textbook uses $\phi$ for $d$ and $\tilde \phi$ for $e$
We're switching to $d$ (for decoder) and $e$ (for encoder)
Decoder
But where do we get $d_i$ from?
$\hat{x}=\sum a_i d_i$
Find the optimal $d_i$
How?
Math
Solving for $d$
Minimize the average error over all $x$, i.e.,
$ E = \frac{1}{2}\int_{-1}^1 (x-\hat{x})^2 \; dx $
Substitute for $\hat{x}$:
$
\begin{align}
E = \frac{1}{2}\int_{-1}^1 \left(x-\sum_i^N a_i d_i \right)^2 \; dx
\end{align}
$
Take the derivative with respect to $d_i$:
$
\begin{align}
{{\partial E} \over {\partial d_i}} &= {1 \over 2} \int_{-1}^1 2 \left[ x-\sum_j a_j d_j \right] (-a_i) \; dx \
{{\partial E} \over {\partial d_i}} &= - \int_{-1}^1 a_i x \; dx + \int_{-1}^1 \sum_j a_j d_j a_i \; dx
\end{align}
$
At the minimum (i.e. smallest error), $ {{\partial E} \over {\partial d_i}} = 0$
$
\begin{align}
\int_{-1}^1 a_i x \; dx &= \int_{-1}^1 \sum_j(a_j d_j a_i) \; dx \
\int_{-1}^1 a_i x \; dx &= \sum_j \left(\int_{-1}^1 a_i a_j \; dx\right)d_j
\end{align}
$
That's a system of $N$ equations and $N$ unknowns
In fact, we can rewrite this in matrix form
$ \Upsilon = \Gamma d $
where
$
\begin{align}
\Upsilon_i &= {1 \over 2} \int_{-1}^1 a_i x \;dx\
\Gamma_{ij} &= {1 \over 2} \int_{-1}^1 a_i a_j \;dx
\end{align}
$
Do we have to do the integral over all $x$?
Approximate the integral by sampling over $x$
$S$ is the number of $x$ values to use ($S$ for samples)
$
\begin{align}
\sum_x a_i x / S &= \sum_j \left(\sum_x a_i a_j /S \right)d_j \
\Upsilon &= \Gamma d
\end{align}
$
where
$
\begin{align}
\Upsilon_i &= \sum_x a_i x / S \
\Gamma_{ij} &= \sum_x a_i a_j / S
\end{align}
$
Notice that if $A$ is the matrix of activities (the firing rate for each neuron for each $x$ value), then $\Gamma=A^T A / S$ and $\Upsilon=A^T x / S$
So given
$ \Upsilon = \Gamma d $
then
$ d = \Gamma^{-1} \Upsilon $
or, equivalently
$ d_i = \sum_j \Gamma^{-1}_{ij} \Upsilon_j $
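The same computation written directly in NumPy (a self-contained sketch with stand-in rectified-linear activities; in practice $A$ and $x$ come from the tuning curves, as in the cell below):
import numpy
S, N = 100, 20                                     # samples and neurons (illustrative sizes)
x = numpy.linspace(-1, 1, S)
e = numpy.random.uniform(-1, 1, N)                 # stand-in encoders/gains
bias = numpy.random.uniform(0, 1, N)
A = numpy.maximum(0, numpy.outer(x, e) + bias)     # stand-in activity matrix, one column per neuron
Gamma = numpy.dot(A.T, A) / S
Upsilon = numpy.dot(A.T, x) / S
d = numpy.linalg.lstsq(Gamma, Upsilon, rcond=None)[0]   # solve Gamma d = Upsilon
xhat = numpy.dot(A, d)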
End of explanation
#Have to run previous python cell first
A_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
pyplot.plot(x, A_noisy)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
print('RMSE', np.sqrt(np.average((x-xhat)**2)))
Explanation: What happens to the error with more or fewer neurons?
Noise
Neurons aren't perfect
Axonal jitter
Neurotransmitter vesicle release failure (~80%)
Amount of neurotransmitter per vesicle
Thermal noise
Ion channel noise (# of channels open and closed)
Network effects
More information: http://icwww.epfl.ch/~gerstner/SPNM/node33.html
How do we include this noise as well?
Make the neuron model more complicated
Simple approach: add gaussian random noise to $a_i$
Set noise standard deviation $\sigma$ to 20% of maximum firing rate
Each $a_i$ value for each $x$ value gets a different noise value added to it
What effect does this have on decoding?
End of explanation
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 100
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates=Uniform(100,200)) #Defaults to LIF neurons,
#with random gains and biases for
#neurons between 100-200hz over -1,1
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqNoise(noise=0.2)) #Add noise ###NEW
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
A_noisy = A + numpy.random.normal(scale=0.2*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
pyplot.plot(x, A_noisy)
xlabel('x')
ylabel('firing rate (Hz)')
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1)
print('RMSE', np.sqrt(np.average((x-xhat)**2)))
Explanation: What if we just increase the number of neurons? Will it help?
Taking noise into account
Include noise while solving for decoders
Introduce noise term $\eta$
$
\begin{align}
\hat{x} &= \sum_i(a_i+\eta)d_i \
E &= {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 \;dx d\eta\
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i(a_i+\eta)d_i\right)^2 \;dx d\eta\
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i - \sum_i \eta_i d_i \right)^2 \;dx d\eta
\end{align}
$
- Assume noise is gaussian, independent, mean zero, and has the same variance for each neuron
- $\eta = \mathcal{N}(0, \sigma)$
- All the noise cross-terms disappear (independent)
$
\begin{align}
E &= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sum_{i,j} d_i d_j \langle \eta_i \eta_j \rangle_\eta \
&= {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sum_{i} d_i d_i \langle \eta_i \eta_i \rangle_\eta
\end{align}
$
Since the average of $\eta_i \eta_i$ noise is its variance (since the mean is zero), $\sigma^2$, we get
$
\begin{align}
E = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sigma^2 \sum_i d_i^2
\end{align}
$
The practical result is that, when computing the decoder, we get
$
\begin{align}
\Gamma_{ij} = \sum_x a_i a_j / S + \sigma^2 \delta_{ij}
\end{align}
$
Where $\delta_{ij}$ is the Kronecker delta: http://en.wikipedia.org/wiki/Kronecker_delta
To simplify computing this using matrices, this can be written as $\Gamma=A^T A /S + \sigma^2 I$
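In code, the only change from the noise-free decoder is the added diagonal term (a sketch, assuming A, x, and S as in the NumPy sketch earlier, with sigma set to 20% of the peak rate):
sigma = 0.2 * numpy.max(A)
Gamma = numpy.dot(A.T, A) / S + sigma**2 * numpy.eye(A.shape[1])
Upsilon = numpy.dot(A.T, x) / S
d = numpy.linalg.lstsq(Gamma, Upsilon, rcond=None)[0]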
End of explanation
#%pylab inline
import numpy
import nengo
from nengo.utils.ensemble import tuning_curves
from nengo.dists import Uniform
N = 10
tau_rc = 20
tau_ref = .001
lif_model = nengo.LIFRate(tau_rc=tau_rc, tau_ref=tau_ref)
model = nengo.Network(label='Neurons')
with model:
neurons = nengo.Ensemble(N, dimensions=1,
max_rates = Uniform(250,300),
neuron_type = lif_model)
sim = nengo.Simulator(model)
x, A = tuning_curves(neurons, sim)
plot(x, A)
xlabel('x')
ylabel('firing rate (Hz)');
Explanation: Number of neurons
What happens to the error with more neurons?
Note that the error has two parts:
$
\begin{align}
E = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 \;dx + \sigma^2 \sum_i d_i^2
\end{align}
$
Error due to static distortion (i.e. the error introduced by the decoders themselves)
This is present regardless of noise
$
\begin{align}
E_{distortion} = {1 \over 2} \int_{-1}^1 \left(x-\sum_i a_i d_i \right)^2 dx
\end{align}
$
Error due to noise
$
\begin{align}
E_{noise} = \sigma^2 \sum_i d_i^2
\end{align}
$
What do these look like as number of neurons $N$ increases?
<img src="files/lecture2/repn_noise.png" width="800">
- Noise error is proportional to $1/N$
- Distortion error is proportional to $1/N^2$
- Remember this error $E$ is defined as
$ E = {1 \over 2} \int_{-1}^1 (x-\hat{x})^2 dx $
So that's actually a squared error term
Also, as number of neurons is greater than 100 or so, the error is dominated by the noise term ($1/N$).
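The two error terms are easy to compute side by side (a sketch, reusing x, A, and d from the decoding cells above and the 20% noise level):
sigma = 0.2 * numpy.max(A)
E_distortion = numpy.mean((x - numpy.dot(A, d).flatten())**2)   # approximates (1/2) * integral of (x-xhat)^2 over [-1,1]
E_noise = sigma**2 * numpy.sum(d**2)
print(E_distortion, E_noise)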
Examples
Methodology for building models with the Neural Engineering Framework (outlined in Chapter 1)
System Description: Describe the system of interest in terms of the neural data, architecture, computations, representations, etc. (e.g. response functions, tuning curves, etc.)
Design Specification: Add additional performance constraints (e.g. bandwidth, noise, SNR, dynamic range, stability, etc.)
Implement the model: Employ the NEF principles given the System Description and Design Specification
Example 1: Horizontal Eye Control (1D)
From http://www.nature.com/nrn/journal/v3/n12/full/nrn986.html
<img src="files/lecture2/horizontal_eye.jpg">
There are also neurons whose response goes the other way. All of the neurons are directly connected to the muscle controlling the horizontal direction of the eye, and that's the only thing that muscle does, so we're pretty sure this is what's being represented.
System Description
We've only done the first NEF principle, so that's all we'll worry about
What is being represented?
$x$ is the horizontal position
Tuning curves: extremely linear (high $\tau_{RC}$, low $\tau_{ref}$)
some have $e=1$, some have $e=-1$
these are often called "on" and "off" neurons, respectively
Firing rates of up to 300Hz
Design Specification
Range of values for $x$: -60 degrees to +60 degrees
Normal levels of noise: $\sigma$ is 20% of maximum firing rate
the book goes a bit higher, with $\sigma^2=0.1$, meaning that $\sigma = \sqrt{0.1} \approx 0.32$ times the maximum firing rate
Implementation
Examine the tuning curves
Then use principle 1
End of explanation
#Have to run previous code cell first
noise = 0.2
with model:
connection = nengo.Connection(neurons, neurons, #This is just to generate the decoders
solver=nengo.solvers.LstsqNoise(noise=0.2)) #Add noise ###NEW
sim = nengo.Simulator(model)
d = sim.data[connection].weights.T
x, A = tuning_curves(neurons, sim)
A_noisy = A + numpy.random.normal(scale=noise*numpy.max(A), size=A.shape)
xhat = numpy.dot(A_noisy, d)
print('RMSE with %d neurons is %g'%(N, np.sqrt(np.average((x-xhat)**2))))
figure()
plot(x, x)
plot(x, xhat)
xlabel('$x$')
ylabel('$\hat{x}$')
ylim(-1, 1)
xlim(-1, 1);
Explanation: How good is the representation?
End of explanation
import numpy
import nengo
n = nengo.neurons.LIFRate()
theta = numpy.linspace(-numpy.pi, numpy.pi, 100)
x = numpy.array([numpy.sin(theta), numpy.cos(theta)])
e = numpy.array([-1.0, 0])
plot(theta*180/numpy.pi, n.rates(numpy.dot(x.T, e), bias=1., gain=0.2)) #bias 1->1.5
xlabel('angle')
ylabel('firing rate')
xlim(-180, 180)
show()
Explanation: Possible questions
How many neurons do we need for a particular level of accuracy?
What happens with different firing rates?
What happens with different distributions of x-intercepts?
Example 2: Arm Movements (2D)
Georgopoulos et al., 1982. "On the relations between the direction of two-dimensional arm movements and cell discharge in primate motor cortex."
<img src="files/lecture2/armmovement1.jpg">
<img src="files/lecture2/armmovement2.png">
<img src="files/lecture2/armtuningcurve.png">
System Description
What is being represented?
$x$ is the hand position
Note that this is different from what Georgopoulos talks about in this initial paper
Initial paper only looks at those 8 positions, so it only talks about direction of movement (angle but not magnitude)
More recent work in the same area shows the cells do respond to both (Fu et al, 1993; Messier and Kalaska, 2000)
Bell-shaped tuning curves
Encoders: randomly distributed around the unit circle
Firing rates of up to 60Hz
Design Specification
Range of values for $x$: Anywhere within a unit circle (or perhaps some other radius)
Normal levels of noise: $\sigma$ is 20% of maximum firing rate
the book goes a bit higher, with $\sigma^2=0.1$, meaning that $\sigma = \sqrt{0.1} \approx 0.32$ times the maximum firing rate
Implementation
Examine the tuning curves
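One simple way to generate the randomly distributed unit-circle encoders mentioned above (a sketch assuming 50 neurons):
import numpy
angles = numpy.random.uniform(-numpy.pi, numpy.pi, 50)
encoders = numpy.array([numpy.cos(angles), numpy.sin(angles)]).T   # one unit-length 2D encoder per neuron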
End of explanation |
11,828 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference
Step1: imports for Python, Pandas
Step2: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source
Step3: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source
Step4: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
Step5: Top 10 Countries with most projects | Python Code:
import pandas as pd
Explanation: JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference: http://pandas-docs.github.io/pandas-docs-travis/io.html#json
data source: http://jsonstudio.com/resources/
End of explanation
import json
from pandas.io.json import json_normalize
Explanation: imports for Python, Pandas
End of explanation
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
Explanation: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source: http://pandas-docs.github.io/pandas-docs-travis/io.html#normalization
End of explanation
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
Explanation: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source: http://jsonstudio.com/resources/
End of explanation
# load json data frame
dataFrame = pd.read_json('data/world_bank_projects.json')
dataFrame
dataFrame.info()
dataFrame.columns
Explanation: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
End of explanation
dataFrame.groupby(dataFrame.countryshortname).count().sort('_id', ascending=False).head(10)
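# An equivalent, more direct answer for question 1 (same dataFrame as above):
dataFrame['countryshortname'].value_counts().head(10)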
themeNameCode = []
for codes in dataFrame.mjtheme_namecode:
themeNameCode += codes
themeNameCode = json_normalize(themeNameCode)
themeNameCode['count']=themeNameCode.groupby('code').transform('count')
themeNameCode.sort('count').drop_duplicates().head(10)
dataFrame = pd.read_json('data/world_bank_projects.json')
#Create dictionary Code:Name to replace empty names.
codeNameDict = {}
for codes in dataFrame.mjtheme_namecode:
for code in codes:
if code['name']!='':
codeNameDict[code['code']]=code['name']
index=0
for codes in dataFrame.mjtheme_namecode:
innerIndex=0
for code in codes:
if code['name']=='':
print ("Code name empty ", code['code'])
dataFrame.mjtheme_namecode[index][innerIndex]['name']=codeNameDict[code['code']]
innerIndex += 1
index += 1
dataFrame.mjtheme_namecode
themeNameCode = []
for codes in dataFrame.mjtheme_namecode:
    # print(codes)
    themeNameCode += codes
themeNameCode
# themeNameCode = json_normalize(themeNameCode)
# themeNameCode['count']=themeNameCode.groupby('code').transform('count')
# themeNameCode.sort('count').drop_duplicates().head(10)
Explanation: Top 10 Countries with most projects
End of explanation |
11,829 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
Neural Network Exercises - SOLUTIONS
For these exercises we'll perform a binary classification on the Census Income dataset available from the <a href = 'http
Step1: 1. Separate continuous, categorical and label column names
You should find that there are 5 categorical columns, 2 continuous columns and 1 label.<br>
In the case of <em>education</em> and <em>education-num</em> it doesn't matter which column you use. For the label column, be sure to use <em>label</em> and not <em>income</em>.<br>
Assign the variable names "cat_cols", "cont_cols" and "y_col" to the lists of names.
Step2: 2. Convert categorical columns to category dtypes
Step3: Optional
Step4: 3. Set the embedding sizes
Create a variable "cat_szs" to hold the number of categories in each variable.<br>
Then create a variable "emb_szs" to hold the list of (category size, embedding size) tuples.
Step5: 4. Create an array of categorical values
Create a NumPy array called "cats" that contains a stack of each categorical column <tt>.cat.codes.values</tt><br>
Note
Step6: 5. Convert "cats" to a tensor
Convert the "cats" NumPy array to a tensor of dtype <tt>int64</tt>
Step7: 6. Create an array of continuous values
Create a NumPy array called "conts" that contains a stack of each continuous column.<br>
Note
Step8: 7. Convert "conts" to a tensor
Convert the "conts" NumPy array to a tensor of dtype <tt>float32</tt>
Step9: 8. Create a label tensor
Create a tensor called "y" from the values in the label column. Be sure to flatten the tensor so that it can be passed into the CE Loss function.
Step10: 9. Create train and test sets from <tt>cats</tt>, <tt>conts</tt>, and <tt>y</tt>
We use the entire batch of 30,000 records, but a smaller batch size will save time during training.<br>
We used a test size of 5,000 records, but you can choose another fixed value or a percentage of the batch size.<br>
Make sure that your test records remain separate from your training records, without overlap.<br>
To make coding slices easier, we recommend assigning batch and test sizes to simple variables like "b" and "t".
Step11: Define the model class
Run the cell below to define the TabularModel model class we've used before.
Step12: 10. Set the random seed
To obtain results that can be recreated, set a torch manual_seed (we used 33).
Step13: 11. Create a TabularModel instance
Create an instance called "model" with one hidden layer containing 50 neurons and a dropout layer p-value of 0.4
Step14: 12. Define the loss and optimization functions
Create a loss function called "criterion" using CrossEntropyLoss<br>
Create an optimization function called "optimizer" using Adam, with a learning rate of 0.001
Step15: Train the model
Run the cell below to train the model through 300 epochs. Remember, results may vary!<br>
After completing the exercises, feel free to come back to this section and experiment with different parameters.
Step16: 13. Plot the Cross Entropy Loss against epochs
Results may vary. The shape of the plot is what matters.
Step17: 14. Evaluate the test set
With torch set to <tt>no_grad</tt>, pass <tt>cat_test</tt> and <tt>con_test</tt> through the trained model. Create a validation set called "y_val". Compare the output to <tt>y_test</tt> using the loss function defined above. Results may vary.
Step18: 15. Calculate the overall percent accuracy
Using a for loop, compare the argmax values of the <tt>y_val</tt> validation set to the <tt>y_test</tt> set.
Step19: BONUS | Python Code:
import torch
import torch.nn as nn
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.utils import shuffle
%matplotlib inline
df = pd.read_csv('../Data/income.csv')
print(len(df))
df.head()
df['label'].value_counts()
Explanation: <img src="../Pierian-Data-Logo.PNG">
<br>
<strong><center>Copyright 2019. Created by Jose Marcial Portilla.</center></strong>
Neural Network Exercises - SOLUTIONS
For these exercises we'll perform a binary classification on the Census Income dataset available from the <a href = 'http://archive.ics.uci.edu/ml/datasets/Adult'>UC Irvine Machine Learning Repository</a><br>
The goal is to determine if an individual earns more than $50K based on a set of continuous and categorical variables.
<div class="alert alert-danger" style="margin: 10px"><strong>IMPORTANT NOTE!</strong> Make sure you don't run the cells directly above the example output shown, <br>otherwise you will end up writing over the example output!</div>
Census Income Dataset
For this exercises we're using the Census Income dataset available from the <a href='http://archive.ics.uci.edu/ml/datasets/Adult'>UC Irvine Machine Learning Repository</a>.
The full dataset has 48,842 entries. For this exercise we have reduced the number of records, fields and field entries, and have removed entries with null values. The file <strong>income.csv</strong> has 30,000 entries
Each entry contains the following information about an individual:
* <strong>age</strong>: the age of an individual as an integer from 18 to 90 (continuous)
* <strong>sex</strong>: Male or Female (categorical)
* <strong>education</strong>: represents the highest level of education achieved by an individual (categorical)
* <strong>education_num</strong>: represents education as an integer from 3 to 16 (categorical)
<div><table style="display: inline-block">
<tr><td>3</td><td>5th-6th</td><td>8</td><td>12th</td><td>13</td><td>Bachelors</td></tr>
<tr><td>4</td><td>7th-8th</td><td>9</td><td>HS-grad</td><td>14</td><td>Masters</td></tr>
<tr><td>5</td><td>9th</td><td>10</td><td>Some-college</td><td>15</td><td>Prof-school</td></tr>
<tr><td>6</td><td>10th</td><td>11</td><td>Assoc-voc</td><td>16</td><td>Doctorate</td></tr>
<tr><td>7</td><td>11th</td><td>12</td><td>Assoc-acdm</td></tr>
</table></div>
<strong>marital-status</strong>: marital status of an individual (categorical)
<div><table style="display: inline-block">
<tr><td>Married</td><td>Divorced</td><td>Married-spouse-absent</td></tr>
<tr><td>Separated</td><td>Widowed</td><td>Never-married</td></tr>
</table></div>
<strong>workclass</strong>: a general term to represent the employment status of an individual (categorical)
<div><table style="display: inline-block">
<tr><td>Local-gov</td><td>Private</td></tr>
<tr><td>State-gov</td><td>Self-emp</td></tr>
<tr><td>Federal-gov</td></tr>
</table></div>
<strong>occupation</strong>: the general type of occupation of an individual (categorical)
<div><table style="display: inline-block">
<tr><td>Adm-clerical</td><td>Handlers-cleaners</td><td>Protective-serv</td></tr>
<tr><td>Craft-repair</td><td>Machine-op-inspct</td><td>Sales</td></tr>
<tr><td>Exec-managerial</td><td>Other-service</td><td>Tech-support</td></tr>
<tr><td>Farming-fishing</td><td>Prof-specialty</td><td>Transport-moving</td></tr>
</table></div>
<strong>hours-per-week</strong>: the hours an individual has reported to work per week as an integer from 20 to 90 (continuous)
<strong>income</strong>: whether or not an individual makes more than \$50,000 annually (label)
<strong>label</strong>: income represented as an integer (0: <=\$50K, 1: >\$50K) (optional label)
Perform standard imports
Run the cell below to load the libraries needed for this exercise and the Census Income dataset.
End of explanation
df.columns
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS:
print(f'cat_cols has {len(cat_cols)} columns')
print(f'cont_cols has {len(cont_cols)} columns')
print(f'y_col has {len(y_col)} column')
# DON'T WRITE HERE
cat_cols = ['sex', 'education', 'marital-status', 'workclass', 'occupation']
cont_cols = ['age', 'hours-per-week']
y_col = ['label']
print(f'cat_cols has {len(cat_cols)} columns') # 5
print(f'cont_cols has {len(cont_cols)} columns') # 2
print(f'y_col has {len(y_col)} column') # 1
Explanation: 1. Separate continuous, categorical and label column names
You should find that there are 5 categorical columns, 2 continuous columns and 1 label.<br>
In the case of <em>education</em> and <em>education-num</em> it doesn't matter which column you use. For the label column, be sure to use <em>label</em> and not <em>income</em>.<br>
Assign the variable names "cat_cols", "cont_cols" and "y_col" to the lists of names.
End of explanation
# CODE HERE
# DON'T WRITE HERE
for cat in cat_cols:
df[cat] = df[cat].astype('category')
Explanation: 2. Convert categorical columns to category dtypes
End of explanation
# THIS CELL IS OPTIONAL
df = shuffle(df, random_state=101)
df.reset_index(drop=True, inplace=True)
df.head()
Explanation: Optional: Shuffle the dataset
The <strong>income.csv</strong> dataset is already shuffled. However, if you would like to try different configurations after completing the exercises, this is where you would want to shuffle the entire set.
End of explanation
# CODE HERE
# DON'T WRITE HERE
cat_szs = [len(df[col].cat.categories) for col in cat_cols]
emb_szs = [(size, min(50, (size+1)//2)) for size in cat_szs]
emb_szs
Explanation: 3. Set the embedding sizes
Create a variable "cat_szs" to hold the number of categories in each variable.<br>
Then create a variable "emb_szs" to hold the list of (category size, embedding size) tuples.
End of explanation
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
cats[:5]
# DON'T WRITE HERE
sx = df['sex'].cat.codes.values
ed = df['education'].cat.codes.values
ms = df['marital-status'].cat.codes.values
wc = df['workclass'].cat.codes.values
oc = df['occupation'].cat.codes.values
cats = np.stack([sx,ed,ms,wc,oc], 1)
cats[:5]
Explanation: 4. Create an array of categorical values
Create a NumPy array called "cats" that contains a stack of each categorical column <tt>.cat.codes.values</tt><br>
Note: your output may contain different values. Ours came after performing the shuffle step shown above.
End of explanation
# CODE HERE
# DON'T WRITE HERE
cats = torch.tensor(cats, dtype=torch.int64)
Explanation: 5. Convert "cats" to a tensor
Convert the "cats" NumPy array to a tensor of dtype <tt>int64</tt>
End of explanation
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
conts[:5]
# DON'T WRITE HERE
conts = np.stack([df[col].values for col in cont_cols], 1)
conts[:5]
Explanation: 6. Create an array of continuous values
Create a NumPy array called "conts" that contains a stack of each continuous column.<br>
Note: your output may contain different values. Ours came after performing the shuffle step shown above.
End of explanation
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
conts.dtype
# DON'T WRITE HERE
conts = torch.tensor(conts, dtype=torch.float)
conts.dtype
Explanation: 7. Convert "conts" to a tensor
Convert the "conts" NumPy array to a tensor of dtype <tt>float32</tt>
End of explanation
# CODE HERE
# DON'T WRITE HERE
y = torch.tensor(df[y_col].values).flatten()
Explanation: 8. Create a label tensor
Create a tensor called "y" from the values in the label column. Be sure to flatten the tensor so that it can be passed into the CE Loss function.
End of explanation
# CODE HERE
b = 30000 # suggested batch size
t = 5000 # suggested test size
# DON'T WRITE HERE
b = 30000 # suggested batch size
t = 5000 # suggested test size
cat_train = cats[:b-t]
cat_test = cats[b-t:b]
con_train = conts[:b-t]
con_test = conts[b-t:b]
y_train = y[:b-t]
y_test = y[b-t:b]
Explanation: 9. Create train and test sets from <tt>cats</tt>, <tt>conts</tt>, and <tt>y</tt>
We use the entire batch of 30,000 records, but a smaller batch size will save time during training.<br>
We used a test size of 5,000 records, but you can choose another fixed value or a percentage of the batch size.<br>
Make sure that your test records remain separate from your training records, without overlap.<br>
To make coding slices easier, we recommend assigning batch and test sizes to simple variables like "b" and "t".
End of explanation
class TabularModel(nn.Module):
def __init__(self, emb_szs, n_cont, out_sz, layers, p=0.5):
# Call the parent __init__
super().__init__()
# Set up the embedding, dropout, and batch normalization layer attributes
self.embeds = nn.ModuleList([nn.Embedding(ni, nf) for ni,nf in emb_szs])
self.emb_drop = nn.Dropout(p)
self.bn_cont = nn.BatchNorm1d(n_cont)
# Assign a variable to hold a list of layers
layerlist = []
# Assign a variable to store the number of embedding and continuous layers
n_emb = sum((nf for ni,nf in emb_szs))
n_in = n_emb + n_cont
# Iterate through the passed-in "layers" parameter (ie, [200,100]) to build a list of layers
for i in layers:
layerlist.append(nn.Linear(n_in,i))
layerlist.append(nn.ReLU(inplace=True))
layerlist.append(nn.BatchNorm1d(i))
layerlist.append(nn.Dropout(p))
n_in = i
layerlist.append(nn.Linear(layers[-1],out_sz))
# Convert the list of layers into an attribute
self.layers = nn.Sequential(*layerlist)
def forward(self, x_cat, x_cont):
# Extract embedding values from the incoming categorical data
embeddings = []
for i,e in enumerate(self.embeds):
embeddings.append(e(x_cat[:,i]))
x = torch.cat(embeddings, 1)
# Perform an initial dropout on the embeddings
x = self.emb_drop(x)
# Normalize the incoming continuous data
x_cont = self.bn_cont(x_cont)
x = torch.cat([x, x_cont], 1)
# Set up model layers
x = self.layers(x)
return x
Explanation: Define the model class
Run the cell below to define the TabularModel model class we've used before.
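To see what each embedding layer inside the ModuleList does on its own, here is a minimal stand-alone illustration (sizes are hypothetical, not taken from the dataset):
emb = nn.Embedding(7, 4)           # 7 categories mapped to 4-dimensional dense vectors
codes = torch.tensor([0, 3, 6])    # three category codes
print(emb(codes).shape)            # torch.Size([3, 4])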
End of explanation
# CODE HERE
# DON'T WRITE HERE
torch.manual_seed(33)
Explanation: 10. Set the random seed
To obtain results that can be recreated, set a torch manual_seed (we used 33).
End of explanation
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
model
# DON'T WRITE HERE
model = TabularModel(emb_szs, conts.shape[1], 2, [50], p=0.4)
model
Explanation: 11. Create a TabularModel instance
Create an instance called "model" with one hidden layer containing 50 neurons and a dropout layer p-value of 0.4
End of explanation
# CODE HERE
# DON'T WRITE HERE
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
Explanation: 12. Define the loss and optimization functions
Create a loss function called "criterion" using CrossEntropyLoss<br>
Create an optimization function called "optimizer" using Adam, with a learning rate of 0.001
End of explanation
import time
start_time = time.time()
epochs = 300
losses = []
for i in range(epochs):
i+=1
y_pred = model(cat_train, con_train)
loss = criterion(y_pred, y_train)
losses.append(loss)
# a neat trick to save screen space:
if i%25 == 1:
print(f'epoch: {i:3} loss: {loss.item():10.8f}')
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'epoch: {i:3} loss: {loss.item():10.8f}') # print the last line
print(f'\nDuration: {time.time() - start_time:.0f} seconds') # print the time elapsed
Explanation: Train the model
Run the cell below to train the model through 300 epochs. Remember, results may vary!<br>
After completing the exercises, feel free to come back to this section and experiment with different parameters.
End of explanation
# CODE HERE
# DON'T WRITE HERE
plt.plot(range(epochs), losses)
plt.ylabel('Cross Entropy Loss')
plt.xlabel('epoch');
Explanation: 13. Plot the Cross Entropy Loss against epochs
Results may vary. The shape of the plot is what matters.
End of explanation
# CODE HERE
# RUN THIS CODE TO COMPARE RESULTS
print(f'CE Loss: {loss:.8f}')
# TO EVALUATE THE TEST SET
with torch.no_grad():
y_val = model(cat_test, con_test)
loss = criterion(y_val, y_test)
print(f'CE Loss: {loss:.8f}')
Explanation: 14. Evaluate the test set
With torch set to <tt>no_grad</tt>, pass <tt>cat_test</tt> and <tt>con_test</tt> through the trained model. Create a validation set called "y_val". Compare the output to <tt>y_test</tt> using the loss function defined above. Results may vary.
End of explanation
# CODE HERE
# DON'T WRITE HERE
rows = len(y_test)
correct = 0
# print(f'{"MODEL OUTPUT":26} ARGMAX Y_TEST')
for i in range(rows):
# print(f'{str(y_val[i]):26} {y_val[i].argmax().item():^7}{y_test[i]:^7}')
if y_val[i].argmax().item() == y_test[i]:
correct += 1
print(f'\n{correct} out of {rows} = {100*correct/rows:.2f}% correct')
Explanation: 15. Calculate the overall percent accuracy
Using a for loop, compare the argmax values of the <tt>y_val</tt> validation set to the <tt>y_test</tt> set.
End of explanation
# WRITE YOUR CODE HERE:
# RUN YOUR CODE HERE:
# DON'T WRITE HERE
def test_data(mdl): # pass in the name of the model
# INPUT NEW DATA
age = float(input("What is the person's age? (18-90) "))
sex = input("What is the person's sex? (Male/Female) ").capitalize()
edn = int(input("What is the person's education level? (3-16) "))
mar = input("What is the person's marital status? ").capitalize()
wrk = input("What is the person's workclass? ").capitalize()
occ = input("What is the person's occupation? ").capitalize()
hrs = float(input("How many hours/week are worked? (20-90) "))
# PREPROCESS THE DATA
sex_d = {'Female':0, 'Male':1}
mar_d = {'Divorced':0, 'Married':1, 'Married-spouse-absent':2, 'Never-married':3, 'Separated':4, 'Widowed':5}
wrk_d = {'Federal-gov':0, 'Local-gov':1, 'Private':2, 'Self-emp':3, 'State-gov':4}
occ_d = {'Adm-clerical':0, 'Craft-repair':1, 'Exec-managerial':2, 'Farming-fishing':3, 'Handlers-cleaners':4,
'Machine-op-inspct':5, 'Other-service':6, 'Prof-specialty':7, 'Protective-serv':8, 'Sales':9,
'Tech-support':10, 'Transport-moving':11}
sex = sex_d[sex]
mar = mar_d[mar]
wrk = wrk_d[wrk]
occ = occ_d[occ]
# CREATE CAT AND CONT TENSORS
cats = torch.tensor([sex,edn,mar,wrk,occ], dtype=torch.int64).reshape(1,-1)
conts = torch.tensor([age,hrs], dtype=torch.float).reshape(1,-1)
# SET MODEL TO EVAL (in case this hasn't been done)
mdl.eval()
# PASS NEW DATA THROUGH THE MODEL WITHOUT PERFORMING A BACKPROP
with torch.no_grad():
z = mdl(cats, conts).argmax().item()
print(f'\nThe predicted label is {z}')
test_data(model)
Explanation: BONUS: Feed new data through the trained model
See if you can write a function that allows a user to input their own values, and generates a prediction.<br>
<strong>HINT</strong>:<br>There's no need to build a DataFrame. You can use inputs to populate column variables, convert them to embeddings with a context dictionary, and pass the embedded values directly into the tensor constructors:<br>
<pre>mar = input("What is the person's marital status? ")
mar_d = {'Divorced':0, 'Married':1, 'Married-spouse-absent':2, 'Never-married':3, 'Separated':4, 'Widowed':5}
mar = mar_d[mar]
cats = torch.tensor([..., ..., mar, ..., ...], dtype=torch.int64).reshape(1,-1)</pre>
Make sure that names are put in alphabetical order before assigning numbers.
Also, be sure to run <tt>model.eval()</tt> before passing new date through. Good luck!
End of explanation |
11,830 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Getting Started with ActivitySim
This getting started guide is a Jupyter notebook. It is an interactive Python 3 environment that describes how to set up, run, and begin to analyze the results of ActivitySim modeling scenarios. It is assumed users of ActivitySim are familiar with the basic concepts of activity-based modeling. This tutorial covers
Step1: Activate the Environment
Step2: Creating an Example Setup
The example is included in the package and can be copied to a user defined location using the package's command line interface. The example includes all model steps. The command below copies the example_mtc example to a new example folder. It also changes into the new example folder so we can run the model from there.
Step3: Run the Example
The code below runs the example, which runs in a few minutes. The example consists of 100 synthetic households and the first 25 zones in the example model region. The full example (example_mtc_full) can be created and downloaded from the activitysim resources repository using activitysim's create command above. As the model runs, it logs information to the screen.
To run the example, use activitysim's built-in run command. As shown in the script help, the default settings assume a configs, data, and output folder in the current directory.
Step4: Inputs and Outputs Overview
An ActivitySim model requires
Step5: Inputs
Run the commands below to
Step6: Outputs
Run the commands below to
Step7: Other notable outputs
Step8: Trip matrices
A write_trip_matrices step at the end of the model adds boolean indicator columns to the trip table in order to assign each trip into a trip matrix and then aggregates the trip counts and writes OD matrices to OMX (open matrix) files. The coding of trips into trip matrices is done via annotation expressions.
Step9: Tracing calculations
Tracing calculations is an important part of model setup and debugging. Often times data issues, such as missing values in input data and/or incorrect submodel expression files, do not reveal themselves until a downstream submodels fails. There are two types of tracing in ActivtiySim
Step10: Run the Multiprocessor Example
The command below runs the multiprocessor example, which runs in a few minutes. It uses settings inheritance to override setings in the configs folder with settings in the configs_mp folder. This allows for re-using expression files and settings files in the single and multiprocessed setups. The multiprocessed example uses the following additional settings | Python Code:
!conda create -n asim python=3.9 activitysim -c conda-forge --override-channels
Explanation: Getting Started with ActivitySim
This getting started guide is a Jupyter notebook. It is an interactive Python 3 environment that describes how to set up, run, and begin to analyze the results of ActivitySim modeling scenarios. It is assumed users of ActivitySim are familiar with the basic concepts of activity-based modeling. This tutorial covers:
Installation and setup
Setting up and running a base model
Inputs and outputs
Setting up and running an alternative scenario
Comparing results
Next steps and further reading
This notebook depends on Anaconda Python 3 64bit.
Install ActivitySim
The first step is to install activitysim from conda forge. This also installs dependent packages such as tables for reading/writing HDF5, openmatrix for reading/writing OMX matrix, and pyyaml for yaml settings files. The command below also creates an asim conda environment just for activitysim.
End of explanation
!conda activate asim
Explanation: Activate the Environment
End of explanation
!activitysim create -e example_mtc -d example
%cd example
Explanation: Creating an Example Setup
The example is included in the package and can be copied to a user defined location using the package's command line interface. The example includes all model steps. The command below copies the example_mtc example to a new example folder. It also changes into the new example folder so we can run the model from there.
End of explanation
!activitysim run -c configs -d data -o output
Explanation: Run the Example
The code below runs the example, which runs in a few minutes. The example consists of 100 synthetic households and the first 25 zones in the example model region. The full example (example_mtc_full) can be created and downloaded from the activitysim resources repository using activitysim's create command above. As the model runs, it logs information to the screen.
To run the example, use activitysim's built-in run command. As shown in the script help, the default settings assume a configs, data, and output folder in the current directory.
End of explanation
import os
for root, dirs, files in os.walk(".", topdown=False):
for name in files:
print(os.path.join(root, name))
for name in dirs:
print(os.path.join(root, name))
Explanation: Inputs and Outputs Overview
An ActivitySim model requires:
Configs: settings, model step expressions files, etc.
settings.yaml - main settings file for running the model
network_los.yaml - network level-of-service (skims) settings file
[model].yaml - configuration file for the model step (such as auto ownership)
[model].csv - expressions file for the model step
Data: input data - input data tables and skims
land_use.csv - zone data file
households.csv - synthethic households
persons.csv - synthethic persons
skims.omx - all skims in one open matrix file
Output: output data - output data, tables, tracing info, etc.
pipeline.h5 - data pipeline database file (all tables at each model step)
final_[table].csv - final household, person, tour, trip CSV tables
activitysim.log - console log file
trace.[model].csv - trace calculations for select households
simulation.py: main script to run the model
Run the command below to list the example folder contents.
End of explanation
print("Load libraries.")
import pandas as pd
import openmatrix as omx
import yaml
import glob
print("Display the settings file.\n")
with open(r'configs/settings.yaml') as file:
file_contents = yaml.load(file, Loader=yaml.FullLoader)
print(yaml.dump(file_contents))
print("Display the network_los file.\n")
with open(r'configs/network_los.yaml') as file:
file_contents = yaml.load(file, Loader=yaml.FullLoader)
print(yaml.dump(file_contents))
print("Input land_use. Primary key: TAZ. Required additional fields depend on the downstream submodels (and expression files).")
pd.read_csv("data/land_use.csv")
print("Input households. Primary key: HHID. Foreign key: TAZ. Required additional fields depend on the downstream submodels (and expression files).")
pd.read_csv("data/households.csv")
print("Input persons. Primary key: PERID. Foreign key: household_id. Required additional fields depend on the downstream submodels (and expression files).")
pd.read_csv("data/persons.csv")
print("Skims. All skims are input via one OMX file. Required skims depend on the downstream submodels (and expression files).\n")
print(omx.open_file("data/skims.omx"))
Explanation: Inputs
Run the commands below to:
* Load required Python libraries for reading data
* Display the settings.yaml, including the list of models to run
* Display the land_use, households, and persons tables
* Display the skims
End of explanation
print("The output pipeline contains the state of each table after each model step.")
pipeline = pd.io.pytables.HDFStore('output/pipeline.h5')
pipeline.keys()
print("Households table after trip mode choice, which contains several calculated fields.")
pipeline['/households/joint_tour_frequency'] #watch out for key changes if not running all models
print("Final output households table to written to CSV, which is the same as the table in the pipeline.")
pd.read_csv("output/final_households.csv")
print("Final output persons table to written to CSV, which is the same as the table in the pipeline.")
pd.read_csv("output/final_persons.csv")
print("Final output tours table to written to CSV, which is the same as the table in the pipeline. Joint tours are stored as one record.")
pd.read_csv("output/final_tours.csv")
print("Final output trips table to written to CSV, which is the same as the table in the pipeline. Joint trips are stored as one record")
pd.read_csv("output/final_trips.csv")
Explanation: Outputs
Run the commands below to:
* Display the output household and person tables
* Display the output tour and trip tables
End of explanation
print("Final output accessibility table to written to CSV.")
pd.read_csv("output/final_accessibility.csv")
print("Joint tour participants table, which contains the person ids of joint tour participants.")
pipeline['joint_tour_participants/joint_tour_participation']
print("Destination choice sample logsums table for school location if want_dest_choice_sample_tables=True.")
if '/school_location_sample/school_location' in pipeline:
pipeline['/school_location_sample/school_location']
Explanation: Other notable outputs
End of explanation
print("trip matrices by time of day for assignment")
output_files = os.listdir("output")
for output_file in output_files:
if "omx" in output_file:
print(output_file)
Explanation: Trip matrices
A write_trip_matrices step at the end of the model adds boolean indicator columns to the trip table in order to assign each trip into a trip matrix and then aggregates the trip counts and writes OD matrices to OMX (open matrix) files. The coding of trips into trip matrices is done via annotation expressions.
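To inspect the contents of one of these matrices, the same openmatrix call used for the input skims works here as well (a sketch; exact file names depend on the configured time periods):
trip_omx_files = [f for f in os.listdir("output") if f.endswith(".omx")]
if trip_omx_files:
    print(omx.open_file(os.path.join("output", trip_omx_files[0])))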
End of explanation
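# A quick peek inside one of the trip matrix files listed above, mirroring the
# earlier skims inspection. Which .omx files exist depends on the configured
# time periods, so we simply open the first one found (an assumption).
omx_files = [f for f in output_files if "omx" in f]
if omx_files:
    print(omx.open_file("output/" + omx_files[0]))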
print("All trace files.\n")
glob.glob("output/trace/*.csv")
print("Trace files for auto ownership.\n")
glob.glob("output/trace/auto_ownership*.csv")
print("Trace chooser data for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.choosers.csv")
print("Trace utility expression values for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.eval_utils.expression_values.csv")
print("Trace alternative total utilities for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.utilities.csv")
print("Trace alternative probabilities for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.probs.csv")
print("Trace random number for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.rands.csv")
print("Trace choice for auto ownership.\n")
pd.read_csv("output\\trace\\auto_ownership_simulate.simple_simulate.eval_mnl.choices.csv")
Explanation: Tracing calculations
Tracing calculations is an important part of model setup and debugging. Oftentimes data issues, such as missing values in input data and/or incorrect submodel expression files, do not reveal themselves until a downstream submodel fails. There are two types of tracing in ActivitySim: household and origin-destination (OD) pair. If a household trace ID is specified via trace_hh_id, then ActivitySim will output a comprehensive set of trace files for all calculations for all household members. These trace files are listed below and explained.
End of explanation
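# The household being traced is set by trace_hh_id in settings.yaml; a quick way
# to confirm which household the trace files above belong to (prints None if
# tracing is not configured).
with open(r'configs/settings.yaml') as file:
    print("trace_hh_id:", yaml.load(file, Loader=yaml.FullLoader).get('trace_hh_id'))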
!activitysim run -c configs_mp -c configs -d data -o output
Explanation: Run the Multiprocessor Example
The command below runs the multiprocessor example, which runs in a few minutes. It uses settings inheritance to override settings in the configs folder with settings in the configs_mp folder. This allows for re-using expression files and settings files in the single and multiprocessed setups. The multiprocessed example uses the following additional settings:
```
num_processes: 2
chunk_size: 0
chunk_training_mode: disabled
multiprocess_steps:
- name: mp_initialize
begin: initialize_landuse
- name: mp_households
begin: school_location
slice:
tables:
- households
- persons
- name: mp_summarize
begin: write_data_dictionary
```
In brief, num_processes specifies the number of processors to use, and a chunk_size of 0 plus a chunk_training_mode of disabled means ActivitySim is free to use all the available RAM if needed. The multiprocess_steps setting specifies the beginning, middle, and end steps of multiprocessing. The mp_initialize step is single-processed because there is no slice setting; it starts with the initialize_landuse submodel and runs up to the submodel named as the next multiprocess step's starting point, school_location. The mp_households step is multiprocessed: the households and persons tables are sliced and allocated to processes using the chunking settings, and the remaining submodels run multiprocessed until the final multiprocess step. The mp_summarize step is single-processed because there is no slice setting, and it writes outputs. See multiprocessing and chunk_size for more information.
End of explanation |
11,831 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Fixed End Forces
This module computes the fixed end forces (moments and shears) due to transverse loads
acting on a 2-D planar structural member.
Step7: Class EF
Instances of class EF represent the 6 end-forces for a 2-D planar beam element.
The forces (and local degrees of freedom) are numbered 0 through 5, and are shown here in their
positive directions on a beam-element of length L. The 6 forces are labelled by prefixing the number with a letter to suggest the normal interpretation of that force
Step8: Now define properties so that the individual components can be accessed like name atrributes,
eg
Step14: Class MemberLoad
This is the base class for all the different types of member loads (point loads, UDLs, etc.)
of 2D planar beam elements.
The main purpose is to calculate the fixed-end member forces, but we will also supply
logic to enable calculation of internal shears and moments at any point along the span.
All types of member loads will be input using a table containing five data columns
Step15: Load Type PL
Load type PL represents a single concentrated force, of magnitude P, at a distance a from the j-end
Step16: Load Type PLA
Load type PLA represents a single concentrated force applied parallel to the length
of the segment (producing only axial forces).
Step17: Load Type UDL
Load type UDL represents a uniformly distributed load, of magnitude w, over the complete length of the element.
Step19: Load Type LVL
Load type LVL represents a linearly varying distributed load actiong over a portion of the span
Step20: Load Type CM
Load type CM represents a single concentrated moment of magnitude M a distance a from the j-end
Step21: makeMemberLoad() factory function
Finally, the function makeMemberLoad() will create a load object of the correct type from
the data in dictionary data. That dictionary would normally containing the data from one
row ov the input data file table. | Python Code:
import numpy as np
import sys
from salib import extend
Explanation: Fixed End Forces
This module computes the fixed end forces (moments and shears) due to transverse loads
acting on a 2-D planar structural member.
End of explanation
class EF(object):
Class EF represents the 6 end forces acting on a 2-D, planar, beam element.
def __init__(self,c0=0.,v1=0.,m2=0.,c3=0.,v4=0.,m5=0.):
Initialize an instance with the 6 end forces. If the first
argument is a 6-element array, initialize from a copy of that
array and ignore any other arguments.
if np.isscalar(c0):
self.fefs = np.matrix([c0,v1,m2,c3,v4,m5],dtype=np.float64).T
else:
self.fefs = c0.copy()
def __getitem__(self,ix):
Retreive one of the forces by numer. This allows allows unpacking
of all 6 end forces into 6 variables using something like:
c0,v1,m2,c3,v4,m5 = self
return self.fefs[ix,0]
def __add__(self,other):
Add this set of end forces to another, returning the sum.
assert type(self) is type(other)
new = self.__class__(self.fefs+other.fefs)
return new
def __sub__(self,other):
Subtract the other from this set of forces, returning the difference.
assert type(self) is type(other)
new = self.__class__(self.fefs-other.fefs)
return new
def __mul__(self,scale):
Multiply this set of forces by the scalar value, returning the product.
if scale == 1.0:
return self
return self.__class__(self.fefs*scale)
__rmul__ = __mul__
def __repr__(self):
return '{}({},{},{},{},{},{})'.format(self.__class__.__name__,*list(np.array(self.fefs.T)[0]))
##test:
f = EF(1,2,0,4,1,6)
f
##test:
g = f+f+f
g
##test:
f[1]
##test:
f[np.ix_([3,0,1])]
##test:
g[(3,0,1)]
##test:
f0,f1,f2,f3,f4,f5 = g
f3
##test:
g, g*5, 5*g
Explanation: Class EF
Instances of class EF represent the 6 end-forces for a 2-D planar beam element.
The forces (and local degrees of freedom) are numbered 0 through 5, and are shown here in their
positive directions on a beam-element of length L. The 6 forces are labelled by prefixing the number with a letter to suggest the normal interpretation of that force: c for axial force,
v for shear force, and m for moment.
For use in this module, the end forces will be fixed-end-forces.
End of explanation
@extend
class EF:
@property
def c0(self):
return self.fefs[0,0]
@c0.setter
def c0(self,v):
self.fefs[0,0] = v
@property
def v1(self):
return self.fefs[1,0]
@v1.setter
def v1(self,v):
self.fefs[1,0] = v
@property
def m2(self):
return self.fefs[2,0]
@m2.setter
def m2(self,v):
self.fefs[2,0] = v
@property
def c3(self):
return self.fefs[3,0]
@c3.setter
def c3(self,v):
self.fefs[3,0] = v
@property
def v4(self):
return self.fefs[4,0]
@v4.setter
def v4(self,v):
self.fefs[4,0] = v
@property
def m5(self):
return self.fefs[5,0]
@m5.setter
def m5(self,v):
self.fefs[5,0] = v
##test:
f = EF(10.,11,12,13,15,15)
f, f.c0, f.v1, f.m2, f.c3, f.v4, f.m5
##test:
f.c0 *= 2
f.v1 *= 3
f.m2 *= 4
f.c3 *= 5
f.v4 *= 6
f.m5 *= 7
f
Explanation: Now define properties so that the individual components can be accessed like named attributes,
e.g. 'ef.m2' or 'ef.m5 = 100.'.
End of explanation
class MemberLoad(object):
TABLE_MAP = {} # map from load parameter names to column names in table
def fefs(self):
Return the complete set of 6 fixed end forces produced by the load.
raise NotImplementedError()
def shear(self,x):
Return the shear force that is in equilibrium with that
produced by the portion of the load to the left of the point at
distance 'x'. 'x' may be a scalar or a 1-dimensional array
of values.
raise NotImplementedError()
def moment(self,x):
Return the bending moment that is in equilibrium with that
produced by the portion of the load to the left of the point at
distance 'x'. 'x' may be a scalar or a 1-dimensional array
of values.
raise NotImplementedError()
@extend
class MemberLoad:
@property
def vpts(self):
Return a descriptor of the points at which the shear force must
be evaluated in order to draw a proper shear force diagram for this
load. The descriptor is a 3-tuple of the form: (l,r,d) where 'l'
is the leftmost point, 'r' is the rightmost point and 'd' is the
degree of the curve between. One of 'r', 'l' may be None.
raise NotImplementedError()
@property
def mpts(self):
Return a descriptor of the points at which the moment must be
evaluated in order to draw a proper bending moment diagram for this
load. The descriptor is a 3-tuple of the form: (l,r,d) where 'l'
is the leftmost point, 'r' is the rightmost point and 'd' is the
degree of the curve between. One of 'r', 'l' may be None.
raise NotImplementedError()
Explanation: Class MemberLoad
This is the base class for all the different types of member loads (point loads, UDLs, etc.)
of 2D planar beam elements.
The main purpose is to calculate the fixed-end member forces, but we will also supply
logic to enable calculation of internal shears and moments at any point along the span.
All types of member loads will be input using a table containing five data columns:
W1, W2, A, B, and C. Each load type contains a 'TABLE_MAP'
that specifies the mapping between attribute name and column name in the table.
End of explanation
class PL(MemberLoad):
TABLE_MAP = {'P':'W1','a':'A'}
def __init__(self,L,P,a):
self.L = L
self.P = P
self.a = a
def fefs(self):
P = self.P
L = self.L
a = self.a
b = L-a
m2 = -P*a*b*b/(L*L)
m5 = P*a*a*b/(L*L)
v1 = (m2 + m5 - P*b)/L
v4 = -(m2 + m5 + P*a)/L
return EF(0.,v1,m2,0.,v4,m5)
def shear(self,x):
return -self.P*(x>self.a)
def moment(self,x):
return self.P*(x-self.a)*(x>self.a)
def __repr__(self):
return '{}(L={},P={},a={})'.format(self.__class__.__name__,self.L,self.P,self.a)
##test:
p = PL(1000.,300.,400.)
p, p.fefs()
@extend
class MemberLoad:
EPSILON = 1.0E-6
@extend
class PL:
@property
def vpts(self):
return (self.a-self.EPSILON,self.a+self.EPSILON,0)
@property
def mpts(self):
return (self.a,None,1)
##test:
p = PL(1000.,300.,400.)
p.vpts
##test:
p.mpts
Explanation: Load Type PL
Load type PL represents a single concentrated force, of magnitude P, at a distance a from the j-end:
End of explanation
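# Quick sanity check (added, not part of the original tests): the two end shears of a
# point load must balance the applied force, i.e. v1 + v4 == -P for any position a.
chk = PL(1000., 300., 400.).fefs()
chk.v1 + chk.v4   # expect -300.0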
class PLA(MemberLoad):
TABLE_MAP = {'P':'W1','a':'A'}
def __init__(self,L,P,a):
self.L = L
self.P = P
self.a = a
def fefs(self):
P = self.P
L = self.L
a = self.a
c0 = -P*(L-a)/L
c3 = -P*a/L
return EF(c0=c0,c3=c3)
def shear(self,x):
return 0.
def moment(self,x):
return 0.
def __repr__(self):
return '{}(L={},P={},a={})'.format(self.__class__.__name__,self.L,self.P,self.a)
##test:
p = PLA(10.,P=100.,a=4.)
p.fefs()
@extend
class PLA:
@property
def vpts(self):
return (0.,self.L,0)
@property
def mpts(self):
return (0.,self.L,0)
Explanation: Load Type PLA
Load type PLA represents a single concentrated force applied parallel to the length
of the segment (producing only axial forces).
End of explanation
class UDL(MemberLoad):
TABLE_MAP = {'w':'W1'}
def __init__(self,L,w):
self.L = L
self.w = w
def __repr__(self):
return '{}(L={},w={})'.format(self.__class__.__name__,self.L,self.w)
def fefs(self):
L = self.L
w = self.w
return EF(0.,-w*L/2., -w*L*L/12., 0., -w*L/2., w*L*L/12.)
def shear(self,x):
l = x*(x>0.)*(x<=self.L) + self.L*(x>self.L) # length of loaded portion
return -(l*self.w)
def moment(self,x):
l = x*(x>0.)*(x<=self.L) + self.L*(x>self.L) # length of loaded portion
d = (x-self.L)*(x>self.L) # distance from loaded portion to x: 0 if x <= L else x-L
return self.w*l*(l/2.+d)
@property
def vpts(self):
return (0.,self.L,1)
@property
def mpts(self):
return (0.,self.L,2)
##test:
w = UDL(12,10)
w,w.fefs()
Explanation: Load Type UDL
Load type UDL represents a uniformly distributed load, of magnitude w, over the complete length of the element.
End of explanation
class LVL(MemberLoad):
TABLE_MAP = {'w1':'W1','w2':'W2','a':'A','b':'B','c':'C'}
def __init__(self,L,w1,w2=None,a=None,b=None,c=None):
if a is not None and b is not None and c is not None and L != (a+b+c):
raise Exception('Cannot specify all of a, b & c')
if a is None:
if b is not None and c is not None:
a = L - (b+c)
else:
a = 0.
if c is None:
if b is not None:
c = L - (a+b)
else:
c = 0.
if b is None:
b = L - (a+c)
if w2 is None:
w2 = w1
self.L = L
self.w1 = w1
self.w2 = w2
self.a = a
self.b = b
self.c = c
def fefs(self):
This mess was generated via sympy. See:
../../examples/cive3203-notebooks/FEM-2-Partial-lvl.ipynb
L = float(self.L)
a = self.a
b = self.b
c = self.c
w1 = self.w1
w2 = self.w2
m2 = -b*(15*a*b**2*w1 + 5*a*b**2*w2 + 40*a*b*c*w1 + 20*a*b*c*w2 + 30*a*c**2*w1 + 30*a*c**2*w2 + 3*b**3*w1 + 2*b**3*w2 + 10*b**2*c*w1 + 10*b**2*c*w2 + 10*b*c**2*w1 + 20*b*c**2*w2)/(60.*(a + b + c)**2)
m5 = b*(20*a**2*b*w1 + 10*a**2*b*w2 + 30*a**2*c*w1 + 30*a**2*c*w2 + 10*a*b**2*w1 + 10*a*b**2*w2 + 20*a*b*c*w1 + 40*a*b*c*w2 + 2*b**3*w1 + 3*b**3*w2 + 5*b**2*c*w1 + 15*b**2*c*w2)/(60.*(a + b + c)**2)
v4 = -(b*w1*(a + b/2.) + b*(a + 2*b/3.)*(-w1 + w2)/2. + m2 + m5)/L
v1 = -b*(w1 + w2)/2. - v4
return EF(0.,v1,m2,0.,v4,m5)
def __repr__(self):
return '{}(L={},w1={},w2={},a={},b={},c={})'\
.format(self.__class__.__name__,self.L,self.w1,self.w2,self.a,self.b,self.c)
def shear(self,x):
c = (x>self.a+self.b) # 1 if x > A+B else 0
l = (x-self.a)*(x>self.a)*(1.-c) + self.b*c # length of load portion to the left of x
return -(self.w1 + (self.w2-self.w1)*(l/self.b)/2.)*l
def moment(self,x):
c = (x>self.a+self.b) # 1 if x > A+B else 0
# note: ~c doesn't work if x is scalar, thus we use 1-c
l = (x-self.a)*(x>self.a)*(1.-c) + self.b*c # length of load portion to the left of x
d = (x-(self.a+self.b))*c # distance from right end of load portion to x
return ((self.w1*(d+l/2.)) + (self.w2-self.w1)*(l/self.b)*(d+l/3.)/2.)*l
@property
def vpts(self):
return (self.a,self.a+self.b,1 if self.w1==self.w2 else 2)
@property
def mpts(self):
return (self.a,self.a+self.b,2 if self.w1==self.w2 else 3)
Explanation: Load Type LVL
Load type LVL represents a linearly varying distributed load actiong over a portion of the span:
End of explanation
class CM(MemberLoad):
TABLE_MAP = {'M':'W1','a':'A'}
def __init__(self,L,M,a):
self.L = L
self.M = M
self.a = a
def fefs(self):
L = float(self.L)
A = self.a
B = L - A
M = self.M
m2 = B*(2.*A - B)*M/L**2
m5 = A*(2.*B - A)*M/L**2
v1 = (M + m2 + m5)/L
v4 = -v1
return EF(0,v1,m2,0,v4,m5)
def shear(self,x):
return x*0.
def moment(self,x):
        return -self.M*(x>self.a)
@property
def vpts(self):
return (None,None,0)
@property
def mpts(self):
        return (self.a-self.EPSILON,self.a+self.EPSILON,1)
def __repr__(self):
return '{}(L={},M={},a={})'.format(self.__class__.__name__,self.L,self.M,self.a)
Explanation: Load Type CM
Load type CM represents a single concentrated moment of magnitude M a distance a from the j-end:
End of explanation
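# CM also has no test cell; a quick usage sketch with arbitrary values.
cm = CM(L=10., M=50., a=4.)
cm.fefs()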
def makeMemberLoad(L,data,ltype=None):
def all_subclasses(cls):
_all_subclasses = []
for subclass in cls.__subclasses__():
_all_subclasses.append(subclass)
_all_subclasses.extend(all_subclasses(subclass))
return _all_subclasses
if ltype is None:
ltype = data.get('TYPE',None)
for c in all_subclasses(MemberLoad):
if c.__name__ == ltype and hasattr(c,'TABLE_MAP'):
MAP = c.TABLE_MAP
argv = {k:data[MAP[k]] for k in MAP.keys()}
return c(L,**argv)
raise Exception('Invalid load type: {}'.format(ltype))
##test:
ml = makeMemberLoad(12,{'TYPE':'UDL', 'W1':10})
ml, ml.fefs()
def unmakeMemberLoad(load):
type = load.__class__.__name__
ans = {'TYPE':type}
for a,col in load.TABLE_MAP.items():
ans[col] = getattr(load,a)
return ans
##test:
unmakeMemberLoad(ml)
Explanation: makeMemberLoad() factory function
Finally, the function makeMemberLoad() will create a load object of the correct type from
the data in dictionary data. That dictionary would normally containing the data from one
row ov the input data file table.
End of explanation |
11,832 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Analyzing the flexible path length simulation
Step1: Load the file, and from the file pull our the engine (which tells us what the timestep was) and the move scheme (which gives us a starting point for much of the analysis).
Step2: That tell us a little about the file we're dealing with. Now we'll start analyzing the contents of that file. We used a very simple move scheme (only shooting), so the main information that the move_summary gives us is the acceptance of the only kind of move in that scheme. See the MSTIS examples for more complicated move schemes, where you want to make sure that frequency at which the move runs is close to what was expected.
Step3: Replica history tree and decorrelated trajectories
The ReplicaHistoryTree object gives us both the history tree (often called the "move tree") and the number of decorrelated trajectories.
A ReplicaHistoryTree is made for a certain set of Monte Carlo steps. First, we make a tree of only the first 25 steps in order to visualize it. (All 10000 steps would be unwieldy.)
After the visualization, we make a second PathTree of all the steps. We won't visualize that; instead we use it to count the number of decorrelated trajectories.
Step4: Path length distribution
Flexible length TPS gives a distribution of path lengths. Here we calculate the length of every accepted trajectory, then histogram those lengths, and calculate the maximum and average path lengths.
We also use engine.snapshot_timestep to convert the count of frames to time, including correct units.
Step5: Path density histogram
Next we will create a path density histogram. Calculating the histogram itself is quite easy
Step6: Now we've built the path density histogram, and we want to visualize it. We have a convenient plot_2d_histogram function that works in this case, and takes the histogram, desired plot tick labels and limits, and additional matplotlib named arguments to plt.pcolormesh.
Step7: Convert to MDTraj for analysis by external tools
The trajectory can be converted to an MDTraj trajectory, and then used anywhere that MDTraj can be used. This includes writing it to a file (in any number of file formats) or visualizing the trajectory using, e.g., NGLView. | Python Code:
from __future__ import print_function
%matplotlib inline
import openpathsampling as paths
import numpy as np
import matplotlib.pyplot as plt
import os
import openpathsampling.visualize as ops_vis
from IPython.display import SVG
Explanation: Analyzing the flexible path length simulation
End of explanation
# note that this log will overwrite the log from the previous notebook
#import logging.config
#logging.config.fileConfig("logging.conf", disable_existing_loggers=False)
%%time
flexible = paths.AnalysisStorage("ad_tps.nc")
# opening as AnalysisStorage is a little slower, but speeds up the move_summary
engine = flexible.engines[0]
flex_scheme = flexible.schemes[0]
print("File size: {0} for {1} steps, {2} snapshots".format(
flexible.file_size_str,
len(flexible.steps),
len(flexible.snapshots)
))
Explanation: Load the file, and from the file pull out the engine (which tells us what the timestep was) and the move scheme (which gives us a starting point for much of the analysis).
End of explanation
flex_scheme.move_summary(flexible.steps)
Explanation: That tells us a little about the file we're dealing with. Now we'll start analyzing the contents of that file. We used a very simple move scheme (only shooting), so the main information that the move_summary gives us is the acceptance of the only kind of move in that scheme. See the MSTIS examples for more complicated move schemes, where you want to make sure that the frequency at which the move runs is close to what was expected.
End of explanation
replica_history = ops_vis.ReplicaEvolution(replica=0)
tree = ops_vis.PathTree(
flexible.steps[0:25],
replica_history
)
tree.options.css['scale_x'] = 3
SVG(tree.svg())
# can write to svg file and open with programs that can read SVG
with open("flex_tps_tree.svg", 'w') as f:
f.write(tree.svg())
print("Decorrelated trajectories:", len(tree.generator.decorrelated_trajectories))
%%time
full_history = ops_vis.PathTree(
flexible.steps,
ops_vis.ReplicaEvolution(
replica=0
)
)
n_decorrelated = len(full_history.generator.decorrelated_trajectories)
print("All decorrelated trajectories:", n_decorrelated)
Explanation: Replica history tree and decorrelated trajectories
The PathTree object (built here with a ReplicaEvolution generator, as in the code above) gives us both the history tree (often called the "move tree") and the number of decorrelated trajectories.
A PathTree is made for a certain set of Monte Carlo steps. First, we make a tree of only the first 25 steps in order to visualize it. (All 10000 steps would be unwieldy.)
After the visualization, we make a second PathTree of all the steps. We won't visualize that; instead we use it to count the number of decorrelated trajectories.
End of explanation
path_lengths = [len(step.active[0].trajectory) for step in flexible.steps]
plt.hist(path_lengths, bins=40, alpha=0.5);
print("Maximum:", max(path_lengths),
"("+(max(path_lengths)*engine.snapshot_timestep).format("%.3f")+")")
print ("Average:", "{0:.2f}".format(np.mean(path_lengths)),
"("+(np.mean(path_lengths)*engine.snapshot_timestep).format("%.3f")+")")
Explanation: Path length distribution
Flexible length TPS gives a distribution of path lengths. Here we calculate the length of every accepted trajectory, then histogram those lengths, and calculate the maximum and average path lengths.
We also use engine.snapshot_timestep to convert the count of frames to time, including correct units.
End of explanation
from openpathsampling.numerics import HistogramPlotter2D
psi = flexible.cvs['psi']
phi = flexible.cvs['phi']
deg = 180.0 / np.pi
path_density = paths.PathDensityHistogram(cvs=[phi, psi],
left_bin_edges=(-180/deg,-180/deg),
bin_widths=(2.0/deg,2.0/deg))
path_dens_counter = path_density.histogram([s.active[0].trajectory for s in flexible.steps])
Explanation: Path density histogram
Next we will create a path density histogram. Calculating the histogram itself is quite easy: first we reload the collective variables we want to plot it in (we choose the phi and psi angles). Then we create the empty path density histogram, by telling it which CVs to use and how to make the histogram (bin sizes, etc). Finally, we build the histogram by giving it the list of active trajectories to histogram.
End of explanation
tick_labels = np.arange(-np.pi, np.pi+0.01, np.pi/4)
plotter = HistogramPlotter2D(path_density,
xticklabels=tick_labels,
yticklabels=tick_labels,
label_format="{:4.2f}")
ax = plotter.plot(cmap="Blues")
Explanation: Now we've built the path density histogram, and we want to visualize it. We have a convenient plot_2d_histogram function that works in this case, and takes the histogram, desired plot tick labels and limits, and additional matplotlib named arguments to plt.pcolormesh.
End of explanation
ops_traj = flexible.steps[1000].active[0].trajectory
traj = ops_traj.to_mdtraj()
traj
# Here's how you would then use NGLView:
#import nglview as nv
#view = nv.show_mdtraj(traj)
#view
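# The MDTraj trajectory can also be written straight to disk (the file name here is
# arbitrary); left commented out like the NGLView lines above.
#traj.save("flex_tps_traj.pdb")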
flexible.close()
Explanation: Convert to MDTraj for analysis by external tools
The trajectory can be converted to an MDTraj trajectory, and then used anywhere that MDTraj can be used. This includes writing it to a file (in any number of file formats) or visualizing the trajectory using, e.g., NGLView.
End of explanation |
11,833 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotting whitened data
This tutorial demonstrates how to plot
Step1: Raw data with whitening
<div class="alert alert-info"><h4>Note</h4><p>In the
Step2: Epochs with whitening
Step3: Evoked data with whitening
Step4: Evoked data with scaled whitening
The
Step5: Topographic plot with whitening | Python Code:
import mne
from mne.datasets import sample
Explanation: Plotting whitened data
This tutorial demonstrates how to plot :term:whitened <whitening> evoked
data.
Data are whitened for many processes, including dipole fitting, source
localization and some decoding algorithms. Viewing whitened data thus gives
a different perspective on the data that these algorithms operate on.
Let's start by loading some data and computing a signal (spatial) covariance
that we'll consider to be noise.
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
events = mne.find_events(raw, stim_channel='STI 014')
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
epochs = mne.Epochs(raw, events, event_id=event_id, reject=reject)
# baseline noise cov, not a lot of samples
noise_cov = mne.compute_covariance(epochs, tmax=0., method='shrunk', rank=None,
verbose='error')
# butterfly mode shows the differences most clearly
raw.plot(events=events, butterfly=True)
raw.plot(noise_cov=noise_cov, events=events, butterfly=True)
Explanation: Raw data with whitening
<div class="alert alert-info"><h4>Note</h4><p>In the :meth:`mne.io.Raw.plot` with ``noise_cov`` supplied,
you can press the "w" key to turn whitening on and off.</p></div>
End of explanation
epochs.plot()
epochs.plot(noise_cov=noise_cov)
Explanation: Epochs with whitening
End of explanation
evoked = epochs.average()
evoked.plot(time_unit='s')
evoked.plot(noise_cov=noise_cov, time_unit='s')
Explanation: Evoked data with whitening
End of explanation
evoked.plot_white(noise_cov=noise_cov, time_unit='s')
Explanation: Evoked data with scaled whitening
The :meth:mne.Evoked.plot_white function takes an additional step of
scaling the whitened plots to show how well the assumption of Gaussian
noise is satisfied by the data:
End of explanation
evoked.comment = 'All trials'
evoked.plot_topo(title='Evoked data')
evoked.plot_topo(noise_cov=noise_cov, title='Whitened evoked data')
Explanation: Topographic plot with whitening
End of explanation |
11,834 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to Image Analysis with <font color='DarkBlue'>Python.</font>
<font color='brown'>Reading images</font>
Exercise
Step1: <font color='brown'>Reading a multi-page tiff</font>
Exercise
Step2: <font color='brown'>Reading a multi-page tiff with multiple channels</font>
Exercise
Step3: <font color='brown'>Applying a threshold to an image.</font>
Exercise | Python Code:
#This line is very important: (It turns on the inline visuals)!
%pylab inline
#This library is one of the libraries one can use for importing tiff files.
#For detailed info:http://effbot.org/imagingbook/image.htm
from PIL import Image
#We import our cell_fluorescent.tif image
im = Image.open('cell_fluorescent.tif')
#This line converts our image object into a numpy array (matrix).
im_array = np.array(im)
#This is an inline visual. It displays it after your code.
imshow(im_array)
#Notice the scale on the side of the image. What happens when you index a range.
#imshow(im_array[50:100,:])
#Or what happens when you index every fifth pixel:
#imshow(im_array[::5,::5],interpolation='nearest')
#Notice interpolation. What do you thing this is doing?
#Repeat the above step but for the image cell_colony.tif.
#Experiment with changing the look-up-table:
#imshow(im_array, cmap="Reds")
#more colors at: http://matplotlib.org/examples/color/colormaps_reference.html
Explanation: Introduction to Image Analysis with <font color='DarkBlue'>Python.</font>
<font color='brown'>Reading images</font>
Exercise: Explore how to open simple tiff images using the PIL library.
End of explanation
#Make sure you have previously run %pylab inline at least once.
#This library is another one of the libaries we can use to import tiff files
#It also works with formats such as .lsm which are tiff's in disguise.
from tifffile import imread as imreadtiff
#We import our mri-stack.tif image file.
im = imread('mri-stack.tif')
print('image dimensions',im.shape)
#This line converts our image object into a numpy array and then accesses the fifteenth slice.
im_slice = im[15,:,:]
#This activates a subplot which can be used to display more than one image in a grid.
subplot(1,2,1)
imshow(im_slice)
#We can also access the raw data directly, one slice at a time.
im = imreadtiff('mri-stack.tif',key=5)
print('image dimensions',im.shape)
#This line converts our image object into a numpy array (matrix).
im_slice = im
#This is an inline visual. It displays it after your code.
subplot(1,2,2)
imshow(im_slice)
#Rerun the code and try and access different slices.
#How do you think you could extract the number of slices in this file?
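#One possible answer to the question above (added as a hint): the number of slices
#is simply the length of the first axis of the stack returned by imreadtiff.
#print('number of slices:', imreadtiff('mri-stack.tif').shape[0])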
Explanation: <font color='brown'>Reading a multi-page tiff</font>
Exercise: Explore how to access different slices and find the dimensions.
End of explanation
#Make sure you have previously run %pylab inline at least once.
#from tifffile import imread as imreadtiff
#We import our flybrain.tif image file.
im = imreadtiff('flybrain.tif')
print('image dimensions',im.shape)
#This line converts our image object into a numpy array and then accesses the fifteenth slice.
im_slice = im[15,:,:]
#This activates a subplot which can be used to display more than one image in a grid.
subplot(2,2,1)
#Notice imshow can also show three-channel images.
#It interprets them as RGB by default when there are exactly three channels.
#Note this doesn't work if there are two channels or more than three.
imshow(im_slice)
subplot(2,2,2)
#Plot the individual channels by specifying their index.
#Red channel.
imshow(im_slice[:,:,0],cmap="Greys_r")
subplot(2,2,3)
#Blue channel.
imshow(im_slice[:,:,1],cmap="Greys_r")
subplot(2,2,4)
#Green channel.
imshow(im_slice[:,:,2],cmap="Greys_r")
#Maximum projection.
#Take a look at this:
subplot(2,2,1)
imshow(np.average(im,0)[:,:,:])
subplot(2,2,2)
imshow(np.average(im,0)[:,:,0],cmap="Greys_r")
subplot(2,2,3)
imshow(np.average(im,0)[:,:,1],cmap="Greys_r")
#Can you work out what has happened.
#What happens when you use np.average instead?
#Can you work out why the average RGB image is so bad?
Explanation: <font color='brown'>Reading a multi-page tiff with multiple channels</font>
Exercise: Explore the multiple colour channels and how to visualise them.
End of explanation
#Make sure you have previously run %pylab inline at least once.
#from tifffile import imread as imreadtiff
im_stack = imreadtiff('mri-stack.tif')
im_slice = im_stack[5,:,:]
thr = 100;
print('image min: ',np.min(im_slice),'image max: ',np.max(im_slice), 'thr: ',thr)
#Here we can very easily apply a threshold to the image.
binary = im_slice>thr
#Now we show the binary mask.
subplot(1,2,1)
imshow(im_slice)
subplot(1,2,2)
imshow(binary)
#What happens when you change the direction of the sign from '>' to '<'.
#Hopefully the result makes sense.
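#One extra check worth trying (added): the mask is just an array of True/False values,
#so counting the pixels above the threshold is a one-liner.
print('pixels above threshold:', binary.sum(), 'of', binary.size)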
Explanation: <font color='brown'>Applying a threshold to an image.</font>
Exercise: Apply a threshold to an image.
End of explanation |
11,835 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Exploring Quantopian
There is collaboration and a Python IDE directly on the site, which is how the user interacts with their data.
Both of us are brand new to the site, so we are exploring in order of the help file.
Quick summary of the API
Symbols
Step1: Machine Learning
Scikit Learn's home page divides up the space of machine learning well,
but the Mahout algorithms list has a more comprehensive list of algorithms.
From both
Step2: Clustering and Principal Components
We can use clustering and principal component analysis both to reduce the dimension of the problem.
- universal requirements and caveats
+ requirement
Step3: Perform Kmean clustering using scipy Kmeans
Step4: 8 weeks left; goals
blog style (so look into the free MongoDB account? Compose (formerly Mongo HQ) | MongoLabs)
Blog topics
MPT
zipline on some simple trading strategy
data exploration (maybe dropdown with of the plot we already did)
Belgian prof's strategy?
twitter sentiment analysis to stock picks?<br/>
(involves dropbox and reading external file on quantopian)
Machine learning
natural clustering vs NAICS (?)
some sort of classification 'buy' 'sell' 'ignore'(?) on stocks
MPT
The Modern Portfolio Theory code selects an 'efficient frontier' of the minimum risk for any desired return by using linear combinations of available stocks. Stocks should be grouped into categories if there is a risk of very high correlation between pairs of stocks to reduce the risk of inverting a singular matrix.
Portfolio return
The return of a portfolio is equal to the sum of the individual returns of the stocks multiplied by their percent allocations
Step5: Portfolio risk
The risk (variance) of a portfolio is the variance of a sum of random variables (the individual stock portfolios, multiplied by percent allocation). And the equation for variance of a sum of random variables is | Python Code:
import numpy as np
import pandas as pd
import pandas.io.data as web
import matplotlib.pyplot as plt
%matplotlib inline
import requests
import datetime
from database import Database
db = Database()
#list of fortune 500 companies ticke symbols
chicago_companies = ['ADM', 'BA', 'WGN', 'UAL', 'SHLD',
'MDLZ', 'ALL', 'MCD', 'EXC', 'ABT',
'ABBV', 'KRFT', 'ITW', 'NAV', 'CDW',
'RRD', 'GWW', 'DFS', 'DOV', 'MSI',
'TEN', 'INGR', 'NI', 'TEG', 'CF',
'ORI', 'USTR', 'LKQ'
]
url = "http://ichart.yahoo.com/table.csv?s={stock}&a={strt_month}&b={strt_day}&c={strt_year}&d={end_month}&e={end_day}&f={end_year}&g=w&ignore=.csv"
params = {"stock":'ADM', "strt_month": 1, 'strt_day':1, 'strt_year': 2010,
"end_month": 1, "end_day": 1, "end_year": 2014}
new_url = url.format(**params)
print url.format(**params)
data_new = web.DataReader(chicago_companies, 'yahoo', datetime.datetime(2010,1,1), datetime.datetime(2014,1,1))
#all Stock data is contained in 3D datastructure called a panel
data_new
ADM = pd.read_csv('data/table (4).csv')
ADM.head()
ADM = ADM.sort_index(by='Date')
ADM['Short'] = pd.rolling_mean(ADM['Close'], 4)
ADM['Long'] = pd.rolling_mean(ADM['Close'], 12)
ADM.plot(x='Date', y=['Close', 'Long', 'Short'])
buy_signals = np.where(pd.rolling_mean(ADM['Close'], 4) > pd.rolling_mean(ADM['Close'], 12), 1.0, 0.0)
Explanation: Exploring Quantopian
There is collaboration and a Python IDE directly on the site, which is how the user interacts with their data.
Both of us are brand new to the site, so we are exploring in order of the help file.
Quick summary of the API
Symbols:
symbol('goog') # single
symbols('goog', 'fb') # multiple
sid(24) # unique ID to Quantopian -- 24 is for aapl
Fundamentals:
Available for 8000 companies, with over 670 metrics
Accessed using get_fundamentals with the same syntax as SQLAlchemy;
returns a pandas dataframe
Not available during live trading, only in before_trading_start
(once per day) to be stored in the context and used in the function
handle_data
Ordering: market, limit, stop, stop limit
Scheduling: frequency in days, weeks, months,
plus order time of day in minutes
Allowed modules
Example algorithms
Actually...
The API is so thin -- it's really just about trading -- so instead explore machine
learning using existing data outside of the Quantopian environment. Also, they
provide zipline, their backtest functions, as open-source code
(github repo / pypi page) to test outside
of the quantopian environment. Here is a how-to
End of explanation
date_range = db.select_one("SELECT min(dt), max(dt) from return;", columns=("start", "end"))
chicago_companies = ['ADM', 'BA', 'MDLZ', 'ALL', 'MCD', 'EXC', 'ABT',
'ABBV', 'KRFT', 'ITW', 'GWW', 'DFS', 'DOV', 'MSI',
'NI', 'TEG', 'CF' ]
chicago_returns = db.select('SELECT dt, "{}" FROM return ORDER BY dt;'.format(
'", "'.join((c.lower() for c in chicago_companies))),
columns=["Date"] + chicago_companies)
chi_dates = [row.pop("Date") for row in chicago_returns]
chicago_returns = pd.DataFrame(chicago_returns, index=chi_dates)
sp500 = web.DataReader('^GSPC', 'yahoo', date_range['start'], date_range['end'])
sp500['sp500'] = sp500['Adj Close'].diff() / sp500['Adj Close']
treas90 = web.DataReader('^IRX', 'yahoo', date_range['start'], date_range['end'])
treas90['treas90'] = treas90['Adj Close'].diff() / treas90['Adj Close']
chicago_returns = chicago_returns.join(sp500['sp500'])
chicago_returns = chicago_returns.join(treas90['treas90'])
chicago_returns.drop(chicago_returns.head(1).index, inplace=True)
chicago_returns = chicago_returns.sub(chicago_returns['treas90'], axis=0)
chicago_returns.replace([np.inf, -np.inf], np.nan, inplace=True)
chicago_returns = chicago_returns / 100
# For ordinary least squares regression
from pandas.stats.api import ols
regressions = {}
for y in chicago_returns.columns:
if y not in ('sp500', 'treas90'):
df = chicago_returns.dropna(subset=[y, 'sp500'])[[y, 'sp500']]
regressions[y] = ols(y=df[y], x=df['sp500'])
regressions
symbols = sorted(regressions.keys())
data = []
for s in symbols:
data.append(dict(alpha=regressions[s].beta[1], beta=regressions[s].beta[0]))
betas = pd.DataFrame(data=data,index=symbols)
betas
betas.values[0,0]
fig = plt.figure()
fig.suptitle('Betas vs Alphas', fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
fig.subplots_adjust(top=0.85)
#ax.set_title('axes title')
ax.set_xlabel('Beta')
ax.set_ylabel('Alpha')
for i in range(len(betas.index)):
ax.text(betas.values[i,1], betas.values[i,0], betas.index[i]) #, bbox={'facecolor':'slateblue', 'alpha':0.5, 'pad':10})
ax.set_ylim([0,0.2])
ax.set_xlim([0.75, 1.2])
betas.describe()
help(ax.text)
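# A small follow-up sketch (added): use one of the fitted regressions to estimate a
# stock's expected excess return for an assumed market excess return. Index 0 of
# .beta is the slope (beta) and index 1 the intercept (alpha), as used above.
mkt_excess = 0.01   # assumed 1% market move over the risk-free rate
b = regressions['MCD'].beta
print('MCD expected excess return:', b[1] + b[0] * mkt_excess)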
Explanation: Machine Learning
Scikit Learn's home page divides up the space of machine learning well,
but the Mahout algorithms list has a more comprehensive list of algorithms.
From both:
- Collaborative filtering<br/>
'because you bought these, we recommend this'
- Classification<br/>
'people with these characteristics, if sent a mailer, will buy something 30% of the time'
- Clustering<br/>
'our customers naturally fall into these groups: urban singles, guys with dogs, women 25-35 who like rap'
- Dimension reduction<br/>
'a preprocessing step before regression that can also identify the most significant contributors to variation'
- Topics<br/>
'the posts in this user group are related to either local politics, music, or sports'
The S&P 500 dataset is great for us to quickly explore regression, clustering, and principal component analysis.
We can also backtest here using Quantopian's zipline library.
Regression
We can calculate our own 'Beta' by pulling the S&P 500 index values from Yahoo using Pandas and then regressing each of our components of the S&P500 with it. The NASDAQ defines Beta as the coefficient found from regressing the individual stock's returns in excess of the 90-day treasury rate on the S&P 500's returns in excess of the 90-day rate. Nasdaq cautions that Beta can change over time and that it could be different for positive and negative changes.
Look at the Chicago companies for fun (database select from our dataset)
Get the S&P 500 ('^GSPC') from Yahoo Finance
Get the 90-day treasury bill rates ('^IRX') from Yahoo Finance
The equation for the regression will be:
(Return - Treas90) = Beta * (SP500 - Treas90) + alpha
End of explanation
import scipy.spatial.distance as dist
import scipy.cluster.hierarchy as hclust
chicago_returns
# chicago_returns.dropna(subset=[y, 'sp500'])[[y, 'sp500']]
# Spatial distance: scipy.spatial.distance.pdist(X)
# http://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.pdist.html#scipy.spatial.distance.pdist
# Perform the hierarchical clustering: scipy.cluster.hierarchy.linkage(distance_matrix)
# https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.cluster.hierarchy.linkage.html#scipy.cluster.hierarchy.linkage
# Plot the dendrogram: den = hclust.dendrogram(links,labels=chicago_dist.columns) #, orientation="left")
#
#
chicago_dist = dist.pdist(chicago_returns.dropna().transpose(), 'euclidean')
links = hclust.linkage(chicago_dist)
chicago_dist
plt.figure(figsize=(5,10))
#data_link = hclust.linkage(chicago_dist, method='single', metric='euclidean')
den = hclust.dendrogram(links,labels=chicago_returns.columns, orientation="left")
plt.ylabel('Samples', fontsize=9)
plt.xlabel('Distance')
plt.suptitle('Stocks clustered by similarity', fontweight='bold', fontsize=14);
with open("sp500_columns.txt") as infile:
sp_companies = infile.read().strip().split("\n")
returns = db.select( ('SELECT dt, "{}" FROM return '
'WHERE dt BETWEEN \'2012-01-01\' AND \'2012-12-31\''
'ORDER BY dt;').format(
'", "'.join((c.lower() for c in sp_companies))),
columns=["Date"] + sp_companies)
sp_dates = [row.pop("Date") for row in returns]
returns = pd.DataFrame(returns, index=sp_dates)
#Calculate distance and cluster
sp_dist = dist.pdist(returns.dropna().transpose(), 'euclidean')
sp_links = hclust.linkage(sp_dist, method='single', metric='euclidean')
plt.figure(figsize=(10,180))
#data_link = hclust.linkage(chicago_dist, method='single', metric='euclidean')
den = hclust.dendrogram(sp_links,labels=returns.columns, orientation="left")
plt.ylabel('Samples', fontsize=9)
plt.xlabel('Distance')
plt.suptitle('Stocks clustered by similarity', fontweight='bold', fontsize=14)
Explanation: Clustering and Principal Components
We can use clustering and principal component analysis both to reduce the dimension of the problem.
- universal requirements and caveats
+ requirement: all variables are numeric, and not missing. Can blow out categories to be '1' or '0' for each one.
+ caveat: normalize data (e.g. all 'inches' not some 'feet') to avoid misweighting variables with large values
- hierarchical clustering<br/>
+ algorithm: at each step join the two elements with the shortest distance between them, until there is only one element
+ when: data exploration and as a reality check to kmeans; this is harder to apply to new observations not in the
original dataset but can give a reality check to identify
- kmeans clustering<br/>
+ algorithm: randomly generate 'k' centers, then move them around until the total distance of every point to its
nearest center is minimized
+ when: if you want an easily explainable way to group observations. Clusters can also then become
inputs to a regression
+ caveat: seed the training with specific points if you want repeatable results
- principal component analysis<br/>
+ algorithm: consider columns to be axes, and rotate these axes so that the first component is along the direction
of the highest variance, the second along the direction of the next highest variance, etc.
+ when: extremely large numbers of dimensions make all distances very large and reduce the usefulness of
the clustering method, but PCA is still good. I and senior colleagues have done clustering with up to 1000 columns,
so I mean extremely large numbers of dimensions. PCA is harder to explain to people, but is good for putting back
into a regression. If you can just say you used PCA to identify the 5 most important components to use in a regression
without having to explain what PCA is, that's good.
End of explanation
returns.transpose().dropna().shape
from scipy.cluster.vq import whiten
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(returns.transpose().dropna())
princ_comp_returns = pca.transform(returns.transpose().dropna())
princ_comp_returns.shape
normalize = whiten(returns.transpose().dropna())
km = KMeans(n_clusters = 2)
km.fit(normalize)
centroids = km.cluster_centers_
#idx = vq(normalize, centriods)
pca = PCA(n_components=2)
pca.fit(returns.transpose().dropna())
princ_comp = pca.transform(centroids)
princ_comp
colors = km.labels_
plt.scatter(princ_comp_returns[:,0], princ_comp_returns[:, 1], c= colors)
Explanation: Perform k-means clustering on the whitened returns with scikit-learn's KMeans, then use PCA to project the clusters onto two dimensions for plotting
End of explanation
%%latex
\begin{aligned}
r_{portfolio} =
\left(\begin{matrix}p_1 & p_2 & \ldots & p_n \end{matrix}\right) \, \cdot \,
\left(\begin{matrix}r_1\\ r_2 \\ \vdots \\ r_n \end{matrix}\right)
\end{aligned}
Explanation: 8 weeks left; goals
blog style (so look into the free MongoDB account? Compose (formerly Mongo HQ) | MongoLabs)
Blog topics
MPT
zipline on some simple trading strategy
data exploration (maybe dropdown with of the plot we already did)
Belgian prof's strategy?
twitter sentiment analysis to stock picks?<br/>
(involves dropbox and reading external file on quantopian)
Machine learning
natural clustering vs NAICS (?)
some sort of classification 'buy' 'sell' 'ignore'(?) on stocks
MPT
The Modern Portfolio Theory code selects an 'efficient frontier' of the minimum risk for any desired return by using linear combinations of available stocks. Stocks should be grouped into categories if there is a risk of very high correlation between pairs of stocks to reduce the risk of inverting a singular matrix.
Portfolio return
The return of a portfolio is equal to the sum of the individual returns of the stocks multiplied by their percent allocations:
End of explanation
%%latex
\begin{aligned}
Var( x_1 + x_2) = Var(x_1) + Var(x_2) + 2 \, \cdot \, cov(x_1, x_2)
\\[0.25in]
Var( portfolio ) =
\left(\begin{matrix}p_1 & p_2 & \ldots & p_n \end{matrix}\right) \, \cdot \,
\left(\begin{matrix}cov(x_1,x_1) & cov(x_1, x_2) &\ldots & cov(x_1, x_n)\\
cov(x_2, x_1) & cov(x_2, x_2) &\ldots & cov(x_2, x_n) \\
\vdots & \vdots & \ddots & \vdots \\
cov(x_n, x_1)& cov(x_n, x_2) & \ldots & cov(x_n,x_n) \end{matrix}\right)
\left(\begin{matrix}p_1 \\ p_2 \\ \vdots \\ p_n \end{matrix}\right)
\end{aligned}
import mpt
date_range = {"start": datetime.date(2009,1,1), "end": datetime.date(2012,12,31)}
chicago_companies = ['ADM', 'BA', 'MDLZ', 'ALL', 'MCD', 'EXC', 'ABT',
'KRFT', 'ITW', 'GWW', 'DFS', 'DOV', 'MSI',
'NI', 'TEG', 'CF' ]
chicago_returns = db.select(('SELECT dt, "{}" FROM return'
                             ' WHERE dt BETWEEN %s AND %s'
' ORDER BY dt;').format(
'", "'.join((c.lower() for c in chicago_companies))),
columns=["Date"] + chicago_companies,
args=[date_range['start'], date_range['end']])
# At this point chicago_returns is an array of dictionaries
# [{"Date":datetime.date(2009,1,1), "ADM":0.1, "BA":0.2, etc ... , "CF": 0.1},
# {"Date":datetime.date(2009,1,2), "ADM":0.2, "BA":0.1, etc ..., "CF":0.1},
# ...,
# {"Date":datetime.date(2012,12,31), "ADM":0.2, "BA":0.1, etc ..., "CF":0.1}]
# Pull out the dates to mak them the indices in the Pandas DataFrame
chi_dates = [row.pop("Date") for row in chicago_returns]
chicago_returns = pd.DataFrame(chicago_returns, index=chi_dates)
chicago_returns = chicago_returns / 100
treas90 = web.DataReader('^IRX', 'yahoo', date_range['start'], date_range['end'])
treas90['treas90'] = treas90['Adj Close'].diff() / treas90['Adj Close']
chicago_returns = chicago_returns.join(treas90['treas90'])
chicago_returns.drop(chicago_returns.head(1).index, inplace=True)
chicago_returns.replace([np.inf, -np.inf], np.nan, inplace=True)
result = mpt.get_efficient_frontier(chicago_returns)
chicago_returns.std()
fig = plt.figure()
fig.suptitle("MPT Efficient Frontier", fontsize=14, fontweight='bold')
ax = fig.add_subplot(111)
fig.subplots_adjust(top=0.85)
ax.set_xlabel('Risk')
ax.set_ylabel('Target return (annual)')
sds = list(chicago_returns.std()*np.sqrt(250))
mus = list(chicago_returns.mean()*250)
syms = list(chicago_returns.columns)
#for i in range(len(sds)):
# plt.text(sds[i], mus[i], syms[i], bbox={'facecolor':'slateblue', 'alpha':0.5, 'pad':10})
ax.plot(result["risks"], result["returns"], linewidth=2)
ax.scatter(sds, mus, alpha=0.5)
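# Worked numeric check (added) of the matrix formula described next: portfolio
# variance is w' * Cov * w. Equal weights are chosen arbitrarily and the covariance
# is annualized with 250 trading days.
w = np.ones(len(chicago_companies)) / len(chicago_companies)
ann_cov = chicago_returns[chicago_companies].cov() * 250
print('equal-weight portfolio sd:', np.sqrt(w.dot(ann_cov.values).dot(w)))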
Explanation: Portfolio risk
The risk (variance) of a portfolio is the variance of a sum of random variables (the individual stock returns, each multiplied by its percent allocation). And the equation for the variance of a sum of random variables is:
End of explanation |
11,836 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Organize recording files by site
This notebook copies sound files (recordings) into the pumilio directory structure based on a csv file containing information about the time and location of recording.
Required packages
<a href="https
Step1: Import statements
Step2: Organize recordings | Python Code:
csv_filepath = ""
sound_directory = ""
source_directory = os.path.dirname(os.path.dirname(working_directory))
Explanation: Organize recording files by site
This notebook copies sound files (recordings) into the pumilio directory structure based on a csv file containing information about the time and location of recording.
Required packages
<a href="https://github.com/pydata/pandas">pandas</a> <br />
Variable declarations
csv_filepath – path to a csv file containing information about the time and location of each recording <br />
sound_directory – path to the directory that will contain the recordings <br />
source_directory – path to the directory containing the unorganized recordings
End of explanation
import pandas
import os.path
from datetime import datetime
import subprocess
Explanation: Import statements
End of explanation
visits = pandas.read_csv(csv_filepath)
visits = visits.sort_values(by=['Time'])
visits['ID'] = visits['ID'].map('{:g}'.format)
sites = visits['ID'].drop_duplicates().dropna().as_matrix()
for site in sites:
path = os.path.join(working_directory, str(site))
if os.path.exists(path):
os.rmdir(path)
os.mkdir(path)
for index, visit in visits.iterrows():
try:
dt = datetime.strptime(visit['Time'], '%Y-%m-%d %X')
except TypeError:
print('Time was not a string, but had a value of: "{0}"'.format(visit['Time']))
continue
source_file = os.path.join(source_directory, dt.strftime('%Y-%m-%d'), 'converted', '{0}.flac'.format(dt.strftime('%y%m%d-%H%M%S')))
destination_file = os.path.join(working_directory, str(visit['ID']), '{0}.flac'.format(dt.strftime('%y%m%d-%H%M%S')))
if os.path.exists(source_file):
subprocess.check_output(["cp", source_file, destination_file])
print('copying {0}'.format(dt.strftime('%y%m%d-%H%M%S')))
else:
print('\n')
print('{0} does not exist!'.format(dt.strftime('%y%m%d-%H%M%S')))
print(visit['Name'])
print('\n')
print('\n')
print('done')
Explanation: Organize recordings
End of explanation |
11,837 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 모델 저장과 복원
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 예제 데이터셋 받기
MNIST 데이터셋으로 모델을 훈련하여 가중치를 저장하는 예제를 만들어 보겠습니다. 모델 실행 속도를 빠르게 하기 위해 샘플에서 처음 1,000개만 사용겠습니다
Step3: 모델 정의
먼저 간단한 모델을 하나 만들어 보죠.
Step4: 훈련하는 동안 체크포인트 저장하기
훈련된 모델을 다시 훈련할 필요 없이 사용하거나 훈련 과정이 중단된 경우 중단한 부분에서 훈련을 다시 시작할 수 있습니다. tf.keras.callbacks.ModelCheckpoint 콜백을 사용하면 훈련 도중 또는 훈련 종료 시 모델을 지속적으로 저장할 수 있습니다.
체크포인트 콜백 사용하기
훈련하는 동안 가중치를 저장하기 위해 ModelCheckpoint 콜백을 만들어 보죠
Step5: 이 코드는 텐서플로 체크포인트 파일을 만들고 에포크가 종료될 때마다 업데이트합니다
Step6: 두 모델이 동일한 아키텍처를 공유하기만 한다면 두 모델 간에 가중치를 공유할 수 있습니다. 따라서 가중치 전용에서 모델을 복원할 때 원래 모델과 동일한 아키텍처로 모델을 만든 다음 가중치를 설정합니다.
이제 훈련되지 않은 새로운 모델을 다시 빌드하고 테스트 세트에서 평가합니다. 훈련되지 않은 모델은 확률 수준(~10% 정확도)에서 수행됩니다.
Step7: 체크포인트에서 가중치를 로드하고 다시 평가해 보죠
Step8: 체크포인트 콜백 매개변수
이 콜백 함수는 몇 가지 매개변수를 제공합니다. 체크포인트 이름을 고유하게 만들거나 체크포인트 주기를 조정할 수 있습니다.
새로운 모델을 훈련하고 다섯 번의 에포크마다 고유한 이름으로 체크포인트를 저장해 보겠습니다
Step9: 만들어진 체크포인트를 확인해 보고 마지막 체크포인트를 선택해 보겠습니다
Step10: 참고
Step11: 이 파일들은 무엇인가요?
위의 코드는 이진 형식의 훈련된 가중치만 포함하는 체크포인트 형식의 파일 모음에 가중치를 저장합니다. 체크포인트에는 다음이 포함됩니다.
모델의 가중치를 포함하는 하나 이상의 샤드
어떤 가중치가 어떤 샤드에 저장되어 있는지 나타내는 인덱스 파일
단일 머신에서 모델을 훈련하는 경우 접미사가 .data-00000-of-00001인 하나의 샤드를 갖게 됩니다.
수동으로 가중치 저장하기
Model.save_weights 메서드를 사용하여 수동으로 가중치를 저장합니다. 기본적으로 tf.keras, 특히 save_weights는 .ckpt 확장자가 있는 TensorFlow 체크포인트 형식을 사용합니다(.h5 확장자를 사용하여 HDF5에 저장하는 내용은 모델 저장 및 직렬화 가이드에서 다룸).
Step12: 전체 모델 저장하기
model.save 메서드를 호출하여 모델의 구조, 가중치, 훈련 설정을 하나의 파일/폴더에 저장합니다. 모델을 저장하기 때문에 원본 파이썬 코드*가 없어도 사용할 수 있습니다. 옵티마이저 상태가 복원되므로 정확히 중지한 시점에서 다시 훈련을 시작할 수 있습니다.
전체 모델은 두 가지 다른 파일 형식(SavedModel 및 HDF5)으로 저장할 수 있습니다. TensorFlow SavedModel 형식은 TF2.x의 기본 파일 형식입니다. 그러나 모델을 HDF5 형식으로 저장할 수 있습니다. 전체 모델을 두 가지 파일 형식으로 저장하는 방법에 대한 자세한 내용은 아래에 설명되어 있습니다.
전체 모델을 저장하는 기능은 매우 유용합니다. TensorFlow.js로 모델을 로드한 다음 웹 브라우저에서 모델을 훈련하고 실행할 수 있습니다(Saved Model, HDF5). 또는 모바일 장치에 맞도록 변환한 다음 TensorFlow Lite를 사용하여 실행할 수 있습니다(Saved Model, HDF5).
사용자 정의 객체(예를 들면 상속으로 만든 클래스나 층)는 저장하고 로드하는데 특별한 주의가 필요합니다. 아래 사용자 정의 객체 저장하기 섹션을 참고하세요.
SavedModel 포맷
SavedModel 형식은 모델을 직렬화하는 또 다른 방법입니다. 이 형식으로 저장된 모델은 tf.keras.models.load_model을 사용하여 복원할 수 있으며 TensorFlow Serving과 호환됩니다. SavedModel 가이드에 SavedModel을 제공/검사하는 방법이 자세히 설명되어 있습니다. 아래 섹션은 모델을 저장하고 복원하는 단계를 보여줍니다.
Step13: SavedModel 형식은 protobuf 바이너리와 TensorFlow 체크포인트를 포함하는 디렉토리입니다. 저장된 모델 디렉토리를 검사합니다.
Step14: 저장된 모델로부터 새로운 케라스 모델을 로드합니다
Step15: 복원된 모델은 원본 모델과 동일한 매개변수로 컴파일되어 있습니다. 이 모델을 평가하고 예측에 사용해 보죠
Step16: HDF5 파일로 저장하기
케라스는 HDF5 표준을 따르는 기본 저장 포맷을 제공합니다.
Step17: 이제 이 파일로부터 모델을 다시 만들어 보죠
Step18: 정확도를 확인해 보겠습니다 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
!pip install pyyaml h5py  # Required to save models in HDF5 format
import os
import tensorflow as tf
from tensorflow import keras
print(tf.version.VERSION)
Explanation: Save and load models
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/keras/save_and_load"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a> </td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/tutorials/keras/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/tutorials/keras/save_and_load.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
모델 진행 상황은 훈련 중 및 훈련 후에 저장할 수 있습니다. 즉, 모델이 중단된 위치에서 다시 시작하고 긴 훈련 시간을 피할 수 있습니다. 저장은 또한 모델을 공유할 수 있고 다른 사람들이 작업을 다시 만들 수 있음을 의미합니다. 연구 모델 및 기술을 게시할 때 대부분의 머신러닝 실무자는 다음을 공유합니다.
모델을 만드는 코드
모델의 훈련된 가중치 또는 파라미터
이런 데이터를 공유하면 다른 사람들이 모델의 작동 방식을 이해하고 새로운 데이터로 모델을 실험하는데 도움이 됩니다.
주의: TensorFlow 모델은 코드이며 신뢰할 수 없는 코드에 주의하는 것이 중요합니다. 자세한 내용은 TensorFlow 안전하게 사용하기를 참조하세요.
저장 방식
사용 중인 API에 따라 TensorFlow 모델을 저장하는 다양한 방법이 있습니다. 이 가이드에서는 TensorFlow에서 모델을 빌드하고 훈련하기 위해 고수준 API인 tf.keras를 사용합니다. 다른 접근 방식에 대해서는 TensorFlow 저장 및 복원 가이드 또는 즉시 실행 저장을 참조하세요.
설정
설치와 임포트
필요한 라이브러리를 설치하고 텐서플로를 임포트(import)합니다:
End of explanation
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
train_labels = train_labels[:1000]
test_labels = test_labels[:1000]
train_images = train_images[:1000].reshape(-1, 28 * 28) / 255.0
test_images = test_images[:1000].reshape(-1, 28 * 28) / 255.0
Explanation: Get an example dataset
To demonstrate how to save and load weights, you'll train a model on the MNIST dataset. To speed up model runs, only the first 1,000 examples are used:
End of explanation
# Define a simple Sequential model
def create_model():
model = tf.keras.models.Sequential([
keras.layers.Dense(512, activation='relu', input_shape=(784,)),
keras.layers.Dropout(0.2),
keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
return model
# Create a basic model instance
model = create_model()
# Display the model's architecture
model.summary()
Explanation: Define a model
Start by building a simple sequential model:
End of explanation
checkpoint_path = "training_1/cp.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a callback that saves the model's weights
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
save_weights_only=True,
verbose=1)
# Train the model with the new callback
model.fit(train_images,
train_labels,
epochs=10,
validation_data=(test_images,test_labels),
          callbacks=[cp_callback]) # Pass the callback to training
# This may generate warnings related to saving the state of the optimizer.
# These warnings (and similar warnings throughout this notebook) exist to discourage outdated usage and can be ignored.
Explanation: Save checkpoints during training
You can use a trained model without having to retrain it, or pick up training where you left off if the training process was interrupted. The tf.keras.callbacks.ModelCheckpoint callback allows you to continually save the model both during and at the end of training.
Checkpoint callback usage
Create a ModelCheckpoint callback that saves weights during training:
End of explanation
os.listdir(checkpoint_dir)
Explanation: This code creates TensorFlow checkpoint files that are updated at the end of each epoch:
End of explanation
# Create a basic model instance
model = create_model()
# Evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Untrained model, accuracy: {:5.2f}%".format(100*acc))
Explanation: As long as two models share the same architecture, you can share weights between them. So when restoring a model from weights only, create a model with the same architecture as the original and then set its weights.
Now rebuild a fresh, untrained model and evaluate it on the test set. An untrained model will perform at chance level (~10% accuracy).
End of explanation
# Load the weights
model.load_weights(checkpoint_path)
# Re-evaluate the model
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
Explanation: Then load the weights from the checkpoint and re-evaluate:
End of explanation
# Include the epoch in the file name (uses `str.format`)
checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a callback that saves the model's weights every 5 epochs
cp_callback = tf.keras.callbacks.ModelCheckpoint(
filepath=checkpoint_path,
verbose=1,
save_weights_only=True,
period=5)
# Create a new model instance
model = create_model()
# Save the weights using the `checkpoint_path` format
model.save_weights(checkpoint_path.format(epoch=0))
# Train the model with the new callback
model.fit(train_images,
train_labels,
epochs=50,
callbacks=[cp_callback],
validation_data=(test_images,test_labels),
verbose=0)
Explanation: Checkpoint callback options
The callback provides several options, such as giving checkpoints unique names and adjusting the checkpointing frequency.
Train a new model, and save uniquely named checkpoints once every five epochs:
End of explanation
os.listdir(checkpoint_dir)
latest = tf.train.latest_checkpoint(checkpoint_dir)
latest
Explanation: Now review the resulting checkpoints and choose the latest one:
End of explanation
# Create a new model instance
model = create_model()
# Load the previously saved weights
model.load_weights(latest)
# Re-evaluate the model
loss, acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
Explanation: Note: the default TensorFlow format only saves the 5 most recent checkpoints.
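If you need to keep more than the five most recent checkpoints, one option is to manage them yourself with tf.train.CheckpointManager, which exposes max_to_keep directly. A minimal sketch (the directory name here is a hypothetical choice, not used elsewhere in this notebook):
ckpt = tf.train.Checkpoint(model=model)
manager = tf.train.CheckpointManager(ckpt, directory='training_manual', max_to_keep=10)
save_path = manager.save()  # writes a checkpoint and prunes older ones beyond max_to_keep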
To test, reset the model and load the latest checkpoint:
End of explanation
# Save the weights
model.save_weights('./checkpoints/my_checkpoint')
# Create a new model instance
model = create_model()
# Restore the weights
model.load_weights('./checkpoints/my_checkpoint')
# Evaluate the model
loss,acc = model.evaluate(test_images, test_labels, verbose=2)
print("Restored model, accuracy: {:5.2f}%".format(100*acc))
Explanation: What are these files?
The code above stores the weights in a collection of checkpoint-formatted files that contain only the trained weights in binary form. Checkpoints contain:
One or more shards that contain your model's weights
An index file that indicates which weights are stored in which shard
If you are training a model on a single machine, you'll have one shard with the suffix .data-00000-of-00001.
Manually save weights
Manually save the weights with the Model.save_weights method. By default, tf.keras (and save_weights in particular) uses the TensorFlow checkpoint format with a .ckpt extension; saving to HDF5 with a .h5 extension is covered in the Save and serialize models guide.
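As a minimal sketch (reusing the model defined above; the path is arbitrary), the same call writes the HDF5 format when the filename ends in .h5:
model.save_weights('./checkpoints/my_checkpoint.h5')  # the .h5 suffix selects the HDF5 format
model.load_weights('./checkpoints/my_checkpoint.h5')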
End of explanation
# Create and train a new model instance
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model as a SavedModel
!mkdir -p saved_model
model.save('saved_model/my_model')
Explanation: Save the entire model
Call model.save to save a model's architecture, weights, and training configuration in a single file/folder. Because the whole model is saved, it can be used even without the original Python code*. Since the optimizer state is recovered, you can resume training from exactly where you left off.
An entire model can be saved in two different file formats (SavedModel and HDF5). The TensorFlow SavedModel format is the default file format in TF2.x, but models can also be saved in HDF5 format. More details on saving entire models in the two file formats are described below.
Saving a fully functional model is very useful. You can load it in TensorFlow.js and then train and run it in a web browser (Saved Model, HDF5), or convert it to run on mobile devices using TensorFlow Lite (Saved Model, HDF5).
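For example, a hedged sketch of the TensorFlow Lite conversion mentioned above (it assumes the SavedModel written by this cell at 'saved_model/my_model'):
converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/my_model')
tflite_model = converter.convert()
with open('my_model.tflite', 'wb') as f:
    f.write(tflite_model)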
Custom objects (for example, subclassed models or layers) require special attention when saving and loading. See the "Saving custom objects" section below.
SavedModel format
The SavedModel format is another way to serialize models. Models saved in this format can be restored with tf.keras.models.load_model and are compatible with TensorFlow Serving. The SavedModel guide explains in detail how to serve and inspect a SavedModel. The sections below show the steps to save and restore a model.
End of explanation
# my_model directory
!ls saved_model
# Contains an assets folder, saved_model.pb, and a variables folder
!ls saved_model/my_model
Explanation: The SavedModel format is a directory containing a protobuf binary and a TensorFlow checkpoint. Inspect the saved model directory:
End of explanation
new_model = tf.keras.models.load_model('saved_model/my_model')
# Check its architecture
new_model.summary()
Explanation: Reload a fresh Keras model from the saved model:
End of explanation
# Evaluate the restored model
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100*acc))
print(new_model.predict(test_images).shape)
Explanation: The restored model is compiled with the same arguments as the original model. Try running evaluate and predict with it:
End of explanation
# Create and train a new model instance
model = create_model()
model.fit(train_images, train_labels, epochs=5)
# Save the entire model to an HDF5 file
# The '.h5' extension indicates that the model should be saved to HDF5
model.save('my_model.h5')
Explanation: Saving to an HDF5 file
Keras provides a basic save format using the HDF5 standard.
End of explanation
# Recreate the exact same model, including its weights and the optimizer
new_model = tf.keras.models.load_model('my_model.h5')
# Show the model architecture
new_model.summary()
Explanation: Now recreate the model from that file:
End of explanation
loss, acc = new_model.evaluate(test_images, test_labels, verbose=2)
print('Restored model, accuracy: {:5.2f}%'.format(100*acc))
Explanation: Check its accuracy:
End of explanation |
11,838 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<!--NAVIGATION-->
< Account Information | Contents | Trade Management >
Order Management
Orders
Creating Orders
create_order(self, account_id, **params)
Step1: For a detailed explanation of the above, please refer to Rates Information.
Step2: Getting Open Orders
get_orders(self, account_id, **params)
Step3: Getting Specific Order Information
get_order(self, account_id, order_id, **params)
Step4: Modify Order
modify_order(self, account_id, order_id, **params)
Step5: Close Order
close_order(self, account_id, order_id, **params)
Step6: Now when we check the orders, the one above has been closed and removed without being filled; only one outstanding order remains.
from datetime import datetime, timedelta
import pandas as pd
import oandapy
import configparser
config = configparser.ConfigParser()
config.read('../config/config_v1.ini')
account_id = config['oanda']['account_id']
api_key = config['oanda']['api_key']
oanda = oandapy.API(environment="practice",
access_token=api_key)
trade_expire = datetime.now() + timedelta(days=1)
trade_expire = trade_expire.isoformat("T") + "Z"
trade_expire
Explanation: <!--NAVIGATION-->
< Account Information | Contents | Trade Management >
Order Management
Orders
Creating Orders
create_order(self, account_id, **params)
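For comparison, a hedged sketch of a market order (the instrument and size here are arbitrary); unlike the limit order created in the next cell, it carries no price or expiry and is filled immediately:
response = oanda.create_order(account_id,
                              instrument="EUR_USD",
                              units=100,
                              side="buy",
                              type="market")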
End of explanation
response = oanda.create_order(account_id,
instrument = "AUD_USD",
units=1000,
side="buy",
type="limit",
price=0.7420,
expiry=trade_expire)
print(response)
pd.Series(response["orderOpened"])
order_id = response["orderOpened"]['id']
Explanation: For a detailed explanation of the above, please refer to Rates Information.
End of explanation
response = oanda.get_orders(account_id)
print(response)
pd.DataFrame(response['orders'])
Explanation: Getting Open Orders
get_orders(self, account_id, **params)
End of explanation
response = oanda.get_orders(account_id)
id = response['orders'][0]['id']
oanda.get_order(account_id, order_id=id)
Explanation: Getting Specific Order Information
get_order(self, account_id, order_id, **params)
End of explanation
response = oanda.get_orders(account_id)
id = response['orders'][0]['id']
oanda.modify_order(account_id, order_id=id, price=0.7040)
Explanation: Modify Order
modify_order(self, account_id, order_id, **params)
End of explanation
response = oanda.get_orders(account_id)
id = response['orders'][0]['id']
oanda.close_order(account_id, order_id=id)
Explanation: Close Order
close_order(self, account_id, order_id, **params)
End of explanation
oanda.get_orders(account_id)
Explanation: Now when we check the orders, the one above has been closed and removed without being filled; only one outstanding order remains.
End of explanation |
11,839 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Assignment
Step1: General information on the Gapminder data
Step2: But the unemployment rate is not provided directly. In the database, the employment rate (% of the popluation) is available. So the unemployement rate will be computed as 100 - employment rate
Step3: The first records of the data restricted to the three analyzed variables are
Step4: Data analysis
We will now have a look at the frequencies of the variables after grouping them as all three are continuous variables. I will group the data in intervals using the cut function.
Internet use rate frequencies
Step5: Suicide per 100,000 people frequencies
Step6: Unemployment rate frequencies | Python Code:
# Load a useful Python libraries for handling data
import pandas as pd
import numpy as np
from IPython.display import Markdown, display
# Read the data
data_filename = r'gapminder.csv'
data = pd.read_csv(data_filename, low_memory=False)
data = data.set_index('country')
Explanation: Assignment: Making Data Management Decisions - Python
Following is the Python program I wrote to fulfill the third assignment of the Data Management and Visualization online course.
I decided to use Jupyter Notebook as it is a pretty way to write code and present results.
Research question
Using the Gapminder database, I would like to see whether increasing Internet usage results in an increasing suicide rate. A study shows that other factors, like unemployment, could have a great impact.
So for this third assignment, the three following variables will be analyzed:
Internet Usage Rate (per 100 people)
Suicide Rate (per 100 000 people)
Unemployment Rate (% of the population of age 15+)
Data management
For this question, countries with missing data will be discarded. As missing values in the Gapminder database are loaded directly as NaN, no special data treatment is needed.
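If you did want to drop those countries explicitly (after the numeric conversion done in the next cell), a minimal sketch would be:
complete_cases = data.dropna(subset=['internetuserate', 'suicideper100th', 'employrate'])
print("Countries dropped: {}".format(len(data) - len(complete_cases)))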
End of explanation
display(Markdown("Number of countries: {}".format(len(data))))
display(Markdown("Number of variables: {}".format(len(data.columns))))
# Convert interesting variables in numeric format
for variable in ('internetuserate', 'suicideper100th', 'employrate'):
data[variable] = pd.to_numeric(data[variable], errors='coerce')
Explanation: General information on the Gapminder data
End of explanation
data['unemployrate'] = 100. - data['employrate']
Explanation: But the unemployment rate is not provided directly. In the database, the employment rate (% of the population) is available. So the unemployment rate will be computed as 100 - employment rate:
End of explanation
subdata = data[['internetuserate', 'suicideper100th', 'unemployrate']]
subdata.head(10)
Explanation: The first records of the data restricted to the three analyzed variables are:
End of explanation
display(Markdown("Internet Use Rate (min, max) = ({0:.2f}, {1:.2f})".format(subdata['internetuserate'].min(), subdata['internetuserate'].max())))
internetuserate_bins = pd.cut(subdata['internetuserate'],
bins=np.linspace(0, 100., num=21))
counts1 = internetuserate_bins.value_counts(sort=False, dropna=False)
percentage1 = internetuserate_bins.value_counts(sort=False, normalize=True, dropna=False)
data_struct = {
'Counts' : counts1,
'Cumulative counts' : counts1.cumsum(),
'Percentages' : percentage1,
'Cumulative percentages' : percentage1.cumsum()
}
internetrate_summary = pd.DataFrame(data_struct)
internetrate_summary.index.name = 'Internet use rate (per 100 people)'
(internetrate_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]
.style.set_precision(3)
.set_properties(**{'text-align':'right'}))
Explanation: Data analysis
We will now have a look at the frequencies of the variables after grouping them as all three are continuous variables. I will group the data in intervals using the cut function.
Internet use rate frequencies
End of explanation
display(Markdown("Suicide per 100,000 people (min, max) = ({:.2f}, {:.2f})".format(subdata['suicideper100th'].min(), subdata['suicideper100th'].max())))
suiciderate_bins = pd.cut(subdata['suicideper100th'],
bins=np.linspace(0, 40., num=21))
counts2 = suiciderate_bins.value_counts(sort=False, dropna=False)
percentage2 = suiciderate_bins.value_counts(sort=False, normalize=True, dropna=False)
data_struct = {
'Counts' : counts2,
'Cumulative counts' : counts2.cumsum(),
'Percentages' : percentage2,
'Cumulative percentages' : percentage2.cumsum()
}
suiciderate_summary = pd.DataFrame(data_struct)
suiciderate_summary.index.name = 'Suicide (per 100 000 people)'
(suiciderate_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]
.style.set_precision(3)
.set_properties(**{'text-align':'right'}))
Explanation: Suicide per 100,000 people frequencies
End of explanation
display(Markdown("Unemployment rate (min, max) = ({0:.2f}, {1:.2f})".format(subdata['unemployrate'].min(), subdata['unemployrate'].max())))
unemployment_bins = pd.cut(subdata['unemployrate'],
bins=np.linspace(0, 100., num=21))
counts3 = unemployment_bins.value_counts(sort=False, dropna=False)
percentage3 = unemployment_bins.value_counts(sort=False, normalize=True, dropna=False)
data_struct = {
'Counts' : counts3,
'Cumulative counts' : counts3.cumsum(),
'Percentages' : percentage3,
'Cumulative percentages' : percentage3.cumsum()
}
unemployment_summary = pd.DataFrame(data_struct)
unemployment_summary.index.name = 'Unemployement rate (% population age 15+)'
(unemployment_summary[['Counts', 'Cumulative counts', 'Percentages', 'Cumulative percentages']]
.style.set_precision(3)
.set_properties(**{'text-align':'right'}))
Explanation: Unemployment rate frequencies
End of explanation |
11,840 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Simple change detection using annual mean NDVI
This example notebook describes how to retrieve data for a small region and epoch of interest, concatenate data from available sensors and calculate the annual mean NDVI values. You can then select a location of interest based on the change between years, retrieve an NDVI time series for that location and select imagery from before and after the change event
Step1: PQ and Index preparation
Step3: Plotting an image, view the transect and select a location to retrieve a time series
Step4: This section is for viewing a time series of NDVI - and retrieving the image that corresponds with a particular point on a time series | Python Code:
%pylab notebook
from __future__ import print_function
import datacube
import xarray as xr
from datacube.helpers import ga_pq_fuser
from datacube.storage import masking
from datacube.storage.masking import mask_to_dict
from matplotlib.backends.backend_pdf import PdfPages
from matplotlib import pyplot as plt
import matplotlib.dates
from IPython.display import display
import ipywidgets as widgets
import rasterio
dc = datacube.Datacube(app='dc-show changes in annual mean NDVI values')
#### DEFINE SPATIOTEMPORAL RANGE AND BANDS OF INTEREST
#Use this to manually define an upper left/lower right coords
#Either as polygon or as lat/lon range
#Define temporal range
start_of_epoch = '2008-01-01'
#need a variable here that defines a rolling 'latest observation'
end_of_epoch = '2013-12-31'
#Define wavelengths/bands of interest, remove this kwarg to retrieve all bands
bands_of_interest = [#'blue',
'green',
'red',
'nir',
'swir1',
#'swir2'
]
#Define sensors of interest, # out sensors that aren't relevant for the time period
sensors = [
'ls8', #May 2013 to present
'ls7', #1999 to present
'ls5' #1986 to present, full contintal coverage from 1987 onwards
]
query = {
'time': (start_of_epoch, end_of_epoch),
}
#The example shown here is for the Black Saturday Fires in Victoria, but you can update with coordinates for
#your area of interest
lat_max = -37.42
lat_min = -37.6
lon_max = 145.35
lon_min = 145.1
query['x'] = (lon_min, lon_max)
query['y'] = (lat_max, lat_min)
query['crs'] = 'EPSG:4326'
print(query)
Explanation: Simple change detection using annual mean NDVI
This example notebook describes how to retrieve data for a small region and epoch of interest, concatenate data from available sensors and calculate the annual mean NDVI values. You can then select a location of interest based on the change between years, retrieve an NDVI time series for that location and select imagery from before and after the change event
End of explanation
#Define which pixel quality artefacts you want removed from the results
mask_components = {'cloud_acca':'no_cloud',
'cloud_shadow_acca' :'no_cloud_shadow',
'cloud_shadow_fmask' : 'no_cloud_shadow',
'cloud_fmask' :'no_cloud',
'blue_saturated' : False,
'green_saturated' : False,
'red_saturated' : False,
'nir_saturated' : False,
'swir1_saturated' : False,
'swir2_saturated' : False,
'contiguous':True}
#Retrieve the NBAR and PQ data for sensor n
sensor_clean = {}
for sensor in sensors:
#Load the NBAR and corresponding PQ
sensor_nbar = dc.load(product= sensor+'_nbar_albers', group_by='solar_day', measurements = bands_of_interest, **query)
sensor_pq = dc.load(product= sensor+'_pq_albers', group_by='solar_day', fuse_func=ga_pq_fuser, **query)
#grab the projection info before masking/sorting
crs = sensor_nbar.crs
crswkt = sensor_nbar.crs.wkt
affine = sensor_nbar.affine
#Apply the PQ masks to the NBAR
cloud_free = masking.make_mask(sensor_pq, **mask_components)
good_data = cloud_free.pixelquality.loc[start_of_epoch:end_of_epoch]
sensor_nbar = sensor_nbar.where(good_data)
sensor_clean[sensor] = sensor_nbar
#Concatenate data from different sensors together and sort so that observations are sorted by time rather
# than sensor
nbar_clean = xr.concat(sensor_clean.values(), dim='time')
time_sorted = nbar_clean.time.argsort()
nbar_clean = nbar_clean.isel(time=time_sorted)
nbar_clean.attrs['crs'] = crs
nbar_clean.attrs['affine'] = affine
#Calculate NDVI
ndvi = ((nbar_clean.nir-nbar_clean.red)/(nbar_clean.nir+nbar_clean.red))
#This controls the colour maps used for plotting NDVI
ndvi_cmap = mpl.colors.ListedColormap(['blue', '#ffcc66','#ffffcc' , '#ccff66' , '#2eb82e', '#009933' , '#006600'])
ndvi_bounds = [-1, 0, 0.1, 0.25, 0.35, 0.5, 0.8, 1]
ndvi_norm = mpl.colors.BoundaryNorm(ndvi_bounds, ndvi_cmap.N)
ndvi.attrs['crs'] = crs
ndvi.attrs['affine'] = affine
#Calculate annual average NDVI values
annual_ndvi = ndvi.groupby('time.year')
annual_mean = annual_ndvi.mean(dim = 'time') #The .mean argument here can be replaced by max, min, median
#but you'll need to update the code below here accordingly
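# For example, a hypothetical alternative summary statistic (not used below):
# annual_median = annual_ndvi.median(dim='time')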
Explanation: PQ and Index preparation
End of explanation
fig = plt.figure()
#Plot the mean NDVI values for a year of interest (yoi)
#Dark green = high amounts of green vegetation through to yellows and oranges being lower amounts of vegetation,
#Blue indicates a NDVI < 0 typically associated with water
yoi = 2009
plt.title('Average annual NDVI for '+str(yoi))
arr_yoi = annual_mean.sel(year =yoi)
plt.imshow(arr_yoi.squeeze(), interpolation = 'nearest', cmap = ndvi_cmap, norm = ndvi_norm,
           extent=[arr_yoi.coords['x'].min(), arr_yoi.coords['x'].max(),
                   arr_yoi.coords['y'].min(), arr_yoi.coords['y'].max()])
#Calculate the difference between in mean NDVI between two years, a reference year and a change year
fig = plt.figure()
#Define the year you wish to use as a reference point
ref_year = 2008
#Define the year you wish to use to detect change
change_year = 2009
nd_ref_year = annual_mean.sel(year = (ref_year))
nd_change_year =annual_mean.sel(year = (change_year))
nd_dif = nd_change_year - nd_ref_year
nd_dif.plot(cmap = 'RdYlGn')
#Click on this image to chose the location for time series extraction
w = widgets.HTML("Event information appears here when you click on the figure")
def callback(event):
global x, y
x, y = int(event.xdata + 0.5), int(event.ydata + 0.5)
w.value = 'X: {}, Y: {}'.format(x,y)
fig.canvas.mpl_connect('button_press_event', callback)
plt.title('Change in mean NDVI between '+str(ref_year)+' and '+str(change_year))
plt.show()
display(w)
Explanation: Plotting an image, view the transect and select a location to retrieve a time series
End of explanation
#this converts the map x coordinate into image x coordinates
image_coords = ~affine * (x, y)
imagex = int(image_coords[0])
imagey = int(image_coords[1])
#retrieve the time series for the pixel location clicked above
ts_ndvi = ndvi.isel(x=[imagex],y=[imagey]).dropna('time', how = 'any')
#Use this plot to visualise a time series and select the image that corresponds with a point in the time series
def callback(event):
global time_int, devent
devent = event
time_int = event.xdata
#time_int_ = time_int.astype(datetime64[D])
w.value = 'time_int: {}'.format(time_int)
fig = plt.figure(figsize=(8,5))
fig.canvas.mpl_connect('button_press_event', callback)
plt.show()
display(w)
#firstyear = '2010-01-01'
#lastyear = '2014-12-31'
ts_ndvi.plot(linestyle= '--', c= 'b', marker = '8', mec = 'b', mfc ='r')
plt.grid()
#plt.axis([firstyear , lastyear ,0, 1])
#Convert the point clicked in the time series to a date and retrieve the corresponding image
time_slice = matplotlib.dates.num2date(time_int).date()
rgb = nbar_clean.sel(time =time_slice, method = 'nearest').to_array(dim='color').sel(color=['swir1', 'nir', 'green']).transpose('y', 'x', 'color')
fake_saturation = 6000
clipped_visible = rgb.where(rgb<fake_saturation).fillna(fake_saturation)
max_val = clipped_visible.max(['y', 'x'])
scaled = (clipped_visible / max_val)
#This image shows the time slice of choice and the location of the time series
fig = plt.figure(figsize =(12,6))
#plt.scatter(x=trans.coords['x'], y=trans.coords['y'], c='r')
plt.scatter(x = [x], y = [y], c= 'yellow', marker = 'D')
plt.imshow(scaled, interpolation = 'nearest',
extent=[scaled.coords['x'].min(), scaled.coords['x'].max(),
scaled.coords['y'].min(), scaled.coords['y'].max()])
plt.title(time_slice)
plt.show()
Explanation: This section is for viewing a time series of NDVI - and retrieving the image that corresponds with a particular point on a time series
End of explanation |
11,841 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Initial setup
Step1: Ploynomial Least-Squares Fit Algorithm
Step2: Mean-Square Error as Measure of Fit
Step3: Best Historical MSE = Best Future Performance?
Step4: Overfitting!
So what now? This is where we use all of the methods for preventing overfitting | Python Code:
## Setup the path for our codebase
import sys
sys.path.append( '../code/' )
## import our time_series codebase
import time_series.generated_datasets
import time_series.result_set
import time_series.algorithm
dataset_0 = time_series.generated_datasets.DSS[0]( 10 )
dataset_0.taxonomy
%matplotlib inline
import matplotlib.pyplot as plt
time_series.generated_datasets.plot_dataset( dataset_0 )
Explanation: Initial setup
End of explanation
## import numpy for polyfit
import numpy as np
##
# Define a new algorithm for a polynomial least-squares fit
class PolyFitAlg( time_series.algorithm.AlertAlgorithm):
def __init__(self, order):
self.order = order
time_series.algorithm.AlertAlgorithm.__init__( self, "Polyfit[{0}]".format(order) )
def __call__( self, target, history ):
# fit the polynomail to history
n = len(history)
poly = np.poly1d( np.polyfit( xrange(n), history, self.order ) )
expected = poly(n)
difference = abs(target - expected)
if target != 0:
fraction = difference / abs(target)
else:
# Assume target is actuall 1, so absolute difference instead of fraction
fraction = difference
result = {
'target' : target,
'expected' : expected,
'order' : self.order,
'difference' : difference,
'fraction' : fraction,
'poly' : poly,
}
return fraction, result
alg_pf = PolyFitAlg( 4 )
frac, res = alg_pf( 10.0, xrange(10) )
res
plt.plot( xrange(13), res['poly']( xrange(13) ))
frac,res = alg_pf( dataset_0.time_series[-1], dataset_0.time_series[:-1] )
res
n = len(dataset_0.time_series)
plt.plot( xrange(n), res['poly']( xrange(n)) )
plt.hold( True )
plt.plot( xrange(n-1), dataset_0.time_series[:-1], 'r.' )
dataset_0.taxonomy
n = len(dataset_0.time_series)
plt.plot( xrange(n), res['poly']( xrange(n)) )
plt.hold( True )
plt.plot( xrange(n), dataset_0.time_series, 'r.' )
alg_pf8 = PolyFitAlg( 8 )
frac8,res8 = alg_pf8( dataset_0.time_series[-1], dataset_0.time_series[:-1] )
n = len(dataset_0.time_series)
plt.figure()
plt.plot( xrange(n-1), res8['poly']( xrange(n-1)) )
plt.hold( True )
plt.plot( xrange(n-1), dataset_0.time_series[:-1], 'r.', ms=10 )
plt.figure()
plt.plot( xrange(n + 2), res8['poly']( xrange(n + 2)) )
plt.hold( True )
plt.plot( xrange(n), dataset_0.time_series, 'r.', ms=10 )
Explanation: Ploynomial Least-Squares Fit Algorithm
End of explanation
## compute the mean squared error for a history and a PolyFit algorithm
def polyfit_mse( alg, history ):
# first fit the algorithm with a dummy target
frac, res = alg( history[-1], history )
# ok, grab polynomial from fit and compute errors
poly = res['poly']
x = xrange(len(history))
errors = np.array(history) - poly(x)
# compute mean squared error
mse = np.mean( errors * errors.transpose() )
return mse
mse_pf4 = polyfit_mse( alg_pf, dataset_0.time_series[:-1] )
mse_pf8 = polyfit_mse( alg_pf8, dataset_0.time_series[:-1] )
print "order 4 MSE: {0}".format( mse_pf4 )
print "order 8 MSE: {0}".format( mse_pf8 )
Explanation: Mean-Square Error as Measure of Fit
End of explanation
run_spec_pf4 = time_series.result_set.RunSpec( time_series.generated_datasets.DSS[0], alg_pf)
run_spec_pf8 = time_series.result_set.RunSpec( time_series.generated_datasets.DSS[0], alg_pf8)
rset_pf4 = run_spec_pf4.collect_results( 20, 5, 9 )
rset_pf8 = run_spec_pf8.collect_results( 20, 5, 9 )
stats_pf4 = time_series.result_set.compute_classifier_stats( rset_pf4, 0.5 )
stats_pf8 = time_series.result_set.compute_classifier_stats( rset_pf8, 0.5 )
print "order 4 stats: {0}".format( stats_pf4 )
print "order 8 stats: {0}".format( stats_pf8 )
Explanation: Best Historical MSE = Best Future Performance?
End of explanation
def aic(alg, history):  # information-criterion idea: historical fit (MSE) plus a complexity penalty
    return polyfit_mse(alg, history) + np.log(len(history)) * alg.order
Explanation: Overfitting!
So what now? This is where we use all of the methods for preventing overfitting:
- Cross-validation based model selection
- AIC/BIC all those information criteria which weight model fit versus moder complexity
- Algorithm Stability ( stability == generalizability )
- Statistical Learning Theory to learn using Structures (Structure Learning Theory)
- Choose a better performance criterion than historical fit
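As a minimal sketch of the first idea (cross-validation based selection of the polynomial order), reusing numpy as np from above; the candidate orders and hold-out size are arbitrary choices:
def cv_select_order(history, orders=(2, 4, 6, 8), holdout=3):
    # Fit each candidate order on the head of the series, score it on the held-out tail
    train, test = history[:-holdout], history[-holdout:]
    scores = {}
    for order in orders:
        poly = np.poly1d(np.polyfit(range(len(train)), train, order))
        preds = poly(np.arange(len(train), len(history)))
        scores[order] = float(np.mean((np.asarray(test) - preds) ** 2))
    return min(scores, key=scores.get)
For example, cv_select_order(dataset_0.time_series) picks the order with the lowest hold-out error rather than the best historical fit.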
End of explanation |
11,842 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Pythonic pandas
Using a tutorial about fast flexible pandas.
The pandas Python package is an effective way to examine and manipulate data.
The source code is available on Github, and be sure to check out pandas' library of extension modules.
Be careful when writing code for pandas, because Pythonic code may not necessarily be a good idea.
Like NumPy, pandas is designed for vectorized operations that replace explicit loops with array expressions.
This tutorial will attempt to demonstrate Pythonic pandas that will make the best use of the language and the library.
Our Task
The goal of this example will be to apply time-of-use energy tariffs to find the total cost of energy consumption for one year.
Make sure that you are up to speed with basic data selection and indexing.
Our problem is that at different hours of the day, the price for electricity varies, so the task is to multiply the electricity consumed for each hour by the correct price for the hour in which it was consumed.
Let’s read our data from a CSV file that has two columns
Step1: The rows contains the electricity used in each hour for a one year period.
Each row indicates the usage for the hour starting at the specified time, so 1/1/13 0:00 indicates the usage for the first hour of January 1st.
Step2: Both pandas and Numpy use the concept of dtypes as data types, and if no arguments are specified, date_time will take on an object dtype.
Step3: This will be an issue with any column that can't neatly fit into a single data type.
Working with dates as strings is also an inefficient use of memory and programmer time (not to mention patience).
This exercise will work with time series data, and the date_time column will be formatted as an array of datetime objects called a pandas.Timestamp.
Step4: If you're curious about alternatives to the code above, check out pandas.PeriodIndex, which can store ordinal values indicating regular time periods.
We now have a pandas.DataFrame called nrg that contains the data from our .csv file.
Notice how the time is displayed differently in the date_time column.
Step5: Time for a timing decorator
The code above is pretty straightforward, but how fast does it run?
Let's find out by using a timing decorator called @timeit (an homage to Python's timeit).
This decorator behaves like timeit.repeat(), but it also allows you to return the result of the function itself as well as get the average runtime from multiple trials.
When you create a function and place the @timeit decorator above it, the function will be timed every time it is called.
Keep in mind that the decorator runs an outer and an inner loop.
Step6: One easily overlooked detail is that the datetimes in the energy_consumption.csv file are not in ISO 8601 format.
You need YYYY-MM-DD HH:MM.
Step8: However, our hourly costs depend on the time of day.
If you use a loop to do the conditional calculation, you are not using pandas the way it was intended.
For the rest of this tutorial, you'll start with a sub-optimal solution and work your way up to a Pythonic approach that leverages the full power of pandas.
Take a look at a loop approach and see how it performs using our timing decorator.
Step10: Now for a computationally expensive and non-Pythonic loop
Step11: You can consider the above to be an “antipattern” in pandas for several reasons.
First, initialize a list in which the outputs will be recorded.
Second, use the opaque object range(0, len(df)) to loop through nrg, then apply apply_rate(), and append the result to a list used to make the new DataFrame column.
Third, chained indexing with df.iloc[i]['date_time'] may lead to unintended results.
Each of these increase the time cost of the calculations.
On my machine, this loop took about 3 seconds for 8760 rows of data.
Next, you’ll look at some improved solutions for iteration over Pandas structures.
Looping with .itertuples() and .iterrows()
Instead of looping through a range of objects, you can use generator methods that yield one row at a time.
.itertuples() yields a namedtuple() for each row, with the row’s index value as the first element of the tuple.
A namedtuple() is a data structure from Python’s collections module that behaves like a Python tuple but has fields accessible by attribute lookup.
.iterrows() yields pairs (tuples) of (index, Series) for each row in the DataFrame.
While .itertuples() tends to be a bit faster, let’s focus on pandas and use .iterrows() in this example. | Python Code:
import pandas as pd
pd.__version__
nrg = pd.read_csv('energy_consumption.csv'); nrg.describe(include='all')
Explanation: Pythonic pandas
This walkthrough is based on a tutorial about fast, flexible pandas.
The pandas Python package is an effective way to examine and manipulate data.
The source code is available on Github, and be sure to check out pandas' library of extension modules.
Be careful when writing code for pandas, because Pythonic code may not necessarily be a good idea.
Like NumPy, pandas is designed for vectorized operations that replace explicit loops with array expressions.
This tutorial will attempt to demonstrate Pythonic pandas that will make the best use of the language and the library.
Our Task
The goal of this example will be to apply time-of-use energy tariffs to find the total cost of energy consumption for one year.
Make sure that you are up to speed with basic data selection and indexing.
Our problem is that at different hours of the day, the price for electricity varies, so the task is to multiply the electricity consumed for each hour by the correct price for the hour in which it was consumed.
Let’s read our data from a CSV file that has two columns: one for date plus time and one for electrical energy consumed in kilowatt hours (kWh):
End of explanation
nrg.head()
Explanation: The rows contains the electricity used in each hour for a one year period.
Each row indicates the usage for the hour starting at the specified time, so 1/1/13 0:00 indicates the usage for the first hour of January 1st.
Working with datetime data
Let's take a closer look at our data:
End of explanation
nrg.dtypes
# https://docs.python.org/3/library/functions.html#type
# https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iat.html
type(nrg.iat[0,0])
Explanation: Both pandas and Numpy use the concept of dtypes as data types, and if no arguments are specified, date_time will take on an object dtype.
End of explanation
nrg['date_time'] = pd.to_datetime(nrg['date_time'])
# https://stackoverflow.com/questions/29206612/difference-between-data-type-datetime64ns-and-m8ns
nrg['date_time'].dtype
Explanation: This will be an issue with any column that can't neatly fit into a single data type.
Working with dates as strings is also an inefficient use of memory and programmer time (not to mention patience).
This exercise will work with time series data, and the date_time column will be formatted as an array of datetime objects called a pandas.Timestamp.
End of explanation
nrg.head()
Explanation: If you're curious about alternatives to the code above, check out pandas.PeriodIndex, which can store ordinal values indicating regular time periods.
We now have a pandas.DataFrame called nrg that contains the data from our .csv file.
Notice how the time is displayed differently in the date_time column.
End of explanation
from timer import timeit
@timeit(repeat=3, number=10)
def convert_with_format(nrg, column_name):
return pd.to_datetime(nrg[column_name], format='%d/%m/%y %H:%M')
nrg['date_time'] = convert_with_format(nrg, 'date_time')
Explanation: Time for a timing decorator
The code above is pretty straightforward, but how fast does it run?
Let's find out by using a timing decorator called @timeit (an homage to Python's timeit).
This decorator behaves like timeit.repeat(), but it also allows you to return the result of the function itself as well as get the average runtime from multiple trials.
When you create a function and place the @timeit decorator above it, the function will be timed every time it is called.
Keep in mind that the decorator runs an outer and an inner loop.
End of explanation
nrg['cost_cents'] = nrg['energy_kwh'] * 28; nrg.head()
Explanation: One easily overlooked detail is that the datetimes in the energy_consumption.csv file are not in ISO 8601 format.
You need YYYY-MM-DD HH:MM.
If you don’t specify a format, Pandas will use the dateutil package to convert each string to a date.
Conversely, if the raw datetime data is already in ISO 8601 format, pandas can immediately take a fast route to parsing the dates.
This is one reason why being explicit about the format is so beneficial here.
Another option is to pass the infer_datetime_format=True parameter, but it generally pays to be explicit.
Also, remember that pandas' read_csv() method allows you to parse dates as part of the file I/O using the parse_dates, infer_datetime_format, and date_parser parameters.
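For instance, a hedged sketch of pushing the conversion into the file read itself (same file and format string as above):
nrg = pd.read_csv('energy_consumption.csv',
                  parse_dates=['date_time'],
                  date_parser=lambda col: pd.to_datetime(col, format='%d/%m/%y %H:%M'))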
Simple Looping Over Pandas Data
Now that dates and times are in a tidy format, we can begin calculating electricity costs.
Cost varies by hour, so a cost factor is conditionally applied for each hour of the day:
| Usage | Cents per kWh | Time Range |
|-------------|----------------|----------------|
| Peak | 28 | 17:00 to 24:00 |
| Shoulder | 20 | 7:00 to 17:00 |
| Off-Peak | 12 | 0:00 to 7:00 |
If costs were a flat rate of 28 cents per kilowatt hour every hour, we could just do this:
End of explanation
# Create a function to apply the appropriate rate to the given hour:
def apply_rate(kwh, hour):
Calculates the cost of electricity for a given hour.
if 0 <= hour < 7:
rate = 12
    elif 7 <= hour < 17:
rate = 20
    elif 17 <= hour < 24:
rate = 28
else:
# +1 for error handling:
raise ValueError(f'Invalid datetime entry: {hour}')
return rate * kwh
Explanation: However, our hourly costs depend on the time of day.
If you use a loop to do the conditional calculation, you are not using pandas the way it was intended.
For the rest of this tutorial, you'll start with a sub-optimal solution and work your way up to a Pythonic approach that leverages the full power of pandas.
Take a look at a loop approach and see how it performs using our timing decorator.
End of explanation
# Not the best way:
@timeit(repeat=2, number = 10)
def apply_rate_loop(nrg):
Calculate the costs using a loop, and modify `nrg` dataframe in place.
energy_cost_list = []
for i in range(len(nrg)):
# Get electricity used and the corresponding rate.
energy_used = nrg.iloc[i]['energy_kwh']
hour = nrg.iloc[i]['date_time'].hour
energy_cost = apply_rate(energy_used, hour)
energy_cost_list.append(energy_cost)
nrg['cost_cents'] = energy_cost_list
apply_rate_loop(nrg)
Explanation: Now for a computationally expensive and non-Pythonic loop:
End of explanation
@timeit(repeat=2, number=10)
def apply_rate_iterrows(nrg):
energy_cost_list = []
for index, row in nrg.iterrows():
energy_used = row['energy_kwh']
hour = row['date_time'].hour
energy_cost = apply_rate(energy_used, hour)
energy_cost_list.append(energy_cost)
nrg['cost_cents'] = energy_cost_list
apply_rate_iterrows(nrg)
Explanation: You can consider the above to be an “antipattern” in pandas for several reasons.
First, initialize a list in which the outputs will be recorded.
Second, use the opaque object range(0, len(df)) to loop through nrg, then apply apply_rate(), and append the result to a list used to make the new DataFrame column.
Third, chained indexing with df.iloc[i]['date_time'] may lead to unintended results.
Each of these increases the time cost of the calculations.
On my machine, this loop took about 3 seconds for 8760 rows of data.
Next, you’ll look at some improved solutions for iteration over Pandas structures.
Looping with .itertuples() and .iterrows()
Instead of looping through a range of objects, you can use generator methods that yield one row at a time.
.itertuples() yields a namedtuple() for each row, with the row’s index value as the first element of the tuple.
A namedtuple() is a data structure from Python’s collections module that behaves like a Python tuple but has fields accessible by attribute lookup.
.iterrows() yields pairs (tuples) of (index, Series) for each row in the DataFrame.
While .itertuples() tends to be a bit faster, let’s focus on pandas and use .iterrows() in this example.
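For reference, a sketch of the .itertuples() variant of the same calculation (attribute access instead of label-based lookup; timings will of course vary by machine):
@timeit(repeat=2, number=10)
def apply_rate_itertuples(nrg):
    energy_cost_list = []
    for row in nrg.itertuples():
        energy_cost_list.append(apply_rate(row.energy_kwh, row.date_time.hour))
    nrg['cost_cents'] = energy_cost_list
apply_rate_itertuples(nrg)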
End of explanation |
11,843 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers
Step2: Important Note
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step9: Expected Output
Step10: 3.1 - Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell.
The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
Step11: 3.2 - Loading a pretrained model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
Step12: This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
Step13: Note
Step14: You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.
3.4 - Filtering boxes
yolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Lets now call yolo_eval, which you had previously implemented, to do this.
Step16: 3.5 - Run the graph on an image
Let the fun begin. You have created a (sess) graph that can be summarized as follows
Step17: Run the following cell on the "test.jpg" image to verify that your function is correct. | Python Code:
import argparse
import os
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow
import scipy.io
import scipy.misc
import numpy as np
import pandas as pd
import PIL
import tensorflow as tf
from keras import backend as K
from keras.layers import Input, Lambda, Conv2D
from keras.models import load_model, Model
from yolo_utils import read_classes, read_anchors, generate_colors, preprocess_image, draw_boxes, scale_boxes
from yad2k.models.keras_yolo import yolo_head, yolo_boxes_to_corners, preprocess_true_boxes, yolo_loss, yolo_body
%matplotlib inline
Explanation: Autonomous driving - Car detection
Welcome to your week 3 programming assignment. You will learn about object detection using the very powerful YOLO model. Many of the ideas in this notebook are described in the two YOLO papers: Redmon et al., 2016 (https://arxiv.org/abs/1506.02640) and Redmon and Farhadi, 2016 (https://arxiv.org/abs/1612.08242).
You will learn to:
- Use object detection on a car detection dataset
- Deal with bounding boxes
Run the following cell to load the packages and dependencies that are going to be useful for your journey!
End of explanation
# GRADED FUNCTION: yolo_filter_boxes
def yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = .6):
Filters YOLO boxes by thresholding on object and class confidence.
Arguments:
box_confidence -- tensor of shape (19, 19, 5, 1)
boxes -- tensor of shape (19, 19, 5, 4)
box_class_probs -- tensor of shape (19, 19, 5, 80)
threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
Returns:
scores -- tensor of shape (None,), containing the class probability score for selected boxes
boxes -- tensor of shape (None, 4), containing (b_x, b_y, b_h, b_w) coordinates of selected boxes
classes -- tensor of shape (None,), containing the index of the class detected by the selected boxes
Note: "None" is here because you don't know the exact number of selected boxes, as it depends on the threshold.
For example, the actual output size of scores would be (10,) if there are 10 boxes.
# Step 1: Compute box scores
### START CODE HERE ### (≈ 1 line)
box_scores = box_confidence * box_class_probs
### END CODE HERE ###
# Step 2: Find the box_classes thanks to the max box_scores, keep track of the corresponding score
### START CODE HERE ### (≈ 2 lines)
box_classes = K.argmax(box_class_probs, -1)
box_class_scores = K.max(box_scores, -1)
### END CODE HERE ###
# Step 3: Create a filtering mask based on "box_class_scores" by using "threshold". The mask should have the
# same dimension as box_class_scores, and be True for the boxes you want to keep (with probability >= threshold)
### START CODE HERE ### (≈ 1 line)
filtering_mask = box_class_scores >= threshold
### END CODE HERE ###
# Step 4: Apply the mask to scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = tf.boolean_mask(box_class_scores, filtering_mask)
# print(filtering_mask.shape, boxes.shape, box_class_scores.shape, box_classes.shape)
boxes = tf.boolean_mask(boxes, filtering_mask)
classes = tf.boolean_mask(box_classes, filtering_mask)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_a:
box_confidence = tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([19, 19, 5, 4], mean=1, stddev=4, seed = 1)
box_class_probs = tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, threshold = 0.5)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.shape))
print("boxes.shape = " + str(boxes.shape))
print("classes.shape = " + str(classes.shape))
Explanation: Important Note: As you can see, we import Keras's backend as K. This means that to use a Keras function in this notebook, you will need to write: K.function(...).
1 - Problem Statement
You are working on a self-driving car. As a critical component of this project, you'd like to first build a car detection system. To collect data, you've mounted a camera to the hood (meaning the front) of the car, which takes pictures of the road ahead every few seconds while you drive around.
<center>
<video width="400" height="200" src="nb_images/road_video_compressed2.mp4" type="video/mp4" controls>
</video>
</center>
<caption><center> Pictures taken from a car-mounted camera while driving around Silicon Valley. <br> We would like to especially thank drive.ai for providing this dataset! Drive.ai is a company building the brains of self-driving vehicles.
</center></caption>
<img src="nb_images/driveai.png" style="width:100px;height:100;">
You've gathered all these images into a folder and have labelled them by drawing bounding boxes around every car you found. Here's an example of what your bounding boxes look like.
<img src="nb_images/box_label.png" style="width:500px;height:250;">
<caption><center> <u> Figure 1 </u>: Definition of a box<br> </center></caption>
If you have 80 classes that you want YOLO to recognize, you can represent the class label $c$ either as an integer from 1 to 80, or as an 80-dimensional vector (with 80 numbers) one component of which is 1 and the rest of which are 0. The video lectures had used the latter representation; in this notebook, we will use both representations, depending on which is more convenient for a particular step.
In this exercise, you will learn how YOLO works, then apply it to car detection. Because the YOLO model is very computationally expensive to train, we will load pre-trained weights for you to use.
2 - YOLO
YOLO ("you only look once") is a popular algoritm because it achieves high accuracy while also being able to run in real-time. This algorithm "only looks once" at the image in the sense that it requires only one forward propagation pass through the network to make predictions. After non-max suppression, it then outputs recognized objects together with the bounding boxes.
2.1 - Model details
First things to know:
- The input is a batch of images of shape (m, 608, 608, 3)
- The output is a list of bounding boxes along with the recognized classes. Each bounding box is represented by 6 numbers $(p_c, b_x, b_y, b_h, b_w, c)$ as explained above. If you expand $c$ into an 80-dimensional vector, each bounding box is then represented by 85 numbers.
We will use 5 anchor boxes. So you can think of the YOLO architecture as the following: IMAGE (m, 608, 608, 3) -> DEEP CNN -> ENCODING (m, 19, 19, 5, 85).
Lets look in greater detail at what this encoding represents.
<img src="nb_images/architecture.png" style="width:700px;height:400;">
<caption><center> <u> Figure 2 </u>: Encoding architecture for YOLO<br> </center></caption>
If the center/midpoint of an object falls into a grid cell, that grid cell is responsible for detecting that object.
Since we are using 5 anchor boxes, each of the 19 x19 cells thus encodes information about 5 boxes. Anchor boxes are defined only by their width and height.
For simplicity, we will flatten the last two last dimensions of the shape (19, 19, 5, 85) encoding. So the output of the Deep CNN is (19, 19, 425).
<img src="nb_images/flatten.png" style="width:700px;height:400;">
<caption><center> <u> Figure 3 </u>: Flattening the last two last dimensions<br> </center></caption>
Now, for each box (of each cell) we will compute the following elementwise product and extract a probability that the box contains a certain class.
<img src="nb_images/probability_extraction.png" style="width:700px;height:400;">
<caption><center> <u> Figure 4 </u>: Find the class detected by each box<br> </center></caption>
Here's one way to visualize what YOLO is predicting on an image:
- For each of the 19x19 grid cells, find the maximum of the probability scores (taking a max across both the 5 anchor boxes and across different classes).
- Color that grid cell according to what object that grid cell considers the most likely.
Doing this results in this picture:
<img src="nb_images/proba_map.png" style="width:300px;height:300;">
<caption><center> <u> Figure 5 </u>: Each of the 19x19 grid cells colored according to which class has the largest predicted probability in that cell.<br> </center></caption>
Note that this visualization isn't a core part of the YOLO algorithm itself for making predictions; it's just a nice way of visualizing an intermediate result of the algorithm.
Another way to visualize YOLO's output is to plot the bounding boxes that it outputs. Doing that results in a visualization like this:
<img src="nb_images/anchor_map.png" style="width:200px;height:200;">
<caption><center> <u> Figure 6 </u>: Each cell gives you 5 boxes. In total, the model predicts: 19x19x5 = 1805 boxes just by looking once at the image (one forward pass through the network)! Different colors denote different classes. <br> </center></caption>
In the figure above, we plotted only boxes that the model had assigned a high probability to, but this is still too many boxes. You'd like to filter the algorithm's output down to a much smaller number of detected objects. To do so, you'll use non-max suppression. Specifically, you'll carry out these steps:
- Get rid of boxes with a low score (meaning, the box is not very confident about detecting a class)
- Select only one box when several boxes overlap with each other and detect the same object.
2.2 - Filtering with a threshold on class scores
You are going to apply a first filter by thresholding. You would like to get rid of any box for which the class "score" is less than a chosen threshold.
The model gives you a total of 19x19x5x85 numbers, with each box described by 85 numbers. It'll be convenient to rearrange the (19,19,5,85) (or (19,19,425)) dimensional tensor into the following variables:
- box_confidence: tensor of shape $(19 \times 19, 5, 1)$ containing $p_c$ (confidence probability that there's some object) for each of the 5 boxes predicted in each of the 19x19 cells.
- boxes: tensor of shape $(19 \times 19, 5, 4)$ containing $(b_x, b_y, b_h, b_w)$ for each of the 5 boxes per cell.
- box_class_probs: tensor of shape $(19 \times 19, 5, 80)$ containing the detection probabilities $(c_1, c_2, ... c_{80})$ for each of the 80 classes for each of the 5 boxes per cell.
Exercise: Implement yolo_filter_boxes().
1. Compute box scores by doing the elementwise product as described in Figure 4. The following code may help you choose the right operator:
python
a = np.random.randn(19*19, 5, 1)
b = np.random.randn(19*19, 5, 80)
c = a * b # shape of c will be (19*19, 5, 80)
2. For each box, find:
- the index of the class with the maximum box score (Hint) (Be careful with what axis you choose; consider using axis=-1)
- the corresponding box score (Hint) (Be careful with what axis you choose; consider using axis=-1)
3. Create a mask by using a threshold. As a reminder: ([0.9, 0.3, 0.4, 0.5, 0.1] < 0.4) returns: [False, True, False, False, True]. The mask should be True for the boxes you want to keep.
4. Use TensorFlow to apply the mask to box_class_scores, boxes and box_classes to filter out the boxes we don't want. You should be left with just the subset of boxes you want to keep. (Hint)
Reminder: to call a Keras function, you should use K.function(...).
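For intuition about steps 3-4, a tiny hedged demo of thresholding plus tf.boolean_mask on made-up scores (evaluate it inside a session, as elsewhere in this notebook):
demo_scores = tf.constant([0.9, 0.3, 0.7])
demo_mask = demo_scores >= 0.5
kept_scores = tf.boolean_mask(demo_scores, demo_mask)  # -> [0.9, 0.7] when evaluated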
End of explanation
# GRADED FUNCTION: iou
def iou(box1, box2):
Implement the intersection over union (IoU) between box1 and box2
Arguments:
box1 -- first box, list object with coordinates (x1, y1, x2, y2)
box2 -- second box, list object with coordinates (x1, y1, x2, y2)
# Calculate the (y1, x1, y2, x2) coordinates of the intersection of box1 and box2. Calculate its Area.
### START CODE HERE ### (≈ 5 lines)
xi1 = max(box1[0], box2[0])
yi1 = max(box1[1], box2[1])
xi2 = min(box1[2], box2[2])
yi2 = min(box1[3], box2[3])
    inter_area = max(xi2 - xi1, 0) * max(yi2 - yi1, 0)  # clamp at zero so non-overlapping boxes contribute no intersection
### END CODE HERE ###
# Calculate the Union area by using Formula: Union(A,B) = A + B - Inter(A,B)
### START CODE HERE ### (≈ 3 lines)
box1_area = (box1[2]-box1[0]) * (box1[3]-box1[1])
box2_area = (box2[2]-box2[0]) * (box2[3]-box2[1])
union_area = box1_area + box2_area - inter_area
### END CODE HERE ###
# compute the IoU
### START CODE HERE ### (≈ 1 line)
iou = inter_area / union_area
### END CODE HERE ###
return iou
box1 = (2, 1, 4, 3)
box2 = (1, 2, 3, 4)
print("iou = " + str(iou(box1, box2)))
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
10.7506
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 8.42653275 3.27136683 -0.5313437 -4.94137383]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
7
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(?,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(?, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(?,)
</td>
</tr>
</table>
2.3 - Non-max suppression
Even after filtering by thresholding over the classes scores, you still end up a lot of overlapping boxes. A second filter for selecting the right boxes is called non-maximum suppression (NMS).
<img src="nb_images/non-max-suppression.png" style="width:500px;height:400;">
<caption><center> <u> Figure 7 </u>: In this example, the model has predicted 3 cars, but it's actually 3 predictions of the same car. Running non-max suppression (NMS) will select only the most accurate (highest probabiliy) one of the 3 boxes. <br> </center></caption>
Non-max suppression uses the very important function called "Intersection over Union", or IoU.
<img src="nb_images/iou.png" style="width:500px;height:400;">
<caption><center> <u> Figure 8 </u>: Definition of "Intersection over Union". <br> </center></caption>
Exercise: Implement iou(). Some hints:
- In this exercise only, we define a box using its two corners (upper left and lower right): (x1, y1, x2, y2) rather than the midpoint and height/width.
- To calculate the area of a rectangle you need to multiply its height (y2 - y1) by its width (x2 - x1)
- You'll also need to find the coordinates (xi1, yi1, xi2, yi2) of the intersection of two boxes. Remember that:
- xi1 = maximum of the x1 coordinates of the two boxes
- yi1 = maximum of the y1 coordinates of the two boxes
- xi2 = minimum of the x2 coordinates of the two boxes
- yi2 = minimum of the y2 coordinates of the two boxes
In this code, we use the convention that (0,0) is the top-left corner of an image, (1,0) is the upper-right corner, and (1,1) the lower-right corner.
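As a quick sanity check on the convention, take the test boxes used in this notebook, box1 = (2, 1, 4, 3) and box2 = (1, 2, 3, 4): the intersection corners are (xi1, yi1, xi2, yi2) = (2, 2, 3, 3), so inter_area = 1; each box has area 2 x 2 = 4, so union_area = 4 + 4 - 1 = 7 and iou = 1/7 ≈ 0.1429.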
End of explanation
# GRADED FUNCTION: yolo_non_max_suppression
def yolo_non_max_suppression(scores, boxes, classes, max_boxes = 10, iou_threshold = 0.5):
Applies Non-max suppression (NMS) to set of boxes
Arguments:
scores -- tensor of shape (None,), output of yolo_filter_boxes()
boxes -- tensor of shape (None, 4), output of yolo_filter_boxes() that have been scaled to the image size (see later)
classes -- tensor of shape (None,), output of yolo_filter_boxes()
max_boxes -- integer, maximum number of predicted boxes you'd like
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (, None), predicted score for each box
boxes -- tensor of shape (4, None), predicted box coordinates
classes -- tensor of shape (, None), predicted class for each box
Note: The "None" dimension of the output tensors obviously has to be less than max_boxes. Note also that this
function will transpose the shapes of scores, boxes, classes. This is done for convenience.
max_boxes_tensor = K.variable(max_boxes, dtype='int32') # tensor to be used in tf.image.non_max_suppression()
K.get_session().run(tf.variables_initializer([max_boxes_tensor])) # initialize variable max_boxes_tensor
# Use tf.image.non_max_suppression() to get the list of indices corresponding to boxes you keep
### START CODE HERE ### (≈ 1 line)
nms_indices = tf.image.non_max_suppression(boxes, scores, max_boxes, iou_threshold)
### END CODE HERE ###
# Use K.gather() to select only nms_indices from scores, boxes and classes
### START CODE HERE ### (≈ 3 lines)
scores = K.gather(scores, nms_indices)
boxes = K.gather(boxes, nms_indices)
classes = K.gather(classes, nms_indices)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
scores = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
boxes = tf.random_normal([54, 4], mean=1, stddev=4, seed = 1)
classes = tf.random_normal([54,], mean=1, stddev=4, seed = 1)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
Explanation: Expected Output:
<table>
<tr>
<td>
**iou = **
</td>
<td>
0.14285714285714285
</td>
</tr>
</table>
You are now ready to implement non-max suppression. The key steps are:
1. Select the box that has the highest score.
2. Compute its overlap with all other boxes, and remove boxes that overlap it more than iou_threshold.
3. Go back to step 1 and iterate until there are no more boxes with a lower score than the currently selected box.
This will remove all boxes that have a large overlap with the selected boxes. Only the "best" boxes remain.
Exercise: Implement yolo_non_max_suppression() using TensorFlow. TensorFlow has two built-in functions that are used to implement non-max suppression (so you don't actually need to use your iou() implementation):
- tf.image.non_max_suppression()
- K.gather()
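For intuition, here is what steps 1-3 look like as a plain-Python loop built on top of the iou() function (just a sketch to make the algorithm concrete; the graded yolo_non_max_suppression() relies on the TensorFlow built-ins instead):
python
def naive_nms(scores, boxes, iou_threshold=0.5):
    # scores: list of floats; boxes: list of (x1, y1, x2, y2) tuples
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)                        # step 1: take the highest-scoring box
        keep.append(best)
        order = [i for i in order                  # step 2: drop boxes that overlap it too much
                 if iou(boxes[best], boxes[i]) <= iou_threshold]
    return keep                                    # step 3: repeat until no boxes remain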
End of explanation
# GRADED FUNCTION: yolo_eval
def yolo_eval(yolo_outputs, image_shape = (720., 1280.), max_boxes=10, score_threshold=.6, iou_threshold=.5):
Converts the output of YOLO encoding (a lot of boxes) to your predicted boxes along with their scores, box coordinates and classes.
Arguments:
yolo_outputs -- output of the encoding model (for image_shape of (608, 608, 3)), contains 4 tensors:
box_confidence: tensor of shape (None, 19, 19, 5, 1)
box_xy: tensor of shape (None, 19, 19, 5, 2)
box_wh: tensor of shape (None, 19, 19, 5, 2)
box_class_probs: tensor of shape (None, 19, 19, 5, 80)
image_shape -- tensor of shape (2,) containing the input shape, in this notebook we use (608., 608.) (has to be float32 dtype)
max_boxes -- integer, maximum number of predicted boxes you'd like
score_threshold -- real value, if [ highest class probability score < threshold], then get rid of the corresponding box
iou_threshold -- real value, "intersection over union" threshold used for NMS filtering
Returns:
scores -- tensor of shape (None, ), predicted score for each box
boxes -- tensor of shape (None, 4), predicted box coordinates
classes -- tensor of shape (None,), predicted class for each box
### START CODE HERE ###
# Retrieve outputs of the YOLO model (≈1 line)
box_confidence, box_xy, box_wh, box_class_probs = yolo_outputs
# Convert boxes to be ready for filtering functions
boxes = yolo_boxes_to_corners(box_xy, box_wh)
# Use one of the functions you've implemented to perform Score-filtering with a threshold of score_threshold (≈1 line)
scores, boxes, classes = yolo_filter_boxes(box_confidence, boxes, box_class_probs, score_threshold)
# Scale boxes back to original image shape.
boxes = scale_boxes(boxes, image_shape)
# Use one of the functions you've implemented to perform Non-max suppression with a threshold of iou_threshold (≈1 line)
scores, boxes, classes = yolo_non_max_suppression(scores, boxes, classes, max_boxes, iou_threshold)
### END CODE HERE ###
return scores, boxes, classes
with tf.Session() as test_b:
yolo_outputs = (tf.random_normal([19, 19, 5, 1], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 2], mean=1, stddev=4, seed = 1),
tf.random_normal([19, 19, 5, 80], mean=1, stddev=4, seed = 1))
scores, boxes, classes = yolo_eval(yolo_outputs)
print("scores[2] = " + str(scores[2].eval()))
print("boxes[2] = " + str(boxes[2].eval()))
print("classes[2] = " + str(classes[2].eval()))
print("scores.shape = " + str(scores.eval().shape))
print("boxes.shape = " + str(boxes.eval().shape))
print("classes.shape = " + str(classes.eval().shape))
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
6.9384
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[-5.299932 3.13798141 4.45036697 0.95942086]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
-2.24527
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
2.4 Wrapping up the filtering
It's time to implement a function taking the output of the deep CNN (the 19x19x5x85 dimensional encoding) and filtering through all the boxes using the functions you've just implemented.
Exercise: Implement yolo_eval() which takes the output of the YOLO encoding and filters the boxes using score threshold and NMS. There's just one last implementation detail you have to know. There are a few ways of representing boxes, such as via their corners or via their midpoint and height/width. YOLO converts between a few such formats at different times, using the following functions (which we have provided):
python
boxes = yolo_boxes_to_corners(box_xy, box_wh)
which converts the yolo box coordinates (x,y,w,h) to box corners' coordinates (x1, y1, x2, y2) to fit the input of yolo_filter_boxes
python
boxes = scale_boxes(boxes, image_shape)
YOLO's network was trained to run on 608x608 images. If you are testing this data on a different size image--for example, the car detection dataset had 720x1280 images--this step rescales the boxes so that they can be plotted on top of the original 720x1280 image.
Don't worry about these two functions; we'll show you where they need to be called.
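(If you are curious, the corner conversion is conceptually just the following -- a simplified sketch, not the exact provided helper, which may also reorder the coordinates to match the convention used by the plotting and NMS utilities:)
python
box_mins = box_xy - box_wh / 2.     # upper-left corner: midpoint minus half the size
box_maxes = box_xy + box_wh / 2.    # lower-right corner: midpoint plus half the size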
End of explanation
sess = K.get_session()
Explanation: Expected Output:
<table>
<tr>
<td>
**scores[2]**
</td>
<td>
138.791
</td>
</tr>
<tr>
<td>
**boxes[2]**
</td>
<td>
[ 1292.32971191 -278.52166748 3876.98925781 -835.56494141]
</td>
</tr>
<tr>
<td>
**classes[2]**
</td>
<td>
54
</td>
</tr>
<tr>
<td>
**scores.shape**
</td>
<td>
(10,)
</td>
</tr>
<tr>
<td>
**boxes.shape**
</td>
<td>
(10, 4)
</td>
</tr>
<tr>
<td>
**classes.shape**
</td>
<td>
(10,)
</td>
</tr>
</table>
<font color='blue'>
Summary for YOLO:
- Input image (608, 608, 3)
- The input image goes through a CNN, resulting in a (19,19,5,85) dimensional output.
- After flattening the last two dimensions, the output is a volume of shape (19, 19, 425):
- Each cell in a 19x19 grid over the input image gives 425 numbers.
- 425 = 5 x 85 because each cell contains predictions for 5 boxes, corresponding to 5 anchor boxes, as seen in lecture.
- 85 = 5 + 80, where 5 is because $(p_c, b_x, b_y, b_h, b_w)$ has 5 numbers, and 80 is the number of classes we'd like to detect
- You then select only few boxes based on:
- Score-thresholding: throw away boxes that have detected a class with a score less than the threshold
- Non-max suppression: Compute the Intersection over Union and avoid selecting overlapping boxes
- This gives you YOLO's final output.
3 - Test YOLO pretrained model on images
In this part, you are going to use a pretrained model and test it on the car detection dataset. As usual, you start by creating a session to start your graph. Run the following cell.
End of explanation
class_names = read_classes("model_data/coco_classes.txt")
anchors = read_anchors("model_data/yolo_anchors.txt")
image_shape = (720., 1280.)
Explanation: 3.1 - Defining classes, anchors and image shape.
Recall that we are trying to detect 80 classes, and are using 5 anchor boxes. We have gathered the information about the 80 classes and 5 boxes in two files "coco_classes.txt" and "yolo_anchors.txt". Let's load these quantities into the model by running the next cell.
The car detection dataset has 720x1280 images, which we've pre-processed into 608x608 images.
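In case you are wondering what those helpers do: a minimal read_classes, for example, could just read one class name per line (an assumption about the file format, shown only for illustration):
python
def read_classes(path):
    # returns the list of class names, one per non-empty line of the file
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]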
End of explanation
yolo_model = load_model("model_data/yolo.h5")
Explanation: 3.2 - Loading a pretrained model
Training a YOLO model takes a very long time and requires a fairly large dataset of labelled bounding boxes for a large range of target classes. You are going to load an existing pretrained Keras YOLO model stored in "yolo.h5". (These weights come from the official YOLO website, and were converted using a function written by Allan Zelener. References are at the end of this notebook. Technically, these are the parameters from the "YOLOv2" model, but we will more simply refer to it as "YOLO" in this notebook.) Run the cell below to load the model from this file.
End of explanation
yolo_model.summary()
Explanation: This loads the weights of a trained YOLO model. Here's a summary of the layers your model contains.
End of explanation
yolo_outputs = yolo_head(yolo_model.output, anchors, len(class_names))
Explanation: Note: On some computers, you may see a warning message from Keras. Don't worry about it if you do--it is fine.
Reminder: this model converts a preprocessed batch of input images (shape: (m, 608, 608, 3)) into a tensor of shape (m, 19, 19, 5, 85) as explained in Figure (2).
3.3 - Convert output of the model to usable bounding box tensors
The output of yolo_model is a (m, 19, 19, 5, 85) tensor that needs to pass through non-trivial processing and conversion. The following cell does that for you.
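As a rough mental model, the decoding applied to the raw (m, 19, 19, 5, 85) features is (a simplified sketch of the standard YOLOv2 decoding, not the exact library code; feats stands for the raw yolo_model output):
python
box_xy = K.sigmoid(feats[..., 0:2])          # (x, y) offset of each box within its grid cell
box_wh = K.exp(feats[..., 2:4]) * anchors    # width/height scaled by the anchor boxes
box_confidence = K.sigmoid(feats[..., 4:5])  # probability that the box contains an object
box_class_probs = K.softmax(feats[..., 5:])  # distribution over the 80 classes
followed by a normalization by the 19x19 grid dimensions so that coordinates are relative to the whole image.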
End of explanation
scores, boxes, classes = yolo_eval(yolo_outputs, image_shape)
Explanation: You added yolo_outputs to your graph. This set of 4 tensors is ready to be used as input by your yolo_eval function.
3.4 - Filtering boxes
yolo_outputs gave you all the predicted boxes of yolo_model in the correct format. You're now ready to perform filtering and select only the best boxes. Lets now call yolo_eval, which you had previously implemented, to do this.
End of explanation
def predict(sess, image_file):
Runs the graph stored in "sess" to predict boxes for "image_file". Prints and plots the predictions.
Arguments:
sess -- your tensorflow/Keras session containing the YOLO graph
image_file -- name of an image stored in the "images" folder.
Returns:
out_scores -- tensor of shape (None, ), scores of the predicted boxes
out_boxes -- tensor of shape (None, 4), coordinates of the predicted boxes
out_classes -- tensor of shape (None, ), class index of the predicted boxes
Note: "None" actually represents the number of predicted boxes, it varies between 0 and max_boxes.
# Preprocess your image
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
# Run the session with the correct tensors and choose the correct placeholders in the feed_dict.
# You'll need to use feed_dict={yolo_model.input: ... , K.learning_phase(): 0})
### START CODE HERE ### (≈ 1 line)
out_scores, out_boxes, out_classes = sess.run([scores, boxes, classes], feed_dict = {yolo_model.input: image_data, K.learning_phase(): 0} )
### END CODE HERE ###
# Print predictions info
print('Found {} boxes for {}'.format(len(out_boxes), image_file))
# Generate colors for drawing bounding boxes.
colors = generate_colors(class_names)
# Draw bounding boxes on the image file
draw_boxes(image, out_scores, out_boxes, out_classes, class_names, colors)
# Save the predicted bounding box on the image
image.save(os.path.join("out", image_file), quality=90)
# Display the results in the notebook
output_image = scipy.misc.imread(os.path.join("out", image_file))
imshow(output_image)
return out_scores, out_boxes, out_classes
Explanation: 3.5 - Run the graph on an image
Let the fun begin. You have created a (sess) graph that can be summarized as follows:
<font color='purple'> yolo_model.input </font> is given to yolo_model. The model is used to compute the output <font color='purple'> yolo_model.output </font>
<font color='purple'> yolo_model.output </font> is processed by yolo_head. It gives you <font color='purple'> yolo_outputs </font>
<font color='purple'> yolo_outputs </font> goes through a filtering function, yolo_eval. It outputs your predictions: <font color='purple'> scores, boxes, classes </font>
Exercise: Implement predict() which runs the graph to test YOLO on an image.
You will need to run a TensorFlow session, to have it compute scores, boxes, classes.
The code below also uses the following function:
python
image, image_data = preprocess_image("images/" + image_file, model_image_size = (608, 608))
which outputs:
- image: a python (PIL) representation of your image used for drawing boxes. You won't need to use it.
- image_data: a numpy-array representing the image. This will be the input to the CNN.
Important note: when a model uses BatchNorm (as is the case in YOLO), you will need to pass an additional placeholder in the feed_dict {K.learning_phase(): 0}.
End of explanation
out_scores, out_boxes, out_classes = predict(sess, "test.jpg")
Explanation: Run the following cell on the "test.jpg" image to verify that your function is correct.
End of explanation |
11,844 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
I think this G1 chart is wrong
G1 São Paulo
Cantareira water level reaches 14.5%; all reservoirs rise
link to the news story
<!----->
Step3: Sabesp makes the data available for consultation at this address, but I have no idea how to fetch it with Python...
luckily some kind soul has already built an API that does the job!
Step4: OK. All good. It matches the charts shown by G1, just presented in a different way.
We just have one small problem here
Step7: the Cantareira system has a total capacity of almost 1 trillion liters, according to the G1 article.
So, between May 15th and 16th, POOF
Step8: AAAAAAAAH, NOW WE'RE TALKING! Fixed. Now let's compare the chart built from the data G1 used against the one built from the corrected data
Step9: G1 was off by 30%. Badly, embarrassingly wrong.
We are a long way from last year's level. And even if we were at 15% of Cantareira's capacity, it would still be a critical situation.
PS | Python Code:
from IPython.display import display, Image
## here is the image from the news story
infograficoG1 = Image('reservatorios1403.jpg')
display(infograficoG1)
Explanation: I think this G1 chart is wrong
G1 São Paulo
Cantareira water level reaches 14.5%; all reservoirs rise
link to the news story
<!----->
End of explanation
import urllib.request
req = urllib.request.urlopen("https://sabesp-api.herokuapp.com/").read().decode()
import json
data = json.loads(req)
import datetime as dt
print('data published by sabesp today, %s \n-----' % dt.date.today())
for x in data:
print (x['name'])
for i in range(len(x['data'])):
item = x['data'][i]
print ('item %d) %35s = %s' % (i, item['key'], item['value']))
#print ( [item['value'] for item in x['data'] ])
print('-----')
## with this I can use a list comprehension to grab just the data I'm interested in
[ (x['name'], x['data'][0]['value']) for x in data ]
import datetime as dt
# dates used in the G1 chart
today = dt.date(2015,3,14)
yr = dt.timedelta(days=365)
last_year = today - yr
today=today.isoformat()
last_year=last_year.isoformat()
def getData(date):
takes a date object or a string with the date in
YYYY-MM-DD format and returns a 'Series' (from the pandas package)
with the levels of the sabesp reservoirs
# def parsePercent(s):
# takes a string in the format '\d*,\d* %' and returns the equivalent float
# return float(s.replace(",",".").replace("%",""))
# can also be done with a lambda, heh
fixPercent = lambda s: float(s.replace(",",".").replace("%",""))
import datetime
if type(date) == datetime.date:
date = date.isoformat()
## request
import urllib.request
req = urllib.request.urlopen("https://sabesp-api.herokuapp.com/" + date).read().decode()
## turn the json into a dictionary
import json
data = json.loads(req)
## series
dados = [ fixPercent(x['data'][0]['value']) for x in data ]
sistemas = [ x['name'] for x in data ]
import pandas as pd
return pd.Series(dados, index=sistemas, name=date)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns ## just to give matplotlib seaborn's good-looking style ;)
sns.set_context("talk")
#pd.options.display.mpl_style = 'default'
df = pd.DataFrame([getData(today), getData(last_year)]) #, index=[today, last_year])
df.T.plot(kind='bar', rot=0, figsize=(8,4))
plt.show()
df
Explanation: Sabesp makes the data available for consultation at this address, but I have no idea how to fetch it with Python...
luckily some kind soul has already built an API that does the job!
End of explanation
datas = [last_year,
'2014-05-15', # just before the "volume morto" (dead volume)
'2014-05-16', # debut of the "first technical reserve", a.k.a. the dead volume
'2014-07-12',
'2014-10-23',
'2014-10-24', # "second technical reserve", or "DEAD VOLUME 2: ELECTRIC BOOGALOO"
'2015-01-01', # happy new year?
today]
import numpy as np
df = pd.DataFrame(pd.concat(map(getData, datas), axis=1))
df = df.T
df
def plotSideBySide(dfTupl, cm=['Spectral', 'coolwarm']):
fig, axes = plt.subplots(1,2, figsize=(17,5))
for i, ax in enumerate(axes):
dfTupl[i].ix[:].T.plot(
kind='bar', ax=ax,
rot=0, colormap=cm[i])
#ax[i].
for j in range(len(dfTupl[i].columns)):
itens = dfTupl[i].ix[:,j]
y = 0
if itens.max() > 0:
y = itens.max()
ax.text(j, y +0.5,
'$\Delta$\n{:0.1f}%'.format(itens[1] - itens[0]),
ha='center', va='bottom',
fontsize=14, color='k')
plt.show()
#%psource plotaReservatecnica
dados = df.ix[['2014-05-15','2014-05-16']], df.ix[['2014-10-23','2014-10-24']]
plotSideBySide(dados)
Explanation: OK. All good. It matches the charts shown by G1, just presented in a different way.
We just have one small problem here: these percentages are relative to the reservoir capacity on the queried date. It turns out that, at least for Cantareira and Alto Tietê, that volume VARIES (the dead volume says hi).
Look:
End of explanation
def fixCantareira(p, data):
corrects the percentage published by sabesp
def str2date(data, format='%Y-%m-%d'):
converts a string containing a date and returns a date object
import datetime as dt
return dt.datetime.strptime(data,format)
vm1day = str2date('16/05/2014', format='%d/%m/%Y')
vm2day = str2date('24/10/2014', format='%d/%m/%Y')
vm1 = 182.5
vm2 = 105.4
def percReal(perc,volumeMorto=0):
a = perc/100
volMax = 982.07
volAtual = volMax*a -volumeMorto
b = 100*volAtual/volMax
b = np.round(b,1)
return b
if str2date(data) < vm1day:
print(data, p, end=' ')
perc = percReal(p)
print('===>', perc)
return perc
elif str2date(data) < vm2day:
print('first technical reserve in use', data, p, end=' ')
perc = percReal(p, volumeMorto=vm1)
print('===>', perc)
return perc
else:
print('second technical reserve in use', data, p, end=' ')
perc = percReal(p, volumeMorto=vm1+vm2)
print('===>', perc)
return perc
dFixed = df.copy()
dFixed.Cantareira = ([fixCantareira(p, dia) for p, dia in zip(df.Cantareira, df.index)])
dados = dFixed.ix[['2014-05-15','2014-05-16']], dFixed.ix[['2014-10-23','2014-10-24']]
plotSideBySide(dados)
Explanation: the Cantareira system has a total capacity of almost 1 trillion liters, according to the G1 article.
So, between May 15th and 16th, POOF: 180 billion liters appeared as if by magic!
Then, in October, POOF. Another 100 billion show up.
WHAT SORCERY IS THIS?!?
Sabesp's own website explains it:
The first technical reserve went into operation on 16/05/2014 and added another 182.5 billion liters to the system - an 18.5% increase;
The second technical reserve went into operation on 24/10/2014 and added another 105.4 billion liters to the system - a 10.7% increase
In other words, the G1 chart really is wrong. Somebody go tell those guys.
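Putting numbers on it: out of a total capacity of 982.07 billion liters (the volMax value used in fixCantareira), the two technical reserves add up to 182.5 + 105.4 = 287.9 billion liters, or roughly 29.3% of the total. So a reported 14.5% really corresponds to about 14.5% - 29.3% ≈ -14.8% of the original usable capacity, which is exactly the correction fixCantareira applies.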
End of explanation
dias = ['2014-03-14','2015-03-14']
dados = df.ix[dias,:], dFixed.ix[dias,:]
plotSideBySide(dados,cm=[None,None])
Explanation: AAAAAAAAH, NOW WE'RE TALKING! Fixed. Now let's compare the chart built from the data G1 used against the one built from the corrected data
End of explanation
dFixed.ix[dias]
Explanation: G1 was off by 30%. Badly, embarrassingly wrong.
We are a long way from last year's level. And even if we were at 15% of Cantareira's capacity, it would still be a critical situation.
PS: The percentage for Alto Tietê, which is also drawing on a "technical reserve", still needs to be corrected.
End of explanation |
11,845 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
Step1: 2. Get Cloud Project ID
To run this recipe requires a Google Cloud Project, this only needs to be done once, then click play.
Step2: 3. Get Client Credentials
To read and write to various endpoints requires downloading client credentials, this only needs to be done once, then click play.
Step3: 4. Enter DV360 Segmentology Parameters
DV360 funnel analysis using Census data.
1. Wait for <b>BigQuery->->->Census_Join</b> to be created.
1. Join the <a hre='https
Step4: 5. Execute DV360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play. | Python Code:
!pip install git+https://github.com/google/starthinker
Explanation: 1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
Explanation: 2. Get Cloud Project ID
To run this recipe requires a Google Cloud Project, this only needs to be done once, then click play.
End of explanation
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
Explanation: 3. Get Client Credentials
To read and write to various endpoints requires downloading client credentials, this only needs to be done once, then click play.
End of explanation
FIELDS = {
'auth_read': 'user', # Credentials used for reading data.
'recipe_timezone': 'America/Los_Angeles', # Timezone for report dates.
'recipe_project': '', # Project ID hosting dataset.
'auth_write': 'service', # Authorization used for writing data.
'recipe_name': '', # Name of report, not needed if ID used.
'recipe_slug': '', # Name of Google BigQuery dataset to create.
'partners': [], # DV360 partner id.
'advertisers': [], # Comma delimited list of DV360 advertiser ids.
}
print("Parameters Set To: %s" % FIELDS)
Explanation: 4. Enter DV360 Segmentology Parameters
DV360 funnel analysis using Census data.
1. Wait for <b>BigQuery->->->Census_Join</b> to be created.
1. Join the <a href='https://groups.google.com/d/forum/starthinker-assets' target='_blank'>StarThinker Assets Group</a> to access the following assets
1. Copy <a href='https://datastudio.google.com/c/u/0/reporting/3673497b-f36f-4448-8fb9-3e05ea51842f/' target='_blank'>DV360 Segmentology Sample</a>. Leave the Data Source as is, you will change it in the next step.
1. Click Edit Connection, and change to <b>BigQuery->->->Census_Join</b>.
1. Or give these instructions to the client.
Modify the values below for your use case; this can be done multiple times, then click play.
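For example, a filled-in set of parameters might look like this (all values here are illustrative placeholders -- substitute your own project, dataset and DV360 IDs):
python
FIELDS = {
  'auth_read': 'user',
  'recipe_timezone': 'America/Los_Angeles',
  'recipe_project': 'my-gcp-project',         # hypothetical Cloud project ID
  'auth_write': 'service',
  'recipe_name': 'Segmentology Demo',         # hypothetical report name
  'recipe_slug': 'DV360_Segmentology_Demo',   # hypothetical BigQuery dataset name
  'partners': [1234567],                      # hypothetical DV360 partner id
  'advertisers': [7654321],                   # hypothetical DV360 advertiser ids
}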
End of explanation
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'dataset': {
'description': 'Create a dataset for bigquery tables.',
'hour': [
4
],
'auth': 'user',
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
}
},
{
'bigquery': {
'auth': 'user',
'function': 'Pearson Significance Test',
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}}
}
}
},
{
'dbm': {
'auth': 'user',
'report': {
'filters': {
'FILTER_PARTNER': {
'values': {'field': {'name': 'partners','kind': 'integer_list','order': 5,'default': [],'description': 'DV360 partner id.'}}
},
'FILTER_ADVERTISER': {
'values': {'field': {'name': 'advertisers','kind': 'integer_list','order': 6,'default': [],'description': 'Comma delimited list of DV360 advertiser ids.'}}
}
},
'body': {
'timezoneCode': {'field': {'name': 'recipe_timezone','kind': 'timezone','description': 'Timezone for report dates.','default': 'America/Los_Angeles'}},
'metadata': {
'title': {'field': {'name': 'recipe_name','kind': 'string','order': 3,'prefix': 'Segmentology ','default': '','description': 'Name of report, not needed if ID used.'}},
'dataRange': 'LAST_30_DAYS',
'format': 'CSV'
},
'params': {
'type': 'TYPE_CROSS_PARTNER',
'groupBys': [
'FILTER_PARTNER',
'FILTER_PARTNER_NAME',
'FILTER_ADVERTISER',
'FILTER_ADVERTISER_NAME',
'FILTER_MEDIA_PLAN',
'FILTER_MEDIA_PLAN_NAME',
'FILTER_ZIP_POSTAL_CODE'
],
'metrics': [
'METRIC_BILLABLE_IMPRESSIONS',
'METRIC_CLICKS',
'METRIC_TOTAL_CONVERSIONS'
]
},
'schedule': {
'frequency': 'WEEKLY'
}
}
}
}
},
{
'dbm': {
'auth': 'user',
'report': {
'name': {'field': {'name': 'recipe_name','kind': 'string','order': 3,'prefix': 'Segmentology ','default': '','description': 'Name of report, not needed if ID used.'}}
},
'out': {
'bigquery': {
'auth': 'user',
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'table': 'DV360_KPI',
'header': True,
'schema': [
{
'name': 'Partner_Id',
'type': 'INTEGER',
'mode': 'REQUIRED'
},
{
'name': 'Partner',
'type': 'STRING',
'mode': 'REQUIRED'
},
{
'name': 'Advertiser_Id',
'type': 'INTEGER',
'mode': 'REQUIRED'
},
{
'name': 'Advertiser',
'type': 'STRING',
'mode': 'REQUIRED'
},
{
'name': 'Campaign_Id',
'type': 'INTEGER',
'mode': 'REQUIRED'
},
{
'name': 'Campaign',
'type': 'STRING',
'mode': 'REQUIRED'
},
{
'name': 'Zip',
'type': 'STRING',
'mode': 'NULLABLE'
},
{
'name': 'Impressions',
'type': 'FLOAT',
'mode': 'NULLABLE'
},
{
'name': 'Clicks',
'type': 'FLOAT',
'mode': 'NULLABLE'
},
{
'name': 'Conversions',
'type': 'FLOAT',
'mode': 'NULLABLE'
}
]
}
}
}
},
{
'bigquery': {
'auth': 'user',
'from': {
'query': 'SELECT Partner_Id, Partner, Advertiser_Id, Advertiser, Campaign_Id, Campaign, Zip, SAFE_DIVIDE(Impressions, SUM(Impressions) OVER(PARTITION BY Advertiser_Id)) AS Impression_Percent, SAFE_DIVIDE(Clicks, Impressions) AS Click_Percent, SAFE_DIVIDE(Conversions, Impressions) AS Conversion_Percent, Impressions AS Impressions FROM `{project}.{dataset}.DV360_KPI`; ',
'parameters': {
'project': {'field': {'name': 'recipe_project','kind': 'string','description': 'Project ID hosting dataset.'}},
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be created in BigQuery.'}}
},
'legacy': False
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','description': 'Place where tables will be written in BigQuery.'}},
'view': 'DV360_KPI_Normalized'
}
}
},
{
'census': {
'auth': 'user',
'normalize': {
'census_geography': 'zip_codes',
'census_year': '2018',
'census_span': '5yr'
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'type': 'view'
}
}
},
{
'census': {
'auth': 'user',
'correlate': {
'join': 'Zip',
'pass': [
'Partner_Id',
'Partner',
'Advertiser_Id',
'Advertiser',
'Campaign_Id',
'Campaign'
],
'sum': [
'Impressions'
],
'correlate': [
'Impression_Percent',
'Click_Percent',
'Conversion_Percent'
],
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'table': 'DV360_KPI_Normalized',
'significance': 80
},
'to': {
'dataset': {'field': {'name': 'recipe_slug','kind': 'string','order': 4,'default': '','description': 'Name of Google BigQuery dataset to create.'}},
'type': 'view'
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
Explanation: 5. Execute DV360 Segmentology
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation |
11,846 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Copyright 2020 Sen Pei (Columbia University).
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Substantial Undocumented Infection Facilitates the Rapid Dissemination of Novel Coronavirus (SARS-CoV2)
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Data Import
Let's import the data from github and inspect some of it.
Step3: Below we can see the raw incidence count per day. We are most interested in the first 14 days (January 10th to January 23rd), as the travel restrictions were put in place on the 23rd. The paper deals with this by modeling Jan 10-23 and Jan 23+ separately, with different parameters; we will just restrict our reproduction to the earlier period.
Step4: Let's sanity-check the Wuhan incidence counts.
Step5: So far, so good. Now the initial population counts.
Step6: Let's also check and record which entry is Wuhan.
Step7: And here we see the mobility matrix between different cities. This is a proxy for the number of people moving between different cities on the first 14 days. It's dervied from GPS records provided by Tencent for the 2018 Lunar New Year season. Li et al model mobility during the 2020 season as some unknown (subject to inference) constant factor $\theta$ times this.
Step8: Finally, let's preprocess all this into numpy arrays that we can consume.
Step9: Convert the mobility data into an [L, L, T]-shaped Tensor, where L is the number of locations, and T is the number of timesteps.
Step10: Finally take the observed infections and make an [L, T] table.
Step11: And double-check that we got the shapes the way we wanted. As a reminder, we're working with 375 cities and 14 days.
Step12: Defining State and Parameters
Let's start defining our model. The model we are reproducing is a variant of an SEIR model. In this case we have the following time-varying states
Step13: We also code Li et al's bounds for the values of the parameters.
Step15: SEIR Dynamics
Here we define the relationship between the parameters and state.
The time-dynamics equations from Li et al (supplemental material, eqns 1-5) are as follows
Step17: Here's the integrator. This is completely standard, except for passing the PRNG seed through to the sample_state_deltas function to get independent Poisson noise at each of the partial steps that the Runge-Kutta method calls for.
Step21: Initialization
Here we implement the initialization scheme from the paper.
Following Li et al, our inference scheme will be an ensemble adjustment Kalman filter inner loop, surrounded by an iterated filtering outer loop (IF-EAKF). Computationally, that means we need three kinds of initialization
Step22: Delays
One of the important features of this model is taking explicit account of the fact that infections are reported later than they begin. That is, we expect that a person who moves from the $E$ compartment to the $I^r$ compartment on day $t$ may not show up in the observable reported case counts until some later day.
We assume the delay is gamma-distributed. Following Li et al, we use 1.85 for the shape, and parameterize the rate to produce an average reporting delay of 9 days.
Step23: Our observations are discrete, so we will round the raw (continuous) delays up to the nearest day. We also have a finite data horizon, so the delay distribution for a single person is a categorical over the remaining days. We can therefore compute the per-city predicted observations more efficiently than sampling $O(I^r)$ gammas, by pre-computing multinomial delay probabilities instead.
Step24: Here's the code for actually applying these delays to the new daily documented infectious counts
Step25: Inference
First we'll define some data structures for inference.
In particular, we'll be wanting to do Iterated Filtering, which packages the
state and parameters together while doing inference. So we'll define
a ParameterStatePair object.
We also want to package any side information to the model.
Step27: Here is the complete observation model, packaged for the Ensemble Kalman Filter.
The interesting feature is the reporting delays (computed as previously). The upstream model emits the daily_new_documented_infectious for each city at each time step.
Step29: Here we define the transition dynamics. We've done the semantic work already; here we just package it for the EAKF framework, and, following Li et al, clip city populations to prevent them from getting too small.
Step31: Finally we define the inference method. This is two loops, the outer loop
being Iterated Filtering while the inner loop is Ensemble Adjustment Kalman Filtering.
Step34: Final detail
Step35: Running it all together
Step36: The results of our inferences. We plot the maximum-likelihood values for all the global paramters to show their variation across our num_batches independent runs of inference. This corresponds to Table S1 in the supplemental materials. | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
Copyright 2020 Sen Pei (Columbia University).
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip3 install -q tf-nightly tfp-nightly
import collections
import io
import requests
import time
import zipfile
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow.compat.v2 as tf
import tensorflow_probability as tfp
from tensorflow_probability.python.internal import samplers
tfd = tfp.distributions
tfes = tfp.experimental.sequential
Explanation: Substantial Undocumented Infection Facilitates the Rapid Dissemination of Novel Coronavirus (SARS-CoV2)
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/probability/examples/Undocumented_Infection_and_the_Dissemination_of_SARS-CoV2"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Undocumented_Infection_and_the_Dissemination_of_SARS-CoV2.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/examples/jupyter_notebooks/Undocumented_Infection_and_the_Dissemination_of_SARS-CoV2.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/probability/tensorflow_probability/examples/jupyter_notebooks/Undocumented_Infection_and_the_Dissemination_of_SARS-CoV2.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This is a TensorFlow Probability port of the eponymous 16 March 2020 paper by Li et al. We faithfully reproduce the original authors' methods and results on the TensorFlow Probability platform, showcasing some of TFP's capabilities in the setting of modern epidemiology modeling. Porting to TensorFlow gives us a ~10x speedup relative to the original Matlab code, and, since TensorFlow Probability pervasively supports vectorized batch computation, also favorably scales to hundreds of independent replications.
Original paper
Ruiyun Li, Sen Pei, Bin Chen, Yimeng Song, Tao Zhang, Wan Yang, and Jeffrey Shaman. Substantial undocumented infection facilitates the rapid dissemination of novel coronavirus (SARS-CoV2). (2020), doi:
https://doi.org/10.1126/science.abb3221 .
Abstract: "Estimation of the prevalence and contagiousness of undocumented novel coronavirus (SARS-CoV2) infections is critical for understanding the overall prevalence and pandemic potential of this disease. Here we use observations of reported infection within China, in conjunction with mobility data, a networked dynamic metapopulation model and Bayesian inference, to infer critical epidemiological characteristics associated with SARS-CoV2, including the fraction of undocumented infections and their contagiousness. We estimate 86% of all infections were undocumented (95% CI: [82%–90%]) prior to 23 January 2020 travel restrictions. Per person, the transmission rate of undocumented infections was 55% of documented infections ([46%–62%]), yet, due to their greater numbers, undocumented infections were the infection source for 79% of documented cases. These findings explain the rapid geographic spread of SARS-CoV2 and indicate containment of this virus will be particularly challenging."
Github link to the code and data.
Overview
The model is a compartmental disease model, with compartments for "susceptible", "exposed" (infected but not yet infectious), "never-documented infectious", and "eventually-documented infectious". There are two noteworthy features: separate compartments for each of 375 Chinese cities, with an assumption about how people travel from one city to another; and delays in reporting infection, so that a case that becomes "eventually-documented infectious" on day $t$ doesn't show up in the observed case counts until a stochastic later day.
The model assumes that the never-documented cases end up undocumented by being milder, and thus infect others at a lower rate. The main parameter of interest in the original paper is the proportion of cases that go undocumented, to estimate both the extent of existing infection, and the impact of undocumented transmission on the spread of the disease.
This colab is structured as a code walkthrough in bottom-up style. In order, we will
- Ingest and briefly examine the data,
- Define the state space and dynamics of the model,
- Build up a suite of functions for doing inference in the model following Li et al, and
- Invoke them and examine the results. Spoiler: They come out the same as the paper.
Installation and Python Imports
End of explanation
r = requests.get('https://raw.githubusercontent.com/SenPei-CU/COVID-19/master/Data.zip')
z = zipfile.ZipFile(io.BytesIO(r.content))
z.extractall('/tmp/')
raw_incidence = pd.read_csv('/tmp/data/Incidence.csv')
raw_mobility = pd.read_csv('/tmp/data/Mobility.csv')
raw_population = pd.read_csv('/tmp/data/pop.csv')
Explanation: Data Import
Let's import the data from github and inspect some of it.
End of explanation
raw_incidence.drop('Date', axis=1) # The 'Date' column is all 1/18/21
# Luckily the days are in order, starting on January 10th, 2020.
Explanation: Below we can see the raw incidence count per day. We are most interested in the first 14 days (January 10th to January 23rd), as the travel restrictions were put in place on the 23rd. The paper deals with this by modeling Jan 10-23 and Jan 23+ separately, with different parameters; we will just restrict our reproduction to the earlier period.
End of explanation
plt.plot(raw_incidence.Wuhan, '.-')
plt.title('Wuhan incidence counts over 1/10/20 - 02/08/20')
plt.show()
Explanation: Let's sanity-check the Wuhan incidence counts.
End of explanation
raw_population
Explanation: So far, so good. Now the initial population counts.
End of explanation
raw_population['City'][169]
WUHAN_IDX = 169
Explanation: Let's also check and record which entry is Wuhan.
End of explanation
raw_mobility
Explanation: And here we see the mobility matrix between different cities. This is a proxy for the number of people moving between different cities on the first 14 days. It's derived from GPS records provided by Tencent for the 2018 Lunar New Year season. Li et al model mobility during the 2020 season as some unknown (subject to inference) constant factor $\theta$ times this.
End of explanation
# The given populations are only "initial" because of intercity mobility during
# the holiday season.
initial_population = raw_population['Population'].to_numpy().astype(np.float32)
Explanation: Finally, let's preprocess all this into numpy arrays that we can consume.
End of explanation
daily_mobility_matrices = []
for i in range(1, 15):
day_mobility = raw_mobility[raw_mobility['Day'] == i]
# Make a matrix of daily mobilities.
z = pd.crosstab(
day_mobility.Origin,
day_mobility.Destination,
values=day_mobility['Mobility Index'], aggfunc='sum', dropna=False)
# Include every city, even if there are no rows for some in the raw data on
# some day. This uses the sort order of `raw_population`.
z = z.reindex(index=raw_population['City'], columns=raw_population['City'],
fill_value=0)
# Finally, fill any missing entries with 0. This means no mobility.
z = z.fillna(0)
daily_mobility_matrices.append(z.to_numpy())
mobility_matrix_over_time = np.stack(daily_mobility_matrices, axis=-1).astype(
np.float32)
Explanation: Convert the mobility data into an [L, L, T]-shaped Tensor, where L is the number of locations, and T is the number of timesteps.
End of explanation
# Remove the date parameter and take the first 14 days.
observed_daily_infectious_count = raw_incidence.to_numpy()[:14, 1:]
observed_daily_infectious_count = np.transpose(
observed_daily_infectious_count).astype(np.float32)
Explanation: Finally take the observed infections and make an [L, T] table.
End of explanation
print('Mobility Matrix over time should have shape (375, 375, 14): {}'.format(
mobility_matrix_over_time.shape))
print('Observed Infectious should have shape (375, 14): {}'.format(
observed_daily_infectious_count.shape))
print('Initial population should have shape (375): {}'.format(
initial_population.shape))
Explanation: And double-check that we got the shapes the way we wanted. As a reminder, we're working with 375 cities and 14 days.
End of explanation
SEIRComponents = collections.namedtuple(
typename='SEIRComponents',
field_names=[
'susceptible', # S
'exposed', # E
'documented_infectious', # I^r
'undocumented_infectious', # I^u
# This is the count of new cases in the "documented infectious" compartment.
# We need this because we will introduce a reporting delay, between a person
# entering I^r and showing up in the observable case count data.
# This can't be computed from the cumulative `documented_infectious` count,
# because some portion of that population will move to the 'recovered'
# state, which we aren't tracking explicitly.
'daily_new_documented_infectious'])
ModelParams = collections.namedtuple(
typename='ModelParams',
field_names=[
'documented_infectious_tx_rate', # Beta
'undocumented_infectious_tx_relative_rate', # Mu
'intercity_underreporting_factor', # Theta
'average_latency_period', # Z
'fraction_of_documented_infections', # Alpha
'average_infection_duration' # D
]
)
Explanation: Defining State and Parameters
Let's start defining our model. The model we are reproducing is a variant of an SEIR model. In this case we have the following time-varying states:
* $S$: Number of people susceptible to the disease in each city.
* $E$: Number of people in each city exposed to the disease but not infectious yet. Biologically, this corresponds to contracting the disease, in that all exposed people eventually become infectious.
* $I^u$: Number of people in each city who are infectious but undocumented. In the model, this actually means "will never be documented".
* $I^r$: Number of people in each city who are infectious and documented as such. Li et al model reporting delays, so $I^r$ actually corresponds to something like "case is severe enough to be documented at some point in the future".
As we will see below, we will be inferring these states by running an Ensemble-adjusted Kalman Filter (EAKF) forward in time. The state vector of the EAKF is one city-indexed vector for each of these quantities.
The model has the following inferrable global, time-invariant parameters:
$\beta$: The transmission rate due to documented-infectious individuals.
$\mu$: The relative transmission rate due to undocumented-infectious
individuals. This will act through the product $\mu \beta$.
$\theta$: The intercity mobility factor. This is a factor greater than
1 correcting for underreporting of mobility data (and for population growth
from 2018 to 2020).
$Z$: The average incubation period (i.e., time in the "exposed" state).
$\alpha$: This is the fraction of infections severe enough to be (eventually) documented.
$D$: The average duration of infections (i.e., time in either "infectious" state).
We will be inferring point estimates for these parameters with an Iterative-Filtering loop around the EAKF for the states.
The model also depends on un-inferred constants:
* $M$: The intercity mobility matrix. This is time-varying and presumed given. Recall that it's scaled by the inferred parameter $\theta$ to give the actual population movements between cities.
* $N$: The total number of people in each city. The initial populations are taken as given, and the time-variation of population is computed from the mobility numbers $\theta M$.
First, we give ourselves some data structures for holding our states and parameters.
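For instance, a single draw of the global parameters could be written as follows (purely illustrative numbers chosen inside the parameter bounds used in this notebook, not inferred values):
python
example_params = ModelParams(
    documented_infectious_tx_rate=1.1,             # beta
    undocumented_infectious_tx_relative_rate=0.5,  # mu
    intercity_underreporting_factor=1.4,           # theta
    average_latency_period=3.7,                    # Z, in days
    fraction_of_documented_infections=0.2,         # alpha
    average_infection_duration=3.5)                # D, in days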
End of explanation
PARAMETER_LOWER_BOUNDS = ModelParams(
documented_infectious_tx_rate=0.8,
undocumented_infectious_tx_relative_rate=0.2,
intercity_underreporting_factor=1.,
average_latency_period=2.,
fraction_of_documented_infections=0.02,
average_infection_duration=2.
)
PARAMETER_UPPER_BOUNDS = ModelParams(
documented_infectious_tx_rate=1.5,
undocumented_infectious_tx_relative_rate=1.,
intercity_underreporting_factor=1.75,
average_latency_period=5.,
fraction_of_documented_infections=1.,
average_infection_duration=5.
)
Explanation: We also code Li et al's bounds for the values of the parameters.
End of explanation
def sample_state_deltas(
state, population, mobility_matrix, params, seed, is_deterministic=False):
Computes one-step change in state, including Poisson sampling.
Note that this is coded to support vectorized evaluation on arbitrary-shape
batches of states. This is useful, for example, for running multiple
independent replicas of this model to compute credible intervals for the
parameters. We refer to the arbitrary batch shape with the conventional
`B` in the parameter documentation below. This function also, of course,
supports broadcasting over the batch shape.
Args:
state: A `SEIRComponents` tuple with fields Tensors of shape
B + [num_locations] giving the current disease state.
population: A Tensor of shape B + [num_locations] giving the current city
populations.
mobility_matrix: A Tensor of shape B + [num_locations, num_locations] giving
the current baseline inter-city mobility.
params: A `ModelParams` tuple with fields Tensors of shape B giving the
global parameters for the current EAKF run.
seed: Initial entropy for pseudo-random number generation. The Poisson
sampling is repeatable by supplying the same seed.
is_deterministic: A `bool` flag to turn off Poisson sampling if desired.
Returns:
delta: A `SEIRComponents` tuple with fields Tensors of shape
B + [num_locations] giving the one-day changes in the state, according
to equations 1-4 above (including Poisson noise per Li et al).
undocumented_infectious_fraction = state.undocumented_infectious / population
documented_infectious_fraction = state.documented_infectious / population
# Anyone not documented as infectious is considered mobile
mobile_population = (population - state.documented_infectious)
def compute_outflow(compartment_population):
raw_mobility = tf.linalg.matvec(
mobility_matrix, compartment_population / mobile_population)
return params.intercity_underreporting_factor * raw_mobility
def compute_inflow(compartment_population):
raw_mobility = tf.linalg.matmul(
mobility_matrix,
(compartment_population / mobile_population)[..., tf.newaxis],
transpose_a=True)
return params.intercity_underreporting_factor * tf.squeeze(
raw_mobility, axis=-1)
# Helper for sampling the Poisson-variate terms.
seeds = samplers.split_seed(seed, n=11)
if is_deterministic:
def sample_poisson(rate):
return rate
else:
def sample_poisson(rate):
return tfd.Poisson(rate=rate).sample(seed=seeds.pop())
# Below are the various terms called U1-U12 in the paper. We combined the
# first two, which should be fine; both are poisson so their sum is too, and
# there's no risk (as there could be in other terms) of going negative.
susceptible_becoming_exposed = sample_poisson(
state.susceptible *
(params.documented_infectious_tx_rate *
documented_infectious_fraction +
(params.undocumented_infectious_tx_relative_rate *
params.documented_infectious_tx_rate) *
undocumented_infectious_fraction)) # U1 + U2
susceptible_population_inflow = sample_poisson(
compute_inflow(state.susceptible)) # U3
susceptible_population_outflow = sample_poisson(
compute_outflow(state.susceptible)) # U4
exposed_becoming_documented_infectious = sample_poisson(
params.fraction_of_documented_infections *
state.exposed / params.average_latency_period) # U5
exposed_becoming_undocumented_infectious = sample_poisson(
(1 - params.fraction_of_documented_infections) *
state.exposed / params.average_latency_period) # U6
exposed_population_inflow = sample_poisson(
compute_inflow(state.exposed)) # U7
exposed_population_outflow = sample_poisson(
compute_outflow(state.exposed)) # U8
documented_infectious_becoming_recovered = sample_poisson(
state.documented_infectious /
params.average_infection_duration) # U9
undocumented_infectious_becoming_recovered = sample_poisson(
state.undocumented_infectious /
params.average_infection_duration) # U10
undocumented_infectious_population_inflow = sample_poisson(
compute_inflow(state.undocumented_infectious)) # U11
undocumented_infectious_population_outflow = sample_poisson(
compute_outflow(state.undocumented_infectious)) # U12
# The final state_deltas
return SEIRComponents(
# Equation [1]
susceptible=(-susceptible_becoming_exposed +
susceptible_population_inflow +
-susceptible_population_outflow),
# Equation [2]
exposed=(susceptible_becoming_exposed +
-exposed_becoming_documented_infectious +
-exposed_becoming_undocumented_infectious +
exposed_population_inflow +
-exposed_population_outflow),
# Equation [3]
documented_infectious=(
exposed_becoming_documented_infectious +
-documented_infectious_becoming_recovered),
# Equation [4]
undocumented_infectious=(
exposed_becoming_undocumented_infectious +
-undocumented_infectious_becoming_recovered +
undocumented_infectious_population_inflow +
-undocumented_infectious_population_outflow),
# New to-be-documented infectious cases, subject to the delayed
# observation model.
daily_new_documented_infectious=exposed_becoming_documented_infectious)
Explanation: SEIR Dynamics
Here we define the relationship between the parameters and state.
The time-dynamics equations from Li et al (supplemental material, eqns 1-5) are as follows:
$\frac{dS_i}{dt} = -\beta \frac{S_i I_i^r}{N_i} - \mu \beta \frac{S_i I_i^u}{N_i} + \theta \sum_j \frac{M_{ij} S_j}{N_j - I_j^r} - \theta \sum_j \frac{M_{ji} S_i}{N_i - I_i^r}$
$\frac{dE_i}{dt} = \beta \frac{S_i I_i^r}{N_i} + \mu \beta \frac{S_i I_i^u}{N_i} - \frac{E_i}{Z} + \theta \sum_j \frac{M_{ij} E_j}{N_j - I_j^r} - \theta \sum_j \frac{M_{ji} E_i}{N_i - I_i^r}$
$\frac{dI^r_i}{dt} = \alpha \frac{E_i}{Z} - \frac{I_i^r}{D}$
$\frac{dI^u_i}{dt} = (1 - \alpha) \frac{E_i}{Z} - \frac{I_i^u}{D} + \theta \sum_j \frac{M_{ij} I_j^u}{N_j - I_j^r} - \theta \sum_j \frac{M_{ji} I_i^u}{N_i - I_i^r}$
$N_i = N_i + \theta \sum_j M_{ij} - \theta \sum_j M_{ji}$
As a reminder, the $i$ and $j$ subscripts index cities. These equations model the time-evolution of the disease through
- Contact with infectious individuals leading to more infection;
- Disease progression from "exposed" to one of the "infectious" states;
- Disease progression from "infectious" states to recovery, which we model by removal from the modeled population;
- Inter-city mobility, including exposed or undocumented-infectious persons; and
- Time-variation of daily city populations through inter-city mobility.
Following Li et al, we assume that people with cases severe enough to eventually be reported do not travel between cities.
Also following Li et al, we treat these dynamics as subject to term-wise Poisson noise, i.e., each term is actually the rate of a Poisson, a sample from which gives the true change. The Poisson noise is term-wise because subtracting (as opposed to adding) Poisson samples does not yield a Poisson-distributed result.
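For example, the combined new-exposure term for city $i$ over one day is drawn as $U_1 + U_2 \sim \mathrm{Poisson}\left(S_i \frac{\beta I_i^r + \mu \beta I_i^u}{N_i}\right)$, which is exactly the susceptible_becoming_exposed draw in sample_state_deltas.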
We will evolve these dynamics forward in time with the classic fourth-order Runge-Kutta integrator, but first let's define the function that computes them (including sampling the Poisson noise).
End of explanation
@tf.function(autograph=False)
def rk4_one_step(state, population, mobility_matrix, params, seed):
Implement one step of RK4, wrapped around a call to sample_state_deltas.
# One seed for each RK sub-step
seeds = samplers.split_seed(seed, n=4)
deltas = tf.nest.map_structure(tf.zeros_like, state)
combined_deltas = tf.nest.map_structure(tf.zeros_like, state)
for a, b in zip([1., 2, 2, 1.], [6., 3., 3., 6.]):
next_input = tf.nest.map_structure(
lambda x, delta, a=a: x + delta / a, state, deltas)
deltas = sample_state_deltas(
next_input,
population,
mobility_matrix,
params,
seed=seeds.pop(), is_deterministic=False)
combined_deltas = tf.nest.map_structure(
lambda x, delta, b=b: x + delta / b, combined_deltas, deltas)
return tf.nest.map_structure(
lambda s, delta: s + tf.round(delta),
state, combined_deltas)
Explanation: Here's the integrator. This is completely standard, except for passing the PRNG seed through to the sample_state_deltas function to get independent Poisson noise at each of the partial steps that the Runge-Kutta method calls for.
End of explanation
def initialize_state(num_particles, num_batches, seed):
Initialize the state for a batch of EAKF runs.
Args:
num_particles: `int` giving the number of particles for the EAKF.
num_batches: `int` giving the number of independent EAKF runs to
initialize in a vectorized batch.
seed: PRNG entropy.
Returns:
state: A `SEIRComponents` tuple with Tensors of shape [num_particles,
num_batches, num_cities] giving the initial conditions in each
city, in each filter particle, in each batch member.
num_cities = mobility_matrix_over_time.shape[-2]
state_shape = [num_particles, num_batches, num_cities]
susceptible = initial_population * np.ones(state_shape, dtype=np.float32)
documented_infectious = np.zeros(state_shape, dtype=np.float32)
daily_new_documented_infectious = np.zeros(state_shape, dtype=np.float32)
# Following Li et al, initialize Wuhan with up to 2000 people exposed
# and another up to 2000 undocumented infectious.
rng = np.random.RandomState(seed[0] % (2**31 - 1))
wuhan_exposed = rng.randint(
0, 2001, [num_particles, num_batches]).astype(np.float32)
wuhan_undocumented_infectious = rng.randint(
0, 2001, [num_particles, num_batches]).astype(np.float32)
# Also following Li et al, initialize cities adjacent to Wuhan with three
# days' worth of additional exposed and undocumented-infectious cases,
# as they may have traveled there before the beginning of the modeling
# period.
exposed = 3 * mobility_matrix_over_time[
WUHAN_IDX, :, 0] * wuhan_exposed[
..., np.newaxis] / initial_population[WUHAN_IDX]
undocumented_infectious = 3 * mobility_matrix_over_time[
WUHAN_IDX, :, 0] * wuhan_undocumented_infectious[
..., np.newaxis] / initial_population[WUHAN_IDX]
exposed[..., WUHAN_IDX] = wuhan_exposed
undocumented_infectious[..., WUHAN_IDX] = wuhan_undocumented_infectious
# Following Li et al, we do not remove the initial exposed and infectious
# persons from the susceptible population.
return SEIRComponents(
susceptible=tf.constant(susceptible),
exposed=tf.constant(exposed),
documented_infectious=tf.constant(documented_infectious),
undocumented_infectious=tf.constant(undocumented_infectious),
daily_new_documented_infectious=tf.constant(daily_new_documented_infectious))
def initialize_params(num_particles, num_batches, seed):
Initialize the global parameters for the entire inference run.
Args:
num_particles: `int` giving the number of particles for the EAKF.
num_batches: `int` giving the number of independent EAKF runs to
initialize in a vectorized batch.
seed: PRNG entropy.
Returns:
params: A `ModelParams` tuple with fields Tensors of shape
[num_particles, num_batches] giving the global parameters
to use for the first batch of EAKF runs.
# We have 6 parameters. We'll initialize with a quasi-random Halton sequence,
# covering the hyper-rectangle defined by our parameter limits.
halton_sequence = tfp.mcmc.sample_halton_sequence(
dim=6, num_results=num_particles * num_batches, seed=seed)
halton_sequence = tf.reshape(
halton_sequence, [num_particles, num_batches, 6])
halton_sequences = tf.nest.pack_sequence_as(
PARAMETER_LOWER_BOUNDS, tf.split(
halton_sequence, num_or_size_splits=6, axis=-1))
def interpolate(minval, maxval, h):
return (maxval - minval) * h + minval
return tf.nest.map_structure(
interpolate,
PARAMETER_LOWER_BOUNDS, PARAMETER_UPPER_BOUNDS, halton_sequences)
def update_params(num_particles, num_batches,
prev_params, parameter_variance, seed):
Update the global parameters between EAKF runs.
Args:
num_particles: `int` giving the number of particles for the EAKF.
num_batches: `int` giving the number of independent EAKF runs to
initialize in a vectorized batch.
prev_params: A `ModelParams` tuple of the parameters used for the previous
EAKF run.
parameter_variance: A `ModelParams` tuple specifying how much to drift
each parameter.
seed: PRNG entropy.
Returns:
params: A `ModelParams` tuple with fields Tensors of shape
[num_particles, num_batches] giving the global parameters
to use for the next batch of EAKF runs.
# Initialize near the previous set of parameters. This is the first step
# in Iterated Filtering.
seeds = tf.nest.pack_sequence_as(
prev_params, samplers.split_seed(seed, n=len(prev_params)))
return tf.nest.map_structure(
lambda x, v, seed: x + tf.math.sqrt(v) * tf.random.stateless_normal([
num_particles, num_batches, 1], seed=seed),
prev_params, parameter_variance, seeds)
Explanation: Initialization
Here we implement the initialization scheme from the paper.
Following Li et al, our inference scheme will be an ensemble adjustment Kalman filter inner loop, surrounded by an iterated filtering outer loop (IF-EAKF). Computationally, that means we need three kinds of initialization:
- Initial state for the inner EAKF
- Initial parameters for the outer IF, which are also the initial
parameters for the first EAKF
- Updating parameters from one IF iteration to the next, which serve
as the initial parameters for each EAKF other than the first.
End of explanation
def raw_reporting_delay_distribution(gamma_shape=1.85, reporting_delay=9.):
return tfp.distributions.Gamma(
concentration=gamma_shape, rate=gamma_shape / reporting_delay)
Explanation: Delays
One of the important features of this model is taking explicit account of the fact that infections are reported later than they begin. That is, we expect that a person who moves from the $E$ compartment to the $I^r$ compartment on day $t$ may not show up in the observable reported case counts until some later day.
We assume the delay is gamma-distributed. Following Li et al, we use 1.85 for the shape, and parameterize the rate to produce an average reporting delay of 9 days.
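As a quick check on that parameterization (a small aside): a Gamma distribution with concentration $k$ and rate $k / m$ has mean $k / (k / m) = m$, so the mean delay comes out to 9 days as intended.
gamma_shape, reporting_delay = 1.85, 9.
mean_delay = gamma_shape / (gamma_shape / reporting_delay)
assert abs(mean_delay - reporting_delay) < 1e-12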
End of explanation
def reporting_delay_probs(num_timesteps, gamma_shape=1.85, reporting_delay=9.):
gamma_dist = raw_reporting_delay_distribution(gamma_shape, reporting_delay)
multinomial_probs = [gamma_dist.cdf(1.)]
for k in range(2, num_timesteps + 1):
multinomial_probs.append(gamma_dist.cdf(k) - gamma_dist.cdf(k - 1))
# For samples that are larger than T.
multinomial_probs.append(gamma_dist.survival_function(num_timesteps))
multinomial_probs = tf.stack(multinomial_probs)
return multinomial_probs
Explanation: Our observations are discrete, so we will round the raw (continuous) delays up to the nearest day. We also have a finite data horizon, so the delay distribution for a single person is a categorical over the remaining days. We can therefore compute the per-city predicted observations more efficiently than sampling $O(I^r)$ gammas, by pre-computing multinomial delay probabilities instead.
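A small sanity sketch of that discretization (using SciPy here purely for illustration, which is an assumption about the environment; the model itself uses the TFP Gamma above): the day-one bucket, the per-day CDF differences, and the tail bucket together account for all of the probability mass.
from scipy import stats
delay_dist = stats.gamma(a=1.85, scale=9. / 1.85)  # scale = 1 / rate
num_days = 30
bucket_probs = [delay_dist.cdf(1.)]
bucket_probs += [delay_dist.cdf(k) - delay_dist.cdf(k - 1) for k in range(2, num_days + 1)]
bucket_probs.append(delay_dist.sf(num_days))  # reported after the data horizon
assert abs(sum(bucket_probs) - 1.) < 1e-9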
End of explanation
def delay_reporting(
daily_new_documented_infectious, num_timesteps, t, multinomial_probs, seed):
# This is the distribution of observed infectious counts from the current
# timestep.
raw_delays = tfd.Multinomial(
total_count=daily_new_documented_infectious,
probs=multinomial_probs).sample(seed=seed)
# The last bucket is used for samples that are out of range of T + 1. Thus
# they are not going to be observable in this model.
clipped_delays = raw_delays[..., :-1]
# We can also remove counts that are such that t + i >= T.
clipped_delays = clipped_delays[..., :num_timesteps - t]
# We finally shift everything by t. That means prepending with zeros.
return tf.concat([
tf.zeros(
tf.concat([
tf.shape(clipped_delays)[:-1], [t]], axis=0),
dtype=clipped_delays.dtype),
clipped_delays], axis=-1)
Explanation: Here's the code for actually applying these delays to the new daily documented infectious counts:
End of explanation
ParameterStatePair = collections.namedtuple(
'ParameterStatePair', ['state', 'params'])
# Info that is tracked and mutated but should not have inference performed over.
SideInfo = collections.namedtuple(
'SideInfo', [
# Observations at every time step.
'observations_over_time',
'initial_population',
'mobility_matrix_over_time',
'population',
# Used for variance of measured observations.
'actual_reported_cases',
# Pre-computed buckets for the multinomial distribution.
'multinomial_probs',
'seed',
])
# City populations are not allowed to fall below this fraction of their initial value.
MINIMUM_CITY_FRACTION = 0.6
# How much to inflate the covariance by.
INFLATION_FACTOR = 1.1
INFLATE_FN = tfes.inflate_by_scaled_identity_fn(INFLATION_FACTOR)
Explanation: Inference
First we'll define some data structures for inference.
In particular, we'll be wanting to do Iterated Filtering, which packages the
state and parameters together while doing inference. So we'll define
a ParameterStatePair object.
We also want to package any side information to the model.
End of explanation
# We observe the observed infections.
def observation_fn(t, state_params, extra):
Generate reported cases.
Args:
state_params: A `ParameterStatePair` giving the current parameters
and state.
t: Integer giving the current time.
extra: A `SideInfo` carrying auxiliary information.
Returns:
observations: A Tensor of predicted observables, namely new cases
per city at time `t`.
extra: Update `SideInfo`.
# Undo padding introduced in `inference`.
daily_new_documented_infectious = state_params.state.daily_new_documented_infectious[..., 0]
# Number of people that we have already committed to become
# observed infectious over time.
# shape: batch + [num_particles, num_cities, time]
observations_over_time = extra.observations_over_time
num_timesteps = observations_over_time.shape[-1]
seed, new_seed = samplers.split_seed(extra.seed, salt='reporting delay')
daily_delayed_counts = delay_reporting(
daily_new_documented_infectious, num_timesteps, t,
extra.multinomial_probs, seed)
observations_over_time = observations_over_time + daily_delayed_counts
extra = extra._replace(
observations_over_time=observations_over_time,
seed=new_seed)
# Actual predicted new cases, re-padded.
adjusted_observations = observations_over_time[..., t][..., tf.newaxis]
# Finally observations have variance that is a function of the true observations:
return tfd.MultivariateNormalDiag(
loc=adjusted_observations,
scale_diag=tf.math.maximum(
2., extra.actual_reported_cases[..., t][..., tf.newaxis] / 2.)), extra
Explanation: Here is the complete observation model, packaged for the Ensemble Kalman Filter.
The interesting feature is the reporting delays (computed as previously). The upstream model emits the daily_new_documented_infectious for each city at each time step.
End of explanation
def transition_fn(t, state_params, extra):
SEIR dynamics.
Args:
state_params: A `ParameterStatePair` giving the current parameters
and state.
t: Integer giving the current time.
extra: A `SideInfo` carrying auxiliary information.
Returns:
state_params: A `ParameterStatePair` predicted for the next time step.
extra: Updated `SideInfo`.
mobility_t = extra.mobility_matrix_over_time[..., t]
new_seed, rk4_seed = samplers.split_seed(extra.seed, salt='Transition')
new_state = rk4_one_step(
state_params.state,
extra.population,
mobility_t,
state_params.params,
seed=rk4_seed)
# Make sure population doesn't go below MINIMUM_CITY_FRACTION.
new_population = (
extra.population + state_params.params.intercity_underreporting_factor * (
# Inflow
tf.reduce_sum(mobility_t, axis=-2) -
# Outflow
tf.reduce_sum(mobility_t, axis=-1)))
new_population = tf.where(
new_population < MINIMUM_CITY_FRACTION * extra.initial_population,
extra.initial_population * MINIMUM_CITY_FRACTION,
new_population)
extra = extra._replace(population=new_population, seed=new_seed)
# The Ensemble Kalman Filter code expects the transition function to return a distribution.
# As the dynamics and noise are encapsulated above, we construct a `JointDistribution` that when
# sampled, returns the values above.
new_state = tfd.JointDistributionNamed(
model=tf.nest.map_structure(lambda x: tfd.VectorDeterministic(x), new_state))
params = tfd.JointDistributionNamed(
model=tf.nest.map_structure(lambda x: tfd.VectorDeterministic(x), state_params.params))
state_params = tfd.JointDistributionNamed(
model=ParameterStatePair(state=new_state, params=params))
return state_params, extra
Explanation: Here we define the transition dynamics. We've done the semantic work already; here we just package it for the EAKF framework, and, following Li et al, clip city populations to prevent them from getting too small.
End of explanation
# Use tf.function to speed up EAKF prediction and updates.
ensemble_kalman_filter_predict = tf.function(
tfes.ensemble_kalman_filter_predict, autograph=False)
ensemble_adjustment_kalman_filter_update = tf.function(
tfes.ensemble_adjustment_kalman_filter_update, autograph=False)
def inference(
num_ensembles,
num_batches,
num_iterations,
actual_reported_cases,
mobility_matrix_over_time,
seed=None,
# This is how much to reduce the variance by in every iterative
# filtering step.
variance_shrinkage_factor=0.9,
# Days before infection is reported.
reporting_delay=9.,
# Shape parameter of Gamma distribution.
gamma_shape_parameter=1.85):
Inference for the Shaman, et al. model.
Args:
num_ensembles: Number of particles to use for EAKF.
num_batches: Number of batches of IF-EAKF to run.
num_iterations: Number of iterations to run iterative filtering.
actual_reported_cases: `Tensor` of shape `[L, T]` where `L` is the number
of cities, and `T` is the timesteps.
mobility_matrix_over_time: `Tensor` of shape `[L, L, T]` which specifies the
mobility between locations over time.
variance_shrinkage_factor: Python `float`. How much to reduce the
variance each iteration of iterated filtering.
reporting_delay: Python `float`. How many days before the infection
is reported.
gamma_shape_parameter: Python `float`. Shape parameter of Gamma distribution
of reporting delays.
Returns:
result: A `ModelParams` with fields Tensors of shape [num_batches],
containing the inferred parameters at the final iteration.
print('Starting inference.')
num_timesteps = actual_reported_cases.shape[-1]
params_per_iter = []
multinomial_probs = reporting_delay_probs(
num_timesteps, gamma_shape_parameter, reporting_delay)
seed = samplers.sanitize_seed(seed, salt='Inference')
for i in range(num_iterations):
start_if_time = time.time()
seeds = samplers.split_seed(seed, n=4, salt='Initialize')
if params_per_iter:
parameter_variance = tf.nest.map_structure(
lambda minval, maxval: variance_shrinkage_factor ** (
2 * i) * (maxval - minval) ** 2 / 4.,
PARAMETER_LOWER_BOUNDS, PARAMETER_UPPER_BOUNDS)
params_t = update_params(
num_ensembles,
num_batches,
prev_params=params_per_iter[-1],
parameter_variance=parameter_variance,
seed=seeds.pop())
else:
params_t = initialize_params(num_ensembles, num_batches, seed=seeds.pop())
state_t = initialize_state(num_ensembles, num_batches, seed=seeds.pop())
population_t = sum(x for x in state_t)
observations_over_time = tf.zeros(
[num_ensembles,
num_batches,
actual_reported_cases.shape[0], num_timesteps])
extra = SideInfo(
observations_over_time=observations_over_time,
initial_population=tf.identity(population_t),
mobility_matrix_over_time=mobility_matrix_over_time,
population=population_t,
multinomial_probs=multinomial_probs,
actual_reported_cases=actual_reported_cases,
seed=seeds.pop())
# Clip states
state_t = clip_state(state_t, population_t)
params_t = clip_params(params_t, seed=seeds.pop())
# Accrue the parameter over time. We'll be averaging that
# and using that as our MLE estimate.
params_over_time = tf.nest.map_structure(
lambda x: tf.identity(x), params_t)
state_params = ParameterStatePair(state=state_t, params=params_t)
eakf_state = tfes.EnsembleKalmanFilterState(
step=tf.constant(0), particles=state_params, extra=extra)
for j in range(num_timesteps):
seeds = samplers.split_seed(eakf_state.extra.seed, n=3)
extra = extra._replace(seed=seeds.pop())
# Predict step.
# Inflate and clip.
new_particles = INFLATE_FN(eakf_state.particles)
state_t = clip_state(new_particles.state, eakf_state.extra.population)
params_t = clip_params(new_particles.params, seed=seeds.pop())
eakf_state = eakf_state._replace(
particles=ParameterStatePair(params=params_t, state=state_t))
eakf_predict_state = ensemble_kalman_filter_predict(eakf_state, transition_fn)
# Clip the state and particles.
state_params = eakf_predict_state.particles
state_t = clip_state(
state_params.state, eakf_predict_state.extra.population)
state_params = ParameterStatePair(state=state_t, params=state_params.params)
# We preprocess the state and parameters by affixing a 1 dimension. This is because for
# inference, we treat each city as independent. We could also introduce localization by
# considering cities that are adjacent.
state_params = tf.nest.map_structure(lambda x: x[..., tf.newaxis], state_params)
eakf_predict_state = eakf_predict_state._replace(particles=state_params)
# Update step.
eakf_update_state = ensemble_adjustment_kalman_filter_update(
eakf_predict_state,
actual_reported_cases[..., j][..., tf.newaxis],
observation_fn)
state_params = tf.nest.map_structure(
lambda x: x[..., 0], eakf_update_state.particles)
# Clip to ensure parameters / state are well constrained.
state_t = clip_state(
state_params.state, eakf_update_state.extra.population)
# Finally for the parameters, we should reduce over all updates. We get
# an extra dimension back so let's do that.
params_t = tf.nest.map_structure(
lambda x, y: x + tf.reduce_sum(y[..., tf.newaxis] - x, axis=-2, keepdims=True),
eakf_predict_state.particles.params, state_params.params)
params_t = clip_params(params_t, seed=seeds.pop())
params_t = tf.nest.map_structure(lambda x: x[..., 0], params_t)
state_params = ParameterStatePair(state=state_t, params=params_t)
eakf_state = eakf_update_state
eakf_state = eakf_state._replace(particles=state_params)
# Flatten and collect the inferred parameter at time step t.
params_over_time = tf.nest.map_structure(
lambda s, x: tf.concat([s, x], axis=-1), params_over_time, params_t)
est_params = tf.nest.map_structure(
# Take the average over the Ensemble and over time.
lambda x: tf.math.reduce_mean(x, axis=[0, -1])[..., tf.newaxis],
params_over_time)
params_per_iter.append(est_params)
print('Iterated Filtering {} / {} Ran in: {:.2f} seconds'.format(
i, num_iterations, time.time() - start_if_time))
return tf.nest.map_structure(
lambda x: tf.squeeze(x, axis=-1), params_per_iter[-1])
Explanation: Finally we define the inference method. This is two loops, the outer loop
being Iterated Filtering while the inner loop is Ensemble Adjustment Kalman Filtering.
End of explanation
def clip_state(state, population):
Clip state to sensible values.
state = tf.nest.map_structure(
lambda x: tf.where(x < 0, 0., x), state)
# If S > population, then adjust as well.
susceptible = tf.where(state.susceptible > population, population, state.susceptible)
return SEIRComponents(
susceptible=susceptible,
exposed=state.exposed,
documented_infectious=state.documented_infectious,
undocumented_infectious=state.undocumented_infectious,
daily_new_documented_infectious=state.daily_new_documented_infectious)
def clip_params(params, seed):
Clip parameters to bounds.
def _clip(p, minval, maxval):
return tf.where(
p < minval,
minval * (1. + 0.1 * tf.random.stateless_uniform(p.shape, seed=seed)),
tf.where(p > maxval,
maxval * (1. - 0.1 * tf.random.stateless_uniform(
p.shape, seed=seed)), p))
params = tf.nest.map_structure(
_clip, params, PARAMETER_LOWER_BOUNDS, PARAMETER_UPPER_BOUNDS)
return params
Explanation: Final detail: clipping the parameters and state consists of making sure they are within range, and non-negative.
End of explanation
# Let's sample the parameters.
#
# NOTE: Li et al. run inference 1000 times, which would take a few hours.
# Here we run inference 30 times (in a single, vectorized batch).
best_parameters = inference(
num_ensembles=300,
num_batches=30,
num_iterations=10,
actual_reported_cases=observed_daily_infectious_count,
mobility_matrix_over_time=mobility_matrix_over_time)
Explanation: Running it all together
End of explanation
fig, axs = plt.subplots(2, 3)
axs[0, 0].boxplot(best_parameters.documented_infectious_tx_rate,
whis=(2.5,97.5), sym='')
axs[0, 0].set_title(r'$\beta$')
axs[0, 1].boxplot(best_parameters.undocumented_infectious_tx_relative_rate,
whis=(2.5,97.5), sym='')
axs[0, 1].set_title(r'$\mu$')
axs[0, 2].boxplot(best_parameters.intercity_underreporting_factor,
whis=(2.5,97.5), sym='')
axs[0, 2].set_title(r'$\theta$')
axs[1, 0].boxplot(best_parameters.average_latency_period,
whis=(2.5,97.5), sym='')
axs[1, 0].set_title(r'$Z$')
axs[1, 1].boxplot(best_parameters.fraction_of_documented_infections,
whis=(2.5,97.5), sym='')
axs[1, 1].set_title(r'$\alpha$')
axs[1, 2].boxplot(best_parameters.average_infection_duration,
whis=(2.5,97.5), sym='')
axs[1, 2].set_title(r'$D$')
plt.tight_layout()
Explanation: The results of our inferences. We plot the maximum-likelihood values for all the global parameters to show their variation across our num_batches independent runs of inference. This corresponds to Table S1 in the supplemental materials.
End of explanation |
11,847 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Data Loading and Processing Tutorial
Step2: Let’s quickly read the CSV and get the annotations in an (N, 2) array where N is the number of landmarks.
Step5: Dataset class
torch.utils.data.Dataset is an abstract class representing a dataset. Your custom dataset should inherit Dataset and override the following methods
Step6: Let’s instantiate this class and iterate through the data samples. We will print the sizes of first 4 samples and show their landmarks.
Step10: Transforms
One issue we can see from the above is that the samples are not of the same size. Most neural networks expect images of a fixed size. Therefore, we will need to write some preprocessing code. Let’s create three transforms
Step11: Compose transforms
Step12: Iterating through the dataset
Step14: However, we are losing a lot of features by using a simple for loop to iterate over the data. In particular, we are missing out on | Python Code:
from __future__ import print_function, division
import os
import torch
import pandas as pd
from skimage import io, transform
import numpy as np
import matplotlib.pyplot as plt
from torch.utils.data import Dataset, DataLoader
from torchvision import transforms, utils
# Ignore warnings
import warnings
warnings.filterwarnings("ignore")
plt.ion() # interactive mode
Explanation: Data Loading and Processing Tutorial
End of explanation
landmarks_frame = pd.read_csv('faces/face_landmarks.csv')
n = 65
img_name = landmarks_frame.iloc[n, 0]
landmarks = landmarks_frame.iloc[n, 1:].values
landmarks = landmarks.astype('float').reshape(-1, 2)
print('Image name: {}'.format(img_name))
print('Landmarks shape: {}'.format(landmarks.shape))
print('First 4 Landmarks: {}'.format(landmarks[:4]))
def show_landmarks(image, landmarks):
Show image with landmarks
plt.imshow(image)
plt.scatter(landmarks[:, 0], landmarks[:, 1], s=10, marker='.', c='r')
plt.pause(0.001) # pause a bit so that plots are updated
plt.figure()
show_landmarks(io.imread(os.path.join('faces/', img_name)),
landmarks)
plt.show()
Explanation: Let’s quickly read the CSV and get the annotations in an (N, 2) array where N is the number of landmarks.
End of explanation
class FaceLandmarksDataset(Dataset):
Face Landmarks dataset.
def __init__(self, csv_file, root_dir, transform=None):
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
self.landmarks_frame = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.landmarks_frame)
def __getitem__(self, idx):
img_name = os.path.join(self.root_dir,
self.landmarks_frame.iloc[idx, 0])
image = io.imread(img_name)
landmarks = self.landmarks_frame.iloc[idx, 1:].values
landmarks = landmarks.astype('float').reshape(-1, 2)
sample = {'image': image, 'landmarks': landmarks}
if self.transform:
sample = self.transform(sample)
return sample
Explanation: Dataset class
torch.utils.data.Dataset is an abstract class representing a dataset. Your custom dataset should inherit Dataset and override the following methods:
__len__ so that len(dataset) returns the size of the dataset.
__getitem__ to support indexing such that dataset[i] can be used to get the ith sample.
Let’s create a dataset class for our face landmarks dataset. We will read the csv in __init__ but leave the reading of images to __getitem__. This is memory efficient because all the images are not stored in memory at once but read as required.
A sample of our dataset will be a dict {'image': image, 'landmarks': landmarks}. Our dataset will take an optional argument transform so that any required processing can be applied on the sample. We will see the usefulness of transform in the next section.
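Before moving on, here is a minimal toy illustration of those two methods (an aside using made-up in-memory data, not the landmarks CSV):
class ToyDataset(Dataset):
    def __init__(self, values):
        self.values = values

    def __len__(self):
        return len(self.values)

    def __getitem__(self, idx):
        return self.values[idx]

toy = ToyDataset([10, 20, 30])
print(len(toy), toy[1])  # prints: 3 20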
End of explanation
face_dataset = FaceLandmarksDataset(csv_file='faces/face_landmarks.csv',
root_dir='faces/')
fig = plt.figure()
for i in range(len(face_dataset)):
sample = face_dataset[i]
print(i, sample['image'].shape, sample['landmarks'].shape)
ax = plt.subplot(1, 4, i + 1)
plt.tight_layout()
ax.set_title('Sample #{}'.format(i))
ax.axis('off')
show_landmarks(**sample)
if i == 3:
plt.show()
break
Explanation: Let’s instantiate this class and iterate through the data samples. We will print the sizes of first 4 samples and show their landmarks.
End of explanation
class Rescale(object):
Rescale the image in a sample to a given size.
Args:
output_size (tuple or int): Desired output size. If tuple, output is
matched to output_size. If int, smaller of image edges is matched
to output_size keeping aspect ratio the same.
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
self.output_size = output_size
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
h, w = image.shape[:2]
if isinstance(self.output_size, int):
if h > w:
new_h, new_w = self.output_size * h / w, self.output_size
else:
new_h, new_w = self.output_size, self.output_size * w / h
else:
new_h, new_w = self.output_size
new_h, new_w = int(new_h), int(new_w)
img = transform.resize(image, (new_h, new_w))
# h and w are swapped for landmarks because for images,
# x and y axes are axis 1 and 0 respectively
landmarks = landmarks * [new_w / w, new_h / h]
return {'image': img, 'landmarks': landmarks}
class RandomCrop(object):
Crop randomly the image in a sample.
Args:
output_size (tuple or int): Desired output size. If int, square crop
is made.
def __init__(self, output_size):
assert isinstance(output_size, (int, tuple))
if isinstance(output_size, int):
self.output_size = (output_size, output_size)
else:
assert len(output_size) == 2
self.output_size = output_size
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
h, w = image.shape[:2]
new_h, new_w = self.output_size
top = np.random.randint(0, h - new_h)
left = np.random.randint(0, w - new_w)
image = image[top: top + new_h,
left: left + new_w]
landmarks = landmarks - [left, top]
return {'image': image, 'landmarks': landmarks}
class ToTensor(object):
Convert ndarrays in sample to Tensors.
def __call__(self, sample):
image, landmarks = sample['image'], sample['landmarks']
# swap color axis because
# numpy image: H x W x C
# torch image: C X H X W
image = image.transpose((2, 0, 1))
return {'image': torch.from_numpy(image),
'landmarks': torch.from_numpy(landmarks)}
Explanation: Transforms
One issue we can see from the above is that the samples are not of the same size. Most neural networks expect images of a fixed size. Therefore, we will need to write some preprocessing code. Let’s create three transforms:
Rescale: to scale the image
RandomCrop: to crop from image randomly. This is data augmentation.
ToTensor: to convert the numpy images to torch images (we need to swap axes).
We will write them as callable classes instead of simple functions so that parameters of the transform need not be passed every time it's called. For this, we just need to implement the __call__ method and, if required, the __init__ method. We can then use a transform like this:
End of explanation
scale = Rescale(256)
crop = RandomCrop(128)
composed = transforms.Compose([Rescale(256),
RandomCrop(224)])
# Apply each of the above transforms on sample.
fig = plt.figure()
sample = face_dataset[65]
for i, tsfrm in enumerate([scale, crop, composed]):
transformed_sample = tsfrm(sample)
ax = plt.subplot(1, 3, i + 1)
plt.tight_layout()
ax.set_title(type(tsfrm).__name__)
show_landmarks(**transformed_sample)
plt.show()
Explanation: Compose transforms
End of explanation
transformed_dataset = FaceLandmarksDataset(csv_file='faces/face_landmarks.csv',
root_dir='faces/',
transform=transforms.Compose([
Rescale(256),
RandomCrop(224),
ToTensor()
]))
for i in range(len(transformed_dataset)):
sample = transformed_dataset[i]
print(i, sample['image'].size(), sample['landmarks'].size())
if i == 3:
break
Explanation: Iterating through the dataset
End of explanation
dataloader = DataLoader(transformed_dataset, batch_size=4,
shuffle=True, num_workers=4)
# Helper function to show a batch
def show_landmarks_batch(sample_batched):
Show image with landmarks for a batch of samples.
images_batch, landmarks_batch = \
sample_batched['image'], sample_batched['landmarks']
batch_size = len(images_batch)
im_size = images_batch.size(2)
grid = utils.make_grid(images_batch)
plt.imshow(grid.numpy().transpose((1, 2, 0)))
for i in range(batch_size):
plt.scatter(landmarks_batch[i, :, 0].numpy() + i * im_size,
landmarks_batch[i, :, 1].numpy(),
s=10, marker='.', c='r')
plt.title('Batch from dataloader')
for i_batch, sample_batched in enumerate(dataloader):
print(i_batch, sample_batched['image'].size(),
sample_batched['landmarks'].size())
# observe 4th batch and stop.
if i_batch == 3:
plt.figure()
show_landmarks_batch(sample_batched)
plt.axis('off')
plt.ioff()
plt.show()
break
Explanation: However, we are losing a lot of features by using a simple for loop to iterate over the data. In particular, we are missing out on:
Batching the data
Shuffling the data
Load the data in parallel using multiprocessing workers.
torch.utils.data.DataLoader is an iterator which provides all these features. Parameters used below should be clear. One parameter of interest is collate_fn. You can specify how exactly the samples need to be batched using collate_fn. However, default collate should work fine for most use cases.
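For instance, a custom collate_fn for this dataset could be as small as the following sketch (an illustrative assumption, not something the tutorial requires; the default collate already handles dicts of tensors like ours):
def dict_collate(batch):
    # batch is a list of {'image': ..., 'landmarks': ...} sample dicts.
    return {'image': torch.stack([sample['image'] for sample in batch]),
            'landmarks': torch.stack([sample['landmarks'] for sample in batch])}

custom_loader = DataLoader(transformed_dataset, batch_size=4, collate_fn=dict_collate)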
End of explanation |
11,848 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD
Step1: Load sample subject data
Step2: Plot the raw data and CSD-transformed raw data
Step3: Also look at the power spectral densities
Step4: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
Step5: First let's look at how CSD affects scalp topography
Step6: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution | Python Code:
# Authors: Alex Rockhill <[email protected]>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
Explanation: Transform EEG data using current source density (CSD)
This script shows an example of how to use CSD
:footcite:PerrinEtAl1987,PerrinEtAl1989,Cohen2014,KayserTenke2015.
CSD takes the spatial Laplacian of the sensor signal (derivative in both
x and y). It does what a planar gradiometer does in MEG. Computing these
spatial derivatives reduces point spread. CSD transformed data have a sharper
or more distinct topography, reducing the negative impact of volume conduction.
End of explanation
raw = mne.io.read_raw_fif(data_path + '/MEG/sample/sample_audvis_raw.fif')
raw = raw.pick_types(meg=False, eeg=True, eog=True, ecg=True, stim=True,
exclude=raw.info['bads']).load_data()
events = mne.find_events(raw)
raw.set_eeg_reference(projection=True).apply_proj()
Explanation: Load sample subject data
End of explanation
raw_csd = mne.preprocessing.compute_current_source_density(raw)
raw.plot()
raw_csd.plot()
Explanation: Plot the raw data and CSD-transformed raw data:
End of explanation
raw.plot_psd()
raw_csd.plot_psd()
Explanation: Also look at the power spectral densities:
End of explanation
event_id = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'smiley': 5, 'button': 32}
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=.5,
preload=True)
evoked = epochs['auditory'].average()
Explanation: CSD can also be computed on Evoked (averaged) data.
Here we epoch and average the data so we can demonstrate that.
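As a brief aside (not part of the original script), the same function also accepts the Epochs object directly, for example:
epochs_csd = mne.preprocessing.compute_current_source_density(epochs)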
End of explanation
times = np.array([-0.1, 0., 0.05, 0.1, 0.15])
evoked_csd = mne.preprocessing.compute_current_source_density(evoked)
evoked.plot_joint(title='Average Reference', show=False)
evoked_csd.plot_joint(title='Current Source Density')
Explanation: First let's look at how CSD affects scalp topography:
End of explanation
fig, ax = plt.subplots(4, 4)
fig.subplots_adjust(hspace=0.5)
fig.set_size_inches(10, 10)
for i, lambda2 in enumerate([0, 1e-7, 1e-5, 1e-3]):
for j, m in enumerate([5, 4, 3, 2]):
this_evoked_csd = mne.preprocessing.compute_current_source_density(
evoked, stiffness=m, lambda2=lambda2)
this_evoked_csd.plot_topomap(
0.1, axes=ax[i, j], outlines='skirt', contours=4, time_unit='s',
colorbar=False, show=False)
ax[i, j].set_title('stiffness=%i\nλ²=%s' % (m, lambda2))
Explanation: CSD has parameters stiffness and lambda2 affecting smoothing and
spline flexibility, respectively. Let's see how they affect the solution:
End of explanation |
11,849 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Disclaimer
Step1: Install requirements and login into the right email and project
Step2: Update Params
Step4: Generate and load sample GA dataset
Based on an anonymized public GA dataset. <br>
We are creating sample training and test datasets to use as input for the propensity model.
Step5: Load Training Data from BQ
Step6: Train & Test the model (Simple) - Logistic Regression model
We are using a Logistic Regression model which predicts if the user will convert. <br>
The model output is a 0 (false) or 1 (true) for each user.
Step7: Package & Upload Model to GCP (Simple)
Step8: Train, Test & Upload the model (Advanced) - Logistic Regression model with probability outputs
We are using a Logistic Regression model to predict if the user will convert. <br>
The model output is [class_label, probability], e.g. [1, 0.95]. That is, there's 95% chance the user will convert.
Step9: If model not created, create the model by uncommenting the first 2 lines.
Step10: (OPTIONAL) Testing predictions from the AI Platform
Step11: Logistic Regression Model (Simple)
Step12: Logistic Regression Model with Probability Outputs (Advanced)
Step13: Automation with Modem - Parameter Specification
1. Select BQ feature rows for model input
Say training/test dataset has the schema (in BQ) - 'id', 'feature1', 'feature2'. The model uses 'feature1' & 'feature2', then those are the column names. In this example, only 'all_visits' is used as an input column.
Step14: 2. Create mapping between BQ schema & Data Import schema
The idea here is to think about how the outputs should be mapped before testing and automation. This will be used in the automation piece. <br>
There are 3 distinct cases -
1. Data Import schema includes the same column from BigQuery (e.g. clientId) <br>
'clientId' | Python Code:
!python --version && gcloud components update
Explanation: Disclaimer: The following code demonstrates a sample model creation with AI Platform, based on GA-BQ export dataset. This is meant for inspiration only. We expect analysts/data scientists to identify the right set of features to create retargeted audiences based on their business needs.
Local Setup
Cloud AI Platform model versions need to be compatible across the Python interpreter, scikit-learn version, and AI Platform ML runtime. To maintain consistency, we'll be using Python 3.7, scikit-learn (0.20.4) and ML runtime 1.15.
As the default interpreter for Colab is Python 3.6, we'll be using a local runtime.
Open a shell on your system and follow the instructions -
1. Create & activate a virtualenv with Python 3.7, e.g.
python3.7 -m virtualenv venv && source venv/bin/activate
2. Type
pip install jupyter_http_over_ws
3. Type
jupyter serverextension enable --py jupyter_http_over_ws
4. Start local server:
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' --port=8888 --NotebookApp.port_retries=0
5. Copy the server URL and paste in Backend URL field. ('Connect to a local runtime' on the top left)
Check if you are using python3.7 and update gcloud SDK. (Install if needed)
End of explanation
!pip install scikit-learn==0.20.4 google-cloud-bigquery pandas numpy google-api-python-client
!gcloud init
Explanation: Install requirements and login into the right email and project
End of explanation
GCP_PROJECT_ID = "" #@param {type:"string"}
BQ_DATASET = "" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
#@title Enter Model Parameters
GCS_MODEL_DIR = "gs://" #@param {type: "string"}
MODEL_NAME = "" #@param {type:"string"}
VERSION_NAME = "" #@param {type: "string"}
FRAMEWORK = "SCIKIT_LEARN" #@param ["SCIKIT_LEARN", "TENSORFLOW", "XGBOOST"]
if GCS_MODEL_DIR[-1] != '/':
GCS_MODEL_DIR = GCS_MODEL_DIR + '/'
import math
from google.cloud import bigquery
client = bigquery.Client(project=GCP_PROJECT_ID)
Explanation: Update Params
End of explanation
my_query =
WITH sample_raw_data AS (
SELECT CAST(CEIL(RAND() * 100) AS INT64) AS clientId, * EXCEPT (clientId) FROM `bigquery-public-data.google_analytics_sample.ga_sessions_20170801` LIMIT 1000
),
visit_data AS (
SELECT clientId, SUM(totals.visits) AS all_visits, CAST(ROUND(RAND() * 1) AS INT64) AS converted
FROM sample_raw_data
GROUP BY clientId
)
SELECT *
FROM visit_data
df = client.query(my_query).to_dataframe()
df.head()
training_data_size = math.ceil(df.shape[0] * 0.7)
training_data = df[:training_data_size]
test_data = df[training_data_size:]
training_data.to_csv('training.csv', index=False)
test_data.to_csv('test.csv', index=False)
BQ_TABLE_TRAINING = BQ_DATASET+".training_data"
BQ_TABLE_TEST = BQ_DATASET+".test_data"
!bq load --project_id $GCP_PROJECT_ID --autodetect --source_format='CSV' $BQ_TABLE_TRAINING training.csv
!bq load --project_id $GCP_PROJECT_ID --autodetect --source_format='CSV' $BQ_TABLE_TEST test.csv
Explanation: Generate and load sample GA dataset
Based on an anonymized public GA dataset. <br>
We are creating sample training and test datasets to use as input for the propensity model.
End of explanation
my_query = "SELECT * FROM `{0}.{1}`".format(GCP_PROJECT_ID,BQ_TABLE_TRAINING)
training = client.query(my_query).to_dataframe()
training.head()
Explanation: Load Training Data from BQ
End of explanation
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from googleapiclient import discovery
import pandas as pd
import numpy as np
import pickle
features, labels = training[["all_visits"]], training["converted"]
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size = 0.2, random_state=1)
X_train.shape, X_test.shape
lr = LogisticRegression(penalty='l2')
lr.fit(X_train, y_train)
y_pred = lr.predict(X_test)
y_pred[:5]
accuracy_score(y_test, y_pred)
confusion_matrix(y_test, y_pred)
lr.predict_proba(X_test)
Explanation: Train & Test the model (Simple) - Logistic Regression model
We are using a Logistic Regression model which predicts if the user will convert. <br>
The model output is a 0 (false) or 1 (true) for each user.
End of explanation
with open('model.pkl', 'wb') as f:
pickle.dump(lr,f)
! gsutil cp model.pkl $GCS_MODEL_DIR
! gcloud config set project $GCP_PROJECT_ID
! gcloud ai-platform models create $MODEL_NAME --regions $REGION
! gcloud ai-platform versions create $VERSION_NAME --model $MODEL_NAME --origin $GCS_MODEL_DIR --runtime-version=1.15 --framework $FRAMEWORK --python-version=3.7
Explanation: Package & Upload Model to GCP (Simple)
End of explanation
%%writefile predictor.py
import os
import pickle
import numpy as np
class MyPredictor(object):
def __init__(self, model):
self._model = model
def predict(self, instances, **kwargs):
inputs = np.asarray(instances)
probabilities = self._model.predict_proba(inputs).tolist()
outputs = [[p.index(max(p)), max(p)] for p in probabilities] #label, probability
return outputs
@classmethod
def from_path(cls, model_dir):
model_path = os.path.join(model_dir, 'model.pkl')
with open(model_path, 'rb') as f:
model = pickle.load(f)
return cls(model)
%%writefile setup.py
from setuptools import setup
setup(
name='my_custom_code',
version='0.1',
scripts=['predictor.py'])
GCS_CUSTOM_ROUTINE_PATH = GCS_MODEL_DIR +"my_custom_code-0.1.tar.gz"
GCS_MODEL_PATH = GCS_MODEL_DIR + "model/"
ADVANCED_VERSION_NAME = VERSION_NAME + "_2"
!python setup.py sdist --formats=gztar
!gsutil cp model.pkl $GCS_MODEL_PATH
!gsutil cp ./dist/my_custom_code-0.1.tar.gz $GCS_CUSTOM_ROUTINE_PATH
Explanation: Train, Test & Upload the model (Advanced) - Logistic Regression model with probability outputs
We are using a Logistic Regression model to predict if the user will convert. <br>
The model output is [class_label, probability], e.g. [1, 0.95]. That is, there's 95% chance the user will convert.
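Before deploying, the custom routine can be smoke-tested locally (a quick sketch; it assumes the notebook's working directory still contains the predictor.py and model.pkl written above, and the exact numbers will differ):
from predictor import MyPredictor

local_predictor = MyPredictor.from_path('.')
print(local_predictor.predict([[3.0], [10.0]]))  # e.g. [[1, 0.93], [0, 0.72]]; values will vary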
End of explanation
#!gcloud config set project $GCP_PROJECT_ID
#!gcloud ai-platform models create $MODEL_NAME --regions $REGION
!gcloud beta ai-platform versions create $ADVANCED_VERSION_NAME --model $MODEL_NAME --origin $GCS_MODEL_PATH --runtime-version=1.15 --python-version=3.7 --package-uris $GCS_CUSTOM_ROUTINE_PATH --prediction-class predictor.MyPredictor
Explanation: If model not created, create the model by uncommenting the first 2 lines.
End of explanation
my_query = "SELECT * FROM `{0}.{1}`".format(GCP_PROJECT_ID,BQ_TABLE_TEST)
test = client.query(my_query).to_dataframe()
features_df = test["all_visits"]
features = features_df.values.tolist()
features = [[f] for f in features] if len(np.array(features).shape) == 1 else features
features[:5]
Explanation: (OPTIONAL) Testing predictions from the AI Platform
End of explanation
ai_platform = discovery.build("ml", "v1")
name = 'projects/{}/models/{}/versions/{}'.format(GCP_PROJECT_ID, MODEL_NAME, VERSION_NAME)
response = ai_platform.projects().predict(name=name, body={'instances': features}).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
predictions = response['predictions']
print(predictions[:5])
test['predicted'] = predictions
test.head()
accuracy_score(test['converted'], test['predicted'])
confusion_matrix(test['converted'], test['predicted'])
Explanation: Logistic Regression Model (Simple)
End of explanation
ai_platform = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(GCP_PROJECT_ID, MODEL_NAME, ADVANCED_VERSION_NAME)
response = ai_platform.projects().predict(name=name, body={'instances': features}).execute()
if 'error' in response:
raise RuntimeError(response['error'])
else:
predictions = response['predictions']
print(predictions[:5])
test['advanced_labels'] = [p[0] for p in predictions]
test['advanced_probs'] = [p[1] for p in predictions]
test.head()
def postprocess_output(df):
df = df[df['advanced_labels'] == 1] #predicted to convert
df['decile'] = pd.qcut(df['advanced_probs'], 10, labels=False, duplicates='drop')
col_mapper = {'decile': 'ga:dimension1',
'clientId': 'ga:userId'}
df_col_names = list(col_mapper.keys())
export_names = [col_mapper[key] for key in df_col_names]
df = df[df_col_names]
df.columns = export_names
return df
postprocess_output(test)
Explanation: Logistic Regression Model with Probability Outputs (Advanced)
End of explanation
MODEL_INPUT_COL_NAMES = ['all_visits']
Explanation: Automation with Modem - Parameter Specification
1. Select BQ feature rows for model input
Say training/test dataset has the schema (in BQ) - 'id', 'feature1', 'feature2'. The model uses 'feature1' & 'feature2', then those are the column names. In this example, only 'all_visits' is used as an input column.
End of explanation
#case 2
CSV_COLUMN_MAP = {'clientId': 'ga:userId',
'predicted': 'ga:dimension1'}
#case 3
CSV_COLUMN_MAP = {'clientId': 'ga:userId',
'decile': 'ga:dimension2'}
Explanation: 2. Create mapping between BQ schema & Data Import schema
The idea here is to think about how the outputs should be mapped before testing and automation. This will be used in the automation piece. <br>
There are 3 distinct cases -
1. Data Import schema includes the same column from BigQuery (e.g. clientId) <br>
'clientId': 'ga:userId'
2. Data Import schema includes the model output without any post processing (e.g. kMeans cluster number, logistic class number) In this case, always use the predicted key as follows - <br>
'predicted': 'ga:dimension1'
3. Data Import schema includes the model output with post processing (e.g. predict_proba output from logistic regression model) - <br> In this case, the key should be the same as the intended post-processed column name (say, decile). Check the example above for more details. <br>
'decile': 'ga:dimension2'
End of explanation |
11,850 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Report 3
by Kaitlyn Keil and Kevin Zhang
April 2017
Data Science Spring 2017
For reference on the database used
Step2: We then create a function to read in our dataset and clean it, pruning specifically the columns that we care about.
Step3: We then create our cleaned dataset.
Step4: To get a sense of what we're dealing with, we directly plotted all foods on a log-log scale using a scatter plot, with the Carbs and Proteins as the axes and Fat as the radius of the dots. At first glance, it appears that the foods aren't particularly differentiated by nutrient content.
Step6: We created a function that could take in a category and plot a 3D scatter plot of the data, thus giving a modular function to use going into the rest of the code.
Step7: We now begin creating labels for the different food groups. Here we add a new column to hold the category given to the food by the Food Pyramid. We then create a new dataframe with only these categories for use in the rest of the code. The specific labels given are based on later knowledge of which K-Means groups matched the best with the Food Pyramid groups based on nutrient content.
Step8: We now create a K-Means object and run it on the food data with only the macronutrients as criteria. We include sugar because we believe that sugar is also a very useful metric both in categorizing food and in helping people make decisions about diet.
Step9: Below is the 3D scatter plot showing all the clusters from the K-Means algorithm.
Step10: We now separate out different categories for analysis.
Step11: We then make another column that holds the correct guess for each food, in other words whether that food based on its K-Means group was placed in the same group as the one given to it from the Food Pyramid.
Step12: We took all the categories from K-Means and displayed each cluster's average nutrient content, which told us what group was most simiar to its corresponding one from the Food Pyramid.
Step13: Following are two examples, plotted on the 3d log scale. For the rest of the comparisons, see the bottom of the journal. This is a comparison of the two meat groups, one given by the Food Pyramid, and one proposed by the K-Means algorithm.
Step14: This one is the same as above, except it shows a comparison between the two for the fruit group.
Step15: We then generate an accuracy score using sklearn, which evaluates whether a particular food in K-Means was categorized in the same group as in the Food Pyramid across all foods. The result, 57% is barely above half.
Step16: To gain a better understanding of what's happening, we decided to create a confusion matrix to view all the different individual accuracies between clusters. Below shows all of the clusters and where the foods ended up being grouped. Note that a perfect match would mean that the main diagonal of the matrix would be entirely black.
Step18: We then created another function that directly compares the most similar food groups between K-Means and the Food Pyramid. This modular function compares two labels against each other, and as stated above, the labels were created with later knowledge of the most similar groupings.
Step19: Below are the 3D scatter plots on log-log-log scale for the 6 different food groups from the Food Pyramid with their most similar cluster from the K-Means algorithm. Note that while some are pretty coincident, others are completely off.
Step20: We took the foods from the Food Pyramid and distributed them across the different clusters from K-Means, to really explicitly see which where the different foods lie. It would appear that the groupings from the Food Pyramid are completely scattered among the different clusters from K-Means, showing that their groupings are not particuarly in line with the foods' nutrient content.
Step22: Below are some examples of foods within the 6 groups made by the K-Means algorithm. We present to you, for fun, a small list of foods in each category, revealing some interesting wonders, as well as potentially some flaw in our design choices.
Step23: The proposed Meat group, followed by the proposed Vegetable group.
Step24: The proposed Cereal Group, followed by the proposed Fruits group.
Step25: The proposed Fat group, followed by the proposed Dairy group. | Python Code:
from __future__ import print_function, division
import pandas as pd
import sys
import numpy as np
import math
import matplotlib.pyplot as plt
from sklearn.feature_extraction import DictVectorizer
%matplotlib inline
import seaborn as sns
from collections import defaultdict, Counter
import statsmodels.formula.api as smf
from mpl_toolkits import mplot3d
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
Explanation: Report 3
by Kaitlyn Keil and Kevin Zhang
April 2017
Data Science Spring 2017
For reference on the database used: UK Nutrient Database
To view variables and other user information about the database: User Documentation
This notebook is meant to assess the question of whether the Food Pyramid as prescribed by government and social standards is a useful tool in creating a balanced diet plan for oneself. This question stems from the controversy over whether the Food Pyramid is truly an accurate representation of food, and thus creating a divide between people who adhere to it as the food bible and others who go off on their own to find their own balanced diet.
The database used is the UK Nutrient database, which contains lots of information about nutrient composition of over 3000 common and popular foods throughout the UK. The granularity of the database makes it a very relevant and informative dataset for the purposes of answering the question.
To answer the question of whether the Food Pyramid is a "good" representation of food groups, we define "good" as being similar in its nutrient composition, specifically the macronutrients of Fats, Carbs, and Proteins. We choose this criteria because this is also how the Food Pyramid supposedly created its categories. We will use K-Means Clustering on the dataset of foods unsupervised using solely their nutrient compositions as criteria, and then compare the naturally clustered groups with the groups given by the Food Pyramid.
Overall, it appears that the Food Pyramid can be said to be outdated. Out of all foods evaluated, only about half were correctly categorized based on macronutrients. Most of the groups made by the Food Pyramid were scattered into several groups in the K-Means clusters. Perhaps the people who moved on to find their own balanced diet were on to something.
To view our code and a more detailed version of our work, continue reading.
To begin, we bring in all the imports.
End of explanation
def ReadProximates():
Reads the correct sheet from the Excel Spreadsheet downloaded from the databank.
Cleans the macronutrient data and replaces non-numerical entries with 0.
Returns: cleaned DataFrame
df = pd.read_excel('dietary.xls', sheetname='Proximates')
column_list = ['Water (g)', 'Protein (g)', 'Fat (g)', 'Carbohydrate (g)', 'Total sugars (g)']
df['Protein'] = pd.to_numeric(df['Protein (g)'], errors='coerce')
df['Fat'] = pd.to_numeric(df['Fat (g)'], errors='coerce')
df['Carbohydrate'] = pd.to_numeric(df['Carbohydrate (g)'], errors='coerce')
df['Sugars'] = pd.to_numeric(df['Total sugars (g)'], errors='coerce')
df['Protein'].replace([np.nan], 0, inplace=True)
df['Fat'].replace([np.nan], 0, inplace=True)
df['Carbohydrate'].replace([np.nan], 0, inplace=True)
df['Sugars'].replace([np.nan], 0, inplace=True)
return df
Explanation: We then create a function to read in our dataset and clean it, pruning specifically the columns that we care about.
End of explanation
tester = ReadProximates()
Explanation: We then create our cleaned dataset.
End of explanation
x_vals = 'Protein'
y_vals = 'Carbohydrate'
z_vals = 'Fat'
food_group_dict = {'A':['Cereals','peru'], 'B':['Dairy','beige'], 'C':['Egg','paleturquoise'],
'D':['Vegetable','darkolivegreen'], 'F':['Fruit','firebrick'], 'G':['Nuts','saddlebrown'],
'J':['Fish','slategray'],'M':['Meat','indianred'], 'O':['Fat','khaki']}
ax = plt.subplot(111)
for key,val in food_group_dict.items():
df = tester[tester.Group.str.startswith(key, na=False)]
ax.scatter(df[x_vals],df[y_vals],df[z_vals],color=val[1],label = val[0])
plt.xscale('log')
plt.yscale('log')
ax.set_xlabel(x_vals+' (g)')
ax.set_ylabel(y_vals+' (g)')
ax.legend()
Explanation: To get a sense of what we're dealing with, we directly plotted all foods on a log-log scale using a scatter plot, with the Carbs and Proteins as the axes and Fat as the radius of the dots. At first glance, it appears that the foods aren't particularly differentiated by nutrient content.
End of explanation
def ThreeDPlot(pred_cat, actual_cat, ax, actual_label, colors = ['firebrick', 'peru']):
Creates a 3D log log plot on the requested subplot.
Arguments:
pred_cat = predicted dataframe for a category
actual_cat = dataframe of the real category
ax = plt axis instance
actual_label = string with label for the actual category
colors = list with two entries of strings for color names
ax.scatter3D(np.log(pred_cat.Protein),np.log(pred_cat.Carbs), np.log(pred_cat.Fat), c = colors[0], label = 'Predicted Group')
ax.scatter3D(np.log(actual_cat.Protein),np.log(actual_cat.Carbohydrate), np.log(actual_cat.Fat), c = colors[1], label = actual_label, alpha= .5)
ax.view_init(elev=10, azim=45)
ax.set_xlabel('Protein (log g)')
ax.set_ylabel('Carbohydrate (log g)')
ax.set_zlabel('Fat (log g)')
plt.legend()
Explanation: We created a function that could take in a category and plot a 3D scatter plot of the data, thus giving a modular function to use going into the rest of the code.
End of explanation
cereals = tester[tester.Group.str.startswith('A', na=False)]
cereals['Label'] = cereals.Protein*0+2
fruits = tester[tester.Group.str.startswith('F', na=False)]
fruits['Label'] = fruits.Protein*0+3
veggies = tester[tester.Group.str.startswith('D', na=False)]
veggies['Label'] = veggies.Protein*0+1
dairy = tester[tester.Group.str.startswith('B', na=False)]
dairy['Label'] = dairy.Protein*0+5
oils = tester[tester.Group.str.startswith('O', na=False)]
oils['Label'] = oils.Protein*0+4
m1 = tester[tester.Group.str.startswith('J', na=False)]
m2 = tester[tester.Group.str.startswith('M', na=False)]
meats = pd.concat([m1,m2])
meats['Label'] = meats.Protein*0
all_these = pd.concat([cereals, fruits, veggies, dairy, oils, meats])
Explanation: We now begin creating labels for the different food groups. Here we add a new column to hold the category given to the food by the Food Pyramid. We then create a new dataframe with only these categories for use in the rest of the code. The specific labels given are based on later knowledge of which K-Means groups matched the best with the Food Pyramid groups based on nutrient content.
End of explanation
# Selects the appropriate macronutrient columns to feed to the kmeans algorithm
protein = pd.Series(all_these.Protein, name='Protein')
fat = pd.Series(all_these.Fat, name='Fat')
carbs = pd.Series(all_these.Carbohydrate, name='Carbs')
sugars = pd.Series(all_these['Sugars'], name='Sugars')
# Create a new DataFrame using only the macronutrient columns
X = pd.concat([protein,fat,carbs,sugars], axis=1)
X = X.fillna(0)
kmeans = KMeans(n_clusters=6, random_state=0)
kmeans.fit(X.dropna())
y_kmeans = kmeans.predict(X)
Explanation: We now create a K-Means object and run it on the food data with only the macronutrients as criteria. We include sugar because we believe that sugar is also a very useful metric both in categorizing food and in helping people make decisions about diet.
End of explanation
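As a quick sanity check on the clustering itself, we can also look at the fitted centroids directly; this sketch assumes the kmeans object above and the Protein/Fat/Carbs/Sugars column order used when building X.
# The cluster centres are reported in the same feature order used to build X
centers = pd.DataFrame(kmeans.cluster_centers_, columns=['Protein', 'Fat', 'Carbs', 'Sugars'])
centers.round(2)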
ax = plt.subplot(projection='3d')
ax.scatter3D(np.log(X.Protein),np.log(X.Carbs), np.log(X.Fat), c = y_kmeans)
ax.view_init(elev=10, azim=45)
ax.set_xlabel('Protein (log g)')
ax.set_ylabel('Carbohydrate (log g)')
ax.set_zlabel('Fat (log g)')
Explanation: Below is the 3D scatter plot showing all the clusters from the K-Means algorithm.
End of explanation
# Create a way to select the categories
predicted_labels = pd.DataFrame(y_kmeans, index=X.index).astype(float)
X['predictions'] = predicted_labels
# Separate out the categories for individual analysis
labeled0 = X[X.predictions == 0]
labeled1 = X[X.predictions == 1]
labeled2 = X[X.predictions == 2]
labeled3 = X[X.predictions == 3]
labeled4 = X[X.predictions == 4]
labeled5 = X[X.predictions == 5]
Explanation: We now separate out different categories for analysis.
End of explanation
all_these['guess'] = predicted_labels[0]
all_these['correct_guess'] = np.where((all_these.Label == all_these.guess), True, False)
Explanation: We then make another column that records whether each food's K-Means group matches the group given to it by the Food Pyramid.
End of explanation
all_these.groupby('guess').mean()
Explanation: We took all the categories from K-Means and displayed each cluster's average nutrient content, which told us which Food Pyramid group each cluster was most similar to.
End of explanation
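To make that table easier to read, we can attach the descriptive names the rest of the notebook uses for each K-Means label; the mapping below is taken from the predlabels list used for the confusion matrix further down, so treat it as a convenience sketch.
# Human-readable names for the K-Means cluster labels
guess_names = {0: 'high protein', 1: 'all low', 2: 'high carb, low sugar',
               3: 'high carb, high sugar', 4: 'high fat', 5: 'all medium'}
all_these.groupby('guess').mean().rename(index=guess_names)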
ax = plt.subplot(projection='3d')
ThreeDPlot(labeled0, meats, ax, 'Meats', ['firebrick','slategray'])
Explanation: Following are two examples, plotted on the 3d log scale. For the rest of the comparisons, see the bottom of the journal. This is a comparison of the two meat groups, one given by the Food Pyramid, and one proposed by the K-Means algorithm.
End of explanation
ax = plt.subplot(projection='3d')
ThreeDPlot(labeled3, fruits, ax, 'Fruits', ['firebrick','purple'])
Explanation: This one is the same as above, except it shows a comparison between the two for the fruit group.
End of explanation
accuracy_score(all_these.Label,predicted_labels)
Explanation: We then generate an accuracy score using sklearn, which measures, across all foods, whether each food's K-Means cluster matches its Food Pyramid group. The result, 57%, is barely above half.
End of explanation
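For intuition, the same number can be computed by hand as the fraction of exact matches between the two label columns already present in the dataframe.
# Equivalent hand computation: the share of foods whose K-Means label matches their Food Pyramid label
(all_these.Label == all_these.guess).mean()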
# Look at confusion matrix for some idea of accuracy. Meats has the highest rate of matching.
labels = ["meats", "vegetables", "cereal", "fruit", "oils", "dairy"]
predlabels = ["high protein", "all low", "high carb, low sugar", "high carb, high sugar", "high fat", "all medium"]
mat = confusion_matrix(all_these.Label, predicted_labels)
sns.heatmap(mat.T, square=True, xticklabels=labels, yticklabels=predlabels, annot=True, fmt="d", linewidth=.5)
plt.xlabel('Food Pyramid label')
plt.ylabel('K-Means label')
plt.title("Matrix Comparison of K-Means vs. Food Pyramid")
Explanation: To gain a better understanding of what's happening, we created a confusion matrix to view the individual accuracies between clusters. The plot below shows all of the clusters and where the foods ended up being grouped. Note that a perfect match would put every food on the main diagonal of the matrix.
End of explanation
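If you have not read a confusion matrix before, a tiny toy example may help: in scikit-learn's raw output, rows index the true labels and columns index the predicted labels, so anything off the diagonal is a mismatch (the heatmap above plots the transpose, which is why its rows are the K-Means labels).
# Toy example: two true 0s and two true 1s, with one 0 mispredicted as 1
confusion_matrix([0, 0, 1, 1], [0, 1, 1, 1])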
def HowMatched3D(df, label_int, actual_label):
    """Plots the matches and mismatches for one Food Pyramid group in 3D log-log space.
    Arguments:
    df = dataframe with Label, guess and correct_guess columns
    label_int = integer label of the group to inspect
    actual_label = string with the name of the Food Pyramid group
    """
ax = plt.subplot(projection='3d')
TP = df[(df.Label == label_int)&(df.correct_guess==True)]
FP = df[(df.guess == label_int)&(df.correct_guess==False)]
FN = df[(df.Label == label_int)&(df.correct_guess==False)]
print('Matches:',len(TP), 'In Group, is not '+actual_label+':',len(FP), 'Not in Group, is '+actual_label+':',len(FN))
ax.scatter3D(np.log(TP.Protein),np.log(TP.Carbohydrate), np.log(TP.Fat), c = '#8F008F', label = 'In Group, is '+actual_label)
ax.scatter3D(np.log(FP.Protein),np.log(FP.Carbohydrate), np.log(FP.Fat), c = '#EB4C4C', label = 'In Group, is not '+actual_label)
ax.scatter3D(np.log(FN.Protein),np.log(FN.Carbohydrate), np.log(FN.Fat), c = '#4CA6FF', label = 'Not in Group, is '+actual_label)
ax.view_init(elev=10, azim=45)
ax.set_xlabel('Protein (log g)')
ax.set_ylabel('Carbohydrate (log g)')
ax.set_zlabel('Fat (log g)')
plt.legend()
Explanation: We then created another function that directly compares the most similar food groups between K-Means and the Food Pyramid. This modular function compares two labels against each other, and as stated above, the labels were created with later knowledge of the most similar groupings.
End of explanation
HowMatched3D(all_these, 0, 'Meat')
HowMatched3D(all_these, 1, 'Vegetable')
HowMatched3D(all_these, 2, 'Cereal')
HowMatched3D(all_these, 3, 'Fruit')
HowMatched3D(all_these, 4, 'Oil')
HowMatched3D(all_these, 5, 'Dairy')
Explanation: Below are the 3D scatter plots on log-log-log scale for the 6 different food groups from the Food Pyramid with their most similar cluster from the K-Means algorithm. Note that while some are pretty coincident, others are completely off.
End of explanation
df = pd.DataFrame(mat.T/mat.T.sum(axis=0),
index=["high protein", "all low", "high carb, low sugar", "high carb, high sugar", "high fat", "all medium"],
columns=["meats", "vegetables", "cereals", "fruits", "fats", "dairy"])
df.columns.name = 'Group Breakdown (percentages)'
df = df.round(2)
df = df.multiply(100)
df = df.astype(int).astype(str) + '%'  # append a percent sign to every cell at once
df
Explanation: We took the foods from the Food Pyramid and distributed them across the different clusters from K-Means, to see explicitly where the different foods lie. It would appear that the groupings from the Food Pyramid are completely scattered among the different clusters from K-Means, showing that their groupings are not particularly in line with the foods' nutrient content.
End of explanation
def Examples(df, label_int, si = [0,5]):
    """Prints example foods that matched and foods that were surprising for one group.
    Arguments:
    df = dataframe with Label, guess and correct_guess columns
    label_int = integer label of the group to inspect
    si = two-element list giving the slice of example food names to print
    """
TP = df[(df.Label == label_int)&(df.correct_guess==True)]
FP = df[(df.guess == label_int)&(df.correct_guess==False)]
print("Guessed Similar:")
print(TP["Food Name"][si[0]:si[1]])
print("\nSurprising:")
print(FP["Food Name"][si[0]:si[1]])
Explanation: Below are some examples of foods within the 6 groups made by the K-Means algorithm. We present to you, for fun, a small list of foods in each category, revealing some interesting surprises, as well as potentially some flaws in our design choices.
End of explanation
print('High Protein Group')
Examples(all_these, 0)
print('\nLow Everything Group')
Examples(all_these, 1)
Explanation: The proposed Meat group, followed by the proposed Vegetable group.
End of explanation
print('High Carb, Low Sugar Group')
Examples(all_these, 2)
print('\nHigh Carb, High Sugar Group')
Examples(all_these, 3)
Explanation: The proposed Cereal Group, followed by the proposed Fruits group.
End of explanation
print('High Fat Group')
Examples(all_these, 4)
print('\nMid Everything Group')
Examples(all_these, 5)
Explanation: The proposed Fat group, followed by the proposed Dairy group.
End of explanation |
11,851 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
The Bitrepository quickstart is a package for quickly getting a basic fully functioning bitrepository reference system up on a localhost.
The quickstart will setup a bitrepository comprised of
Step1: Webserver
Per design the bitrepository uses a webdav server for file transfer. The quickstart configuration assumes that a webdav server is available on http
Step2: Runtime requirements
To run the quickstart a few system requirements must be in place.
A java runtime enviroment 1.8 or newer is needed.
For the quickstart.sh script, the curl is needed to retrieve the Tomcat servlet container.
Running the quickstart
When having ensured the above mentioned requirements are in place the quickstart package should be obtained from
Step3: Sanity Tests
Test for checking the sanity of the system as a whole. This means a roundtrip with only sunshine usage of the system getting through all the core functionality
Sunshine roundtrip
The purpose of the test is to make certain that the core functionality works as expected when behaving nicely. The test is designed such that the system state is the same prior to the test starting and after ending it - provided that the test passes. This also means that the test can be repeated an arbitrary number of times as long as it passes.
In the following the same fileID is used in all operations. The only restriction is that the fileID is valid for all components in the test, and is not already present in the collection.
List all fileID's to and note which files are present in the collection.
Step4: Put a file with an allowed fileID which is not already in the collection.
Wait for the put file operation to finish with success for all pillars.
Step5: List all files
Verify that the file put in 1 is present in the listing and that it is present on all (3) pillars in the collection.
Step6: Trigger a collection of audittrails, and verify that the new file is present in the list of audit trail events.
http
Step7: Request a salted checksum request.
Verify that the checksum pillar is unable to perform the operation.
Step8: Verify that all full pillars deliver and agree on the new calculation.
Step9: Verify that the salted checksum differs from the non-salted.
Step10: Get file from pillar
Verify its content is the same as the file that was previously put.
Step11: Replace the file with a new file with different content than the original on each pillar.
Step12: Wait for each pillar to complete the operation.
List all files
Verify that the fileID is still present on all pillars.
Step13: Calculate checksum for the file.
Verify that all pillars deliver and agree on the checksum.
Step14: Verify that the checksum differs from the one for the old file.
Step15: Get file from pillar
Verify that the files content is the same as the file uploaded during the replace action.
Step16: Delete the file on all pillars
Wait for the operation(s) to complete with success
Step17: List all files
Verify that the file is no longer present on any pillars.
Step18: Stopping and restarting the quickstart
Use the quickstart script to stop and start the quickstart components
Step19: Use these commands to stop the docker servers again | Python Code:
!docker run \
--detach \
--rm \
--env 'ACTIVEMQ_MIN_MEMORY=512' \
--env 'ACTIVEMQ_MAX_MEMORY=2048' \
--publish 61616:61616 \
--name activemq \
webcenter/activemq:5.12.0 \
/opt/activemq/bin/activemq console
Explanation: Introduction
The Bitrepository quickstart is a package for quickly getting a basic, fully functioning bitrepository reference system up and running on localhost.
The quickstart will set up a bitrepository composed of:
* Webclient
* AuditTrail service
* Alarm service
* Integrity service
* Monitoring (status) service
* 1 checksum pillar
* 2 reference pillars
As the setup is basic, encryption, authentication, and authorization are not enabled in the quickstart.
Prerequisites and requirements
For the quickstart to work, some prerequisites and requirements must be in place.
All the quickstart components are meant to run on the same machine in a Linux environment. They may be accessible from other machines, provided that firewall rules allow it.
Infrastructure
The quickstart needs some infrastructure, for message exchange and file exchange
Message bus
The bitrepository needs an Apache ActiveMQ broker for sending messages between components. The quickstart assumes that it is accessible on localhost on the default port using a TCP connection, i.e. tcp://localhost:61616.
It must not require any authentication to connect.
The default settings from the Apache ActiveMQ distribution will suffice, but the user/deployer is free to make changes to the ActiveMQ installation as long as the MQ can be reached as described above.
If the MQ is not available on the above URL, the pillars and services simply will not work.
Apache Active MQ can be obtained from the ActiveMQ download site.
You can run this process in docker with this command
End of explanation
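Before starting the pillars it can save time to confirm that the broker port is actually open; the minimal sketch below only verifies TCP connectivity on the quickstart's default tcp://localhost:61616, not that the broker itself is healthy.
import socket
# Raises an exception if nothing is listening on the ActiveMQ port
with socket.create_connection(('localhost', 61616), timeout=5):
    print('ActiveMQ broker is reachable on tcp://localhost:61616')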
!docker run \
--detach \
--rm \
--publish 80:80 \
--name webdav \
blekinge/apache_webdav
Explanation: Webserver
By design the bitrepository uses a webdav server for file transfer. The quickstart configuration assumes that a webdav server is available on http://localhost/dav/.
It must not require any authentication to connect.
If no Webdav server is available, see File Exchange Server Setup for references on setup.
Should there be an HTTP server running on localhost that does not support webdav, or if the use of another webdav server is wanted, then the ReferenceSettings.xml files for the AuditTrailService and CommandLine client need to have their FileExchange section changed to reflect this.
As the file location is specified per request this is not a hard requirement for deployment of the quickstart package, but something that needs to be taken into consideration.
You can run an apache2-based webdav server in Docker with this command:
End of explanation
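Similarly, a quick way to confirm that something WebDAV-capable answers on the assumed http://localhost/dav/ location is to send an OPTIONS request and look for a DAV response header; this sketch uses only the standard library and assumes the quickstart defaults for host, port and path.
import http.client
# WebDAV servers advertise their capabilities in the 'DAV' header of an OPTIONS response
conn = http.client.HTTPConnection('localhost', 80, timeout=5)
conn.request('OPTIONS', '/dav/')
response = conn.getresponse()
print(response.status, response.getheader('DAV'))
conn.close()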
%%bash
wget -Nq "https://sbforge.org/nexus/service/local/artifact/maven/redirect?g=org.bitrepository.reference&a=bitrepository-integration&v=LATEST&r=snapshots&p=tar.gz&c=quickstart&" -O quickstart.tgz
tar -xzf quickstart.tgz
%%bash
cd bitrepository-quickstart
./setup.sh
%alias bitmag bitrepository-quickstart/commandline/bin/bitmag.sh %l
Explanation: Runtime requirements
To run the quickstart a few system requirements must be in place.
A Java runtime environment 1.8 or newer is needed.
For the quickstart.sh script, curl is needed to retrieve the Tomcat servlet container.
Running the quickstart
Once the above-mentioned requirements are in place, the quickstart package should be obtained from: Quickstart (newest release, devel version).
The quickstart tar.gz should be unpacked.
Via the command line, cd to the unpacked directory and run the command "./setup.sh".
The first run of the setup script adapts the configuration files to the deployed destination. Thus the quickstart will stop working if the quickstart directory is moved to another destination after the first run.
Running the setup.sh script does the following:
Adapt the configuration files to the environment (first run only)
Create sub directories and deploy the needed components to them.
Start the pillars
Download a Tomcat server for services and webclient
Deploy services and webclient to Tomcat server and start it.
After the script has finished the system should be accessible through: http://localhost:8080/bitrepository-webclient
End of explanation
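If you are unsure whether the Java requirement is met, the version can be checked from the notebook as well; note that java -version prints to stderr rather than stdout, so that is the stream we show.
import subprocess
# The quickstart needs a Java runtime environment 1.8 or newer
result = subprocess.run(['java', '-version'], capture_output=True, text=True)
print(result.stderr.strip())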
%bitmag get-file-ids -c books
Explanation: Sanity Tests
A test for checking the sanity of the system as a whole. This means a round trip with only sunshine usage of the system, exercising all of the core functionality.
Sunshine roundtrip
The purpose of the test is to make certain that the core functionality works as expected when behaving nicely. The test is designed such that the system state is the same prior to the test starting and after ending it - provided that the test passes. This also means that the test can be repeated an arbitrary number of times as long as it passes.
In the following the same fileID is used in all operations. The only restriction is that the fileID is valid for all components in the test, and is not already present in the collection.
List all fileIDs and note which files are present in the collection.
End of explanation
%bitmag put-file -c books -f README.md
Explanation: Put a file with an allowed fileID which is not already in the collection.
Wait for the put file operation to finish with success for all pillars.
End of explanation
%bitmag get-file-ids -c books
Explanation: List all files
Verify that the previously put file is present in the listing and that it is present on all (3) pillars in the collection.
End of explanation
%bitmag get-checksums -c books -i README.md
Explanation: Trigger a collection of audit trails, and verify that the new file is present in the list of audit trail events.
http://localhost:8080/bitrepository-webclient/audit-trail-service.html
Trigger a full integrity check, and verify that the new file is present and consistent.
http://localhost:8080/bitrepository-webclient/integrity-service.html
Request a MD5 checksum for the file.
Verify that all pillars deliver the requested checksum, and that all pillars agree on the checksum value.
End of explanation
%bitmag get-checksums -c books -i README.md -S abcd -R MD5 -p checksum-pillar
Explanation: Issue a salted checksum request.
Verify that the checksum pillar is unable to perform the operation.
End of explanation
%bitmag get-checksums -c books -i README.md -S abcd -R MD5
Explanation: Verify that all full pillars deliver and agree on the new calculation.
End of explanation
%bitmag get-checksums -c books -i README.md -R MD5
%bitmag get-checksums -c books -i README.md -S 'abcd' -R MD5
Explanation: Verify that the salted checksum differs from the non-salted.
End of explanation
!md5sum README.md
%bitmag get-file -c books -i README.md -l README.md.tmp
!md5sum README.md.tmp
Explanation: Get file from pillar
Verify its content is the same as the file that was previously put.
End of explanation
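The md5sum comparison can also be done in pure Python if the md5sum tool is not available; a small helper built on hashlib performs the same check on the original file and the downloaded copy.
import hashlib
def md5_of(path):
    # Hash the whole file in one go; the quickstart test files are small
    with open(path, 'rb') as handle:
        return hashlib.md5(handle.read()).hexdigest()
md5_of('README.md') == md5_of('README.md.tmp')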
!xmllint bitrepository-quickstart/commandline/logback.xml > logback.changed.xml
oldhash=!md5sum bitrepository-quickstart/commandline/logback.xml | cut -d' ' -f1
%bitmag replace-file -c books -i logback.xml -f logback.changed.xml -C {oldhash} -p file1-pillar
%bitmag replace-file -c books -i logback.xml -f logback.changed.xml -C {oldhash} -p file2-pillar
%bitmag replace-file -c books -i logback.xml -f logback.changed.xml -C {oldhash} -p checksum-pillar
Explanation: Replace the file with a new file with different content than the original on each pillar.
End of explanation
%bitmag get-file-ids -c books
Explanation: Wait for each pillar to complete the operation.
List all files
Verify that the fileID is still present on all pillars.
End of explanation
%bitmag get-checksums -c books -i logback.xml
Explanation: Calculate checksum for the file.
Verify that all pillars deliver and agree on the checksum.
End of explanation
!md5sum bitrepository-quickstart/commandline/logback.xml
%bitmag get-checksums -c books -i logback.xml
Explanation: Verify that the checksum differs from the one for the old file.
End of explanation
%bitmag get-file -c books -i logback.xml -l logback.downloaded.xml
!md5sum logback.downloaded.xml
!md5sum logback.changed.xml
Explanation: Get file from pillar
Verify that the file's content is the same as the file uploaded during the replace action.
End of explanation
oldhash=!md5sum logback.changed.xml | cut -d' ' -f1
%bitmag delete -c books -i logback.xml -C {oldhash} -p file1-pillar
%bitmag delete -c books -i logback.xml -C {oldhash} -p file2-pillar
%bitmag delete -c books -i logback.xml -C {oldhash} -p checksum-pillar
Explanation: Delete the file on all pillars
Wait for the operation(s) to complete with success
End of explanation
%bitmag get-file-ids -c books -i logback.xml
Explanation: List all files
Verify that the file is no longer present on any pillars.
End of explanation
!bitrepository-quickstart/quickstart.sh stop
Explanation: Stopping and restarting the quickstart
Use the quickstart script to stop and start the quickstart components:
End of explanation
!docker stop webdav
!docker stop activemq
!docker ps
Explanation: Use these commands to stop the Docker containers again
End of explanation |
11,852 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The previous Notebook in this series used multi-group mode to perform a calculation with previously defined cross sections. However, in many circumstances the multi-group data is not given and one must instead generate the cross sections for the specific application (or at least verify the use of cross sections from another application).
This Notebook illustrates the use of the openmc.mgxs.Library class specifically for the calculation of MGXS to be used in OpenMC's multi-group mode. This example notebook is therefore very similar to the MGXS Part III notebook, except OpenMC is used as the multi-group solver instead of OpenMOC.
During this process, this notebook will illustrate the following features
Step1: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: Before proceeding lets check the geometry.
Step10: Looks good!
We now must create a geometry that is assigned a root universe and export it to XML.
Step11: With the geometry and materials finished, we now just need to define simulation parameters.
Step12: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our the fuel assembly geometry.
Step14: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections
Step15: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
NOTE
Step16: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
NOTE
Step17: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
Step18: Now that the Library has been setup let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
If no error is raised, then we have a good set of data.
Step19: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
Step20: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE
Step21: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
Step22: Time to run the calculation and get our results!
Step23: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
Step24: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file.
Step25: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step26: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Multi-Group OpenMC Calculation
We will now use the Library to produce a multi-group cross section data set for use by the OpenMC multi-group solver.
Note that since this simulation included so few histories, it is reasonable to expect some data has not had any scores, and thus we could see division by zero errors. This will show up as a runtime warning in the following step. The Library class is designed to gracefully handle these scenarios.
Step27: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
In this example we have created macroscopic cross sections (by material), and thus we will need to change the material definitions accordingly.
First we will create the new materials.xml file.
Step28: No geometry file neeeds to be written as the continuous-energy file is correctly defined for the multi-group case as well.
Next, we can make the changes we need to the simulation parameters.
These changes are limited to telling OpenMC to run a multi-group vice contrinuous-energy calculation.
Step29: Lets clear the tallies file so it doesn't include tallies for re-generating a multi-group library, but then put back in a tally for the fission mesh.
Step30: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
Step31: At this point, the problem is set up and we can run the multi-group calculation.
Step32: Results Comparison
Now we can compare the multi-group and continuous-energy results.
We will begin by loading the multi-group statepoint file we just finished writing and extracting the calculated keff.
Step33: Next, we can load the continuous-energy eigenvalue for comparison.
Step34: Lets compare the two eigenvalues, including their bias
Step35: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply to the very few histories being used in these example problems. An additional mismatch is always inherent in the practical application of multi-group theory due to the high degree of approximations inherent in that method.
Pin Power Visualizations
Next we will visualize the pin power results obtained from both the Continuous-Energy and Multi-Group OpenMC calculations.
First, we extract volume-integrated fission rates from the Multi-Group calculation's mesh fission rate tally for each pin cell in the fuel assembly.
Step36: We can now do the same for the Continuous-Energy results.
Step37: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
Step38: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Scattering Anisotropy Treatments
We will next show how we can work with the scattering angular distributions. OpenMC's MG solver has the capability to use group-to-group angular distributions which are represented as any of the following
Step39: Now we can re-run OpenMC to obtain our results
Step40: And then get the eigenvalue differences from the Continuous-Energy and P3 MG solution
Step41: Mixed Scattering Representations
OpenMC's Multi-Group mode also includes a feature where not every data in the library is required to have the same scattering treatment. For example, we could represent the water with P3 scattering, and the fuel and cladding with P0 scattering. This series will show how this can be done.
First we will convert the data to P0 scattering, unless its water, then we will leave that as P3 data.
Step42: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
Step43: Finally we will re-set our max_order parameter of our openmc.Settings object to our maximum order so that OpenMC will use whatever scattering data is available in the library.
After we do this we can re-run the simulation.
Step44: For a final step we can again obtain the eigenvalue differences from this case and compare with the same from the P3 MG solution | Python Code:
import matplotlib.pyplot as plt
import numpy as np
import os
import openmc
%matplotlib inline
Explanation: The previous Notebook in this series used multi-group mode to perform a calculation with previously defined cross sections. However, in many circumstances the multi-group data is not given and one must instead generate the cross sections for the specific application (or at least verify the use of cross sections from another application).
This Notebook illustrates the use of the openmc.mgxs.Library class specifically for the calculation of MGXS to be used in OpenMC's multi-group mode. This example notebook is therefore very similar to the MGXS Part III notebook, except OpenMC is used as the multi-group solver instead of OpenMOC.
During this process, this notebook will illustrate the following features:
Calculation of multi-group cross sections for a fuel assembly
Automated creation and storage of MGXS with openmc.mgxs.Library
Steady-state pin-by-pin fission rates comparison between continuous-energy and multi-group OpenMC.
Modification of the scattering data in the library to show the flexibility of the multi-group solver
Generate Input Files
End of explanation
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_element('H', 4.9457e-2)
water.add_element('O', 2.4732e-2)
water.add_element('B', 8.0042e-6)
Explanation: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
End of explanation
# Instantiate a Materials object
materials_file = openmc.Materials((fuel, zircaloy, water))
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
# The x0 and y0 parameters (0. and 0.) are the default values for an
# openmc.ZCylinder object. We could therefore leave them out to no effect
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:, :] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
root_universe.plot(origin=(0., 0., 0.), width=(21.42, 21.42), pixels=(500, 500), color_by='material')
Explanation: Before proceeding let's check the geometry.
End of explanation
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: Looks good!
We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 600
inactive = 50
particles = 3000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.run_mode = 'eigenvalue'
settings_file.verbosity = 4
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.source.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters.
End of explanation
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups([0., 0.625, 20.0e6])
Explanation: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
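The group boundaries can be read back from the object to confirm them; the 0.625 eV value is the usual thermal cutoff separating the two groups, and the upper bound of 20.0e6 eV is 20 MeV.
# Energy group boundaries in eV, as passed to the constructor
groups.group_edges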
# Initialize a 2-group MGXS Library for OpenMC
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = groups
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our fuel assembly geometry.
End of explanation
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
Explanation: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections: "total", "absorption", "nu-fission", '"fission", "nu-scatter matrix", "multiplicity matrix", and "chi".
The "multiplicity matrix" type is a relatively rare cross section type. This data is needed to provide OpenMC's multi-group mode with additional information needed to accurately treat scattering multiplication (i.e., (n,xn) reactions)), including how this multiplication varies depending on both incoming and outgoing neutron energies.
End of explanation
# Specify a "material" domain type for the cross section tally filters
mgxs_lib.domain_type = "material"
# Specify the material domains over which to compute multi-group cross sections
mgxs_lib.domains = geometry.get_all_materials().values()
Explanation: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
NOTE: By default, the Library class will instantiate MGXS objects for each and every domain (material, cell, universe, or mesh) in the geometry of interest. However, one may specify a subset of these domains to the Library.domains property.
End of explanation
# Do not compute cross sections on a nuclide-by-nuclide basis
mgxs_lib.by_nuclide = False
Explanation: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
NOTE: The default value of the by_nuclide parameter is False, so the following step is not necessary but is included for illustrative purposes.
End of explanation
# Set the Legendre order to 3 for P3 scattering
mgxs_lib.legendre_order = 3
Explanation: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
End of explanation
# Check the library - if no errors are raised, then the library is satisfactory.
mgxs_lib.check_library_for_openmc_mgxs()
Explanation: Now that the Library has been setup let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
If no error is raised, then we have a good set of data.
End of explanation
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
Explanation: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
Explanation: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE: At this point the Library has constructed nearly 100 distinct Tally objects. The overhead to tally in OpenMC scales as O(N) for N tallies, which can become a bottleneck for large tally datasets. To compensate for this, the Python API's Tally, Filter and Tallies classes allow for the smart merging of tallies when possible. The Library class supports this runtime optimization with the use of the optional merge parameter (False by default) for the Library.add_to_tallies_file(...) method, as shown below.
End of explanation
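One way to see the effect of the merging described above is simply to count the Tally objects that ended up in the collection; openmc.Tallies behaves like a Python list, so len() works. This is only a sanity check and is not required for the run.
# Far fewer than the ~100 unmerged MGXS tallies thanks to merge=True
print('Number of tallies after merging:', len(tallies_file))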
# Instantiate a tally Mesh
mesh = openmc.Mesh()
mesh.type = 'regular'
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.upper_right = [+10.71, +10.71]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally, merge=True)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
Explanation: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
End of explanation
# Run OpenMC
openmc.run()
Explanation: Time to run the calculation and get our results!
End of explanation
# Move the statepoint File
ce_spfile = './statepoint_ce.h5'
os.rename('statepoint.' + str(batches) + '.h5', ce_spfile)
# Move the Summary file
ce_sumfile = './summary_ce.h5'
os.rename('summary.h5', ce_sumfile)
Explanation: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
End of explanation
# Load the statepoint file
sp = openmc.StatePoint(ce_spfile, autolink=False)
# Load the summary file in its new location
su = openmc.Summary(ce_sumfile)
sp.link_with_summary(su)
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file.
End of explanation
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
# Create a MGXS File which can then be written to disk
mgxs_file = mgxs_lib.create_mg_library(xs_type='macro', xsdata_names=['fuel', 'zircaloy', 'water'])
# Write the file to disk using the default filename of "mgxs.h5"
mgxs_file.export_to_hdf5()
Explanation: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Multi-Group OpenMC Calculation
We will now use the Library to produce a multi-group cross section data set for use by the OpenMC multi-group solver.
Note that since this simulation included so few histories, it is reasonable to expect some data has not had any scores, and thus we could see division by zero errors. This will show up as a runtime warning in the following step. The Library class is designed to gracefully handle these scenarios.
End of explanation
# Re-define our materials to use the multi-group macroscopic data
# instead of the continuous-energy data.
# 1.6% enriched fuel UO2
fuel_mg = openmc.Material(name='UO2', material_id=1)
fuel_mg.add_macroscopic('fuel')
# cladding
zircaloy_mg = openmc.Material(name='Clad', material_id=2)
zircaloy_mg.add_macroscopic('zircaloy')
# moderator
water_mg = openmc.Material(name='Water', material_id=3)
water_mg.add_macroscopic('water')
# Finally, instantiate our Materials object
materials_file = openmc.Materials((fuel_mg, zircaloy_mg, water_mg))
# Set the location of the cross sections file
materials_file.cross_sections = 'mgxs.h5'
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
In this example we have created macroscopic cross sections (by material), and thus we will need to change the material definitions accordingly.
First we will create the new materials.xml file.
End of explanation
# Set the energy mode
settings_file.energy_mode = 'multi-group'
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: No geometry file needs to be written, as the continuous-energy file is correctly defined for the multi-group case as well.
Next, we can make the changes we need to the simulation parameters.
These changes are limited to telling OpenMC to run a multi-group rather than a continuous-energy calculation.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
# Add fission and flux mesh to tally for plotting using the same mesh we've already defined
mesh_tally = openmc.Tally(name='mesh tally')
mesh_tally.filters = [openmc.MeshFilter(mesh)]
mesh_tally.scores = ['fission']
tallies_file.append(mesh_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: Lets clear the tallies file so it doesn't include tallies for re-generating a multi-group library, but then put back in a tally for the fission mesh.
End of explanation
# First lets plot the fuel data
# We will first add the continuous-energy data
fig = openmc.plot_xs(fuel, ['total'])
# We will now add in the corresponding multi-group data and show the result
openmc.plot_xs(fuel_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# Then repeat for the zircaloy data
fig = openmc.plot_xs(zircaloy, ['total'])
openmc.plot_xs(zircaloy_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# And finally repeat for the water data
fig = openmc.plot_xs(water, ['total'])
openmc.plot_xs(water_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
Explanation: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
End of explanation
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: At this point, the problem is set up and we can run the multi-group calculation.
End of explanation
# Move the StatePoint File
mg_spfile = './statepoint_mg.h5'
os.rename('statepoint.' + str(batches) + '.h5', mg_spfile)
# Move the Summary file
mg_sumfile = './summary_mg.h5'
os.rename('summary.h5', mg_sumfile)
# Load the renamed statepoint file and get the keff value
mgsp = openmc.StatePoint(mg_spfile, autolink=False)
# Load the summary file in its new location
mgsu = openmc.Summary(mg_sumfile)
mgsp.link_with_summary(mgsu)
# Get keff
mg_keff = mgsp.k_combined
Explanation: Results Comparison
Now we can compare the multi-group and continuous-energy results.
We will begin by loading the multi-group statepoint file we just finished writing and extracting the calculated keff.
End of explanation
ce_keff = sp.k_combined
Explanation: Next, we can load the continuous-energy eigenvalue for comparison.
End of explanation
bias = 1.0E5 * (ce_keff - mg_keff)
print('Continuous-Energy keff = {0:1.6f}'.format(ce_keff))
print('Multi-Group keff = {0:1.6f}'.format(mg_keff))
print('bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
Explanation: Let's compare the two eigenvalues, including their bias.
End of explanation
# Get the OpenMC fission rate mesh tally data
mg_mesh_tally = mgsp.get_tally(name='mesh tally')
mg_fission_rates = mg_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
mg_fission_rates.shape = (17,17)
# Normalize to the average pin power
mg_fission_rates /= np.mean(mg_fission_rates[mg_fission_rates > 0.])
Explanation: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply due to the very few histories being used in these example problems. An additional mismatch is always inherent in the practical application of multi-group theory due to the high degree of approximation in that method.
Pin Power Visualizations
Next we will visualize the pin power results obtained from both the Continuous-Energy and Multi-Group OpenMC calculations.
First, we extract volume-integrated fission rates from the Multi-Group calculation's mesh fission rate tally for each pin cell in the fuel assembly.
End of explanation
# Get the OpenMC fission rate mesh tally data
ce_mesh_tally = sp.get_tally(name='mesh tally')
ce_fission_rates = ce_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
ce_fission_rates.shape = (17,17)
# Normalize to the average pin power
ce_fission_rates /= np.mean(ce_fission_rates[ce_fission_rates > 0.])
Explanation: We can now do the same for the Continuous-Energy results.
End of explanation
# Force zeros to be NaNs so their values are not included when matplotlib calculates
# the color scale
ce_fission_rates[ce_fission_rates == 0.] = np.nan
mg_fission_rates[mg_fission_rates == 0.] = np.nan
# Plot the CE fission rates in the left subplot
fig = plt.subplot(121)
plt.imshow(ce_fission_rates, interpolation='none', cmap='jet')
plt.title('Continuous-Energy Fission Rates')
# Plot the MG fission rates in the right subplot
fig2 = plt.subplot(122)
plt.imshow(mg_fission_rates, interpolation='none', cmap='jet')
plt.title('Multi-Group Fission Rates')
Explanation: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
End of explanation
# Set the maximum scattering order to 0 (i.e., isotropic scattering)
settings_file.max_order = 0
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Scattering Anisotropy Treatments
We will next show how we can work with the scattering angular distributions. OpenMC's MG solver has the capability to use group-to-group angular distributions which are represented as any of the following: a truncated Legendre series of up to the 10th order, a histogram distribution, and a tabular distribution. Any combination of these representations can be used by OpenMC during the transport process, so long as all constituents of a given material use the same representation. This means it is possible to have water represented by a tabular distribution and fuel represented by a Legendre if so desired.
Note: To have the highest runtime performance OpenMC natively converts Legendre series to a tabular distribution before the transport begins. This default functionality can be turned off with the tabular_legendre element of the settings.xml file (or for the Python API, the openmc.Settings.tabular_legendre attribute).
This section will examine the following:
- Re-run the MG-mode calculation with P0 scattering everywhere using the openmc.Settings.max_order attribute
- Re-run the problem with only the water represented with P3 scattering and P0 scattering for the remaining materials using the Python API's ability to convert between formats.
Global P0 Scattering
First we begin by re-running with P0 scattering (i.e., isotropic) everywhere. If a global maximum order is requested, the most effective way to do this is to use the max_order attribute of our openmc.Settings object.
End of explanation
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: Now we can re-run OpenMC to obtain our results
End of explanation
# Move the statepoint File
mgp0_spfile = './statepoint_mg_p0.h5'
os.rename('statepoint.' + str(batches) + '.h5', mgp0_spfile)
# Move the Summary file
mgp0_sumfile = './summary_mg_p0.h5'
os.rename('summary.h5', mgp0_sumfile)
# Load the last statepoint file and keff value
mgsp_p0 = openmc.StatePoint(mgp0_spfile, autolink=False)
# Get keff
mg_p0_keff = mgsp_p0.k_combined
bias_p0 = 1.0E5 * (ce_keff - mg_p0_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('P0 bias [pcm]: {0:1.1f}'.format(bias_p0.nominal_value))
Explanation: And then get the eigenvalue differences from the Continuous-Energy and P3 MG solution
End of explanation
# Convert the zircaloy and fuel data to P0 scattering
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name != 'water':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('legendre', 0)
Explanation: Mixed Scattering Representations
OpenMC's Multi-Group mode also includes a feature where not every dataset in the library is required to have the same scattering treatment. For example, we could represent the water with P3 scattering, and the fuel and cladding with P0 scattering. This section will show how this can be done.
First we will convert the data to P0 scattering, except for the water, which we will leave as P3 data.
End of explanation
# Convert the formats as discussed
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name == 'zircaloy':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('histogram', 2)
elif xsdata.name == 'fuel':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('tabular', 2)
mgxs_file.export_to_hdf5('mgxs.h5')
Explanation: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
End of explanation
settings_file.max_order = None
# Export to "settings.xml"
settings_file.export_to_xml()
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: Finally we will reset the max_order parameter of our openmc.Settings object (here, to None) so that OpenMC will use whatever scattering data is available in the library, to whatever order it was stored.
After we do this we can re-run the simulation.
End of explanation
# Load the last statepoint file and keff value
mgsp_mixed = openmc.StatePoint('./statepoint.' + str(batches) + '.h5')
mg_mixed_keff = mgsp_mixed.k_combined
bias_mixed = 1.0E5 * (ce_keff - mg_mixed_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('Mixed Scattering bias [pcm]: {0:1.1f}'.format(bias_mixed.nominal_value))
Explanation: For a final step we can again obtain the eigenvalue differences from this case and compare with the same from the P3 MG solution
End of explanation |
11,853 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
Step1: Note
Step2: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
Step3: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards
Step4: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation
Step5: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
Step6: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once meeting that goal. The game ends if the pole tilts over too far, or if the cart moves too far the left or right. When a game ends, we'll start a new episode. Now, to train the agent
Step7: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
Step8: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
Step9: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
Step10: Testing
Let's checkout how our trained agent plays the game. | Python Code:
import gym
import tensorflow as tf
import numpy as np
Explanation: Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
End of explanation
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory with this notebook. I've included gym as a submodule, so you can run git submodule --init --recursive to pull the contents into the gym repo.
End of explanation
env.reset()
rewards = []
for _ in range(100):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
env.close()
Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
End of explanation
print(rewards[-20:])
Explanation: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards:
End of explanation
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
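# A plain-numpy sketch of what the one-hot/reduce_sum lines above compute (hypothetical values):
# each row of demo_output holds the Q-values for one state, and the one-hot action mask picks
# out Q(s, a) for the action that was actually taken.
demo_output = np.array([[0.2, 0.9], [0.5, 0.1]])    # Q-values for 2 states x 2 actions
demo_one_hot = np.array([[0.0, 1.0], [1.0, 0.0]])   # chosen actions 1 and 0, one-hot encoded
print(np.sum(demo_output * demo_one_hot, axis=1))   # -> [0.9 0.5]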
Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the angle and angular velocity of the pole. These are all real-valued numbers, so ignoring floating point precision, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
End of explanation
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
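# A tiny usage sketch of the buffer above (demo values only, not part of the training code):
demo_memory = Memory(max_size=3)
for demo_experience in [('s0', 0, 1.0, 's1'), ('s1', 1, 1.0, 's2'),
                        ('s2', 0, 1.0, 's3'), ('s3', 1, 1.0, 's4')]:
    demo_memory.add(demo_experience)
print(len(demo_memory.buffer))   # 3 -- the oldest experience was pushed out
print(demo_memory.sample(2))     # a random mini-batch of 2 transitions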
Explanation: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
End of explanation
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
Explanation: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames. So we can start a new episode once we meet that goal. The game ends if the pole tilts over too far, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t$ = 1, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.
End of explanation
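As a quick illustration (a sketch, not part of the original notebook) of how the $\epsilon$-greedy exploration probability described above decays under these hyperparameters:
# Sketch: exploration probability at a few illustrative global step counts
for demo_step in (0, 5000, 20000, 50000):
    demo_p = explore_stop + (explore_start - explore_stop) * np.exp(-decay_rate * demo_step)
    print('step {0:6d} -> explore_p {1:.3f}'.format(demo_step, demo_p))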
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
Explanation: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
End of explanation
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
loss = 0  # so the first episode can be reported even if it ends before the first training update
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
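# Note: this assumes the 'checkpoints' directory already exists; if it does not, the save can fail,
# so creating it once beforehand (import os; os.makedirs('checkpoints', exist_ok=True)) is a safe guard.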
Explanation: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
Explanation: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
End of explanation
test_episodes = 10
test_max_steps = 400
state = env.reset()  # capture a fresh initial observation instead of reusing the last training state
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
Explanation: Testing
Let's check out how our trained agent plays the game.
End of explanation |
11,854 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Classes
So far you have learned about Python's core data types
Step1: One of the first things you do with a class is to define the __init__() method. The __init__() method sets the values for any parameters that need to be defined when an object is first created. The self part will be explained later; basically, it's a syntax that allows you to access a variable from anywhere else in the class.
The Rocket class stores two pieces of information so far, but it can't do anything. The first behavior to define is a core behavior of a rocket
Step2: The Rocket class can now store some information, and it can do something. But this code has not actually created a rocket yet. Here is how you actually make a rocket
Step3: To actually use a class, you create a variable such as my_rocket. Then you set that equal to the name of the class, with an empty set of parentheses. Python creates an object from the class. An object is a single instance of the Rocket class; it has a copy of each of the class's variables, and it can do any action that is defined for the class. In this case, you can see that the variable my_rocket is a Rocket object from the __main__ program file, which is stored at a particular location in memory.
Once you have a class, you can define an object and use its methods. Here is how you might define a rocket and have it start to move up
Step4: To access an object's variables or methods, you give the name of the object and then use dot notation to access the variables and methods. So to get the y-value of my_rocket, you use my_rocket.y. To use the move_up() method on my_rocket, you write my_rocket.move_up().
Once you have a class defined, you can create as many objects from that class as you want. Each object is its own instance of that class, with its own separate variables. All of the objects are capable of the same behavior, but each object's particular actions do not affect any of the other objects. Here is how you might make a simple fleet of rockets
Step5: You can see that each rocket is at a separate place in memory. By the way, if you understand list comprehensions, you can make the fleet of rockets in one line
Step6: You can prove that each rocket has its own x and y values by moving just one of the rockets
Step7: The syntax for classes may not be very clear at this point, but consider for a moment how you might create a rocket without using classes. You might store the x and y values in a dictionary, but you would have to write a lot of ugly, hard-to-maintain code to manage even a small set of rockets. As more features become incorporated into the Rocket class, you will see how much more efficiently real-world objects can be modeled with classes than they could be using just lists and dictionaries.
top
Classes in Python 2.7
When you write a class in Python 2.7, you should always include the word object in parentheses when you define the class. This makes sure your Python 2.7 classes act like Python 3 classes, which will be helpful as your projects grow more complicated.
The simple version of the rocket class would look like this in Python 2.7
Step8: This syntax will work in Python 3 as well.
top
Exercises
Rocket With No Class
Using just what you already know, try to write a program that simulates the above example about rockets.
Store an x and y value for a rocket.
Store an x and y value for each rocket in a set of 5 rockets. Store these 5 rockets in a list.
Don't take this exercise too far; it's really just a quick exercise to help you understand how useful the class structure is, especially as you start to see more capability added to the Rocket class.
top
Object-Oriented terminology
Classes are part of a programming paradigm called object-oriented programming. Object-oriented programming, or OOP for short, focuses on building reusable blocks of code called classes. When you want to use a class in one of your programs, you make an object from that class, which is where the phrase "object-oriented" comes from. Python itself is not tied to object-oriented programming, but you will be using objects in most or all of your Python projects. In order to understand classes, you have to understand some of the language that is used in OOP.
General terminology
A class is a body of code that defines the attributes and behaviors required to accurately model something you need for your program. You can model something from the real world, such as a rocket ship or a guitar string, or you can model something from a virtual world such as a rocket in a game, or a set of physical laws for a game engine.
An attribute is a piece of information. In code, an attribute is just a variable that is part of a class.
A behavior is an action that is defined within a class. These are made up of methods, which are just functions that are defined for the class.
An object is a particular instance of a class. An object has a certain set of values for all of the attributes (variables) in the class. You can have as many objects as you want for any one class.
There is much more to know, but these words will help you get started. They will make more sense as you see more examples, and start to use classes on your own.
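As a quick, informal mapping of these terms onto the Rocket example used throughout this section:
# class             -> Rocket (the body of code defining attributes and behaviors)
# attributes        -> self.x, self.y (the rocket's position)
# behavior / method -> move_up() (an action the rocket can take)
# object            -> my_rocket = Rocket() (one particular rocket made from the class)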
top
A closer look at the Rocket class
Now that you have seen a simple example of a class, and have learned some basic OOP terminology, it will be helpful to take a closer look at the Rocket class.
The __init__() method
Here is the initial code block that defined the Rocket class
Step9: The first line shows how a class is created in Python. The keyword class tells Python that you are about to define a class. The rules for naming a class are the same rules you learned about naming variables, but there is a strong convention among Python programmers that classes should be named using CamelCase. If you are unfamiliar with CamelCase, it is a convention where each letter that starts a word is capitalized, with no underscores in the name. The name of the class is followed by a set of parentheses. These parentheses will be empty for now, but later they may contain a class upon which the new class is based.
It is good practice to write a comment at the beginning of your class, describing the class. There is a more formal syntax for documenting your classes, but you can wait a little bit to get that formal. For now, just write a comment at the beginning of your class summarizing what you intend the class to do. Writing more formal documentation for your classes will be easy later if you start by writing simple comments now.
Function names that start and end with two underscores are special built-in functions that Python uses in certain ways. The __init__() method is one of these special functions. It is called automatically when you create an object from your class. The __init__() method lets you make sure that all relevant attributes are set to their proper values when an object is created from the class, before the object is used. In this case, the __init__() method initializes the x and y values of the Rocket to 0.
The self keyword often takes people a little while to understand. The word "self" refers to the current object that you are working with. When you are writing a class, it lets you refer to certain attributes from any other part of the class. Basically, all methods in a class need the self object as their first argument, so they can access any attribute that is part of the class.
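One small sketch that often makes self click (it assumes the Rocket class with move_up() from the earlier examples; not part of the original lesson):
# These two calls do the same thing; Python fills in self automatically for the first one.
my_rocket = Rocket()
my_rocket.move_up()
Rocket.move_up(my_rocket)
print(my_rocket.y)    # 2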
Now let's take a closer look at a method.
A simple method
Here is the method that was defined for the Rocket class
Step10: A method is just a function that is part of a class. Since it is just a function, you can do anything with a method that you learned about with functions. You can accept positional arguments, keyword arguments, an arbitrary list of argument values, an arbitrary dictionary of arguments, or any combination of these. Your arguments can return a value or a set of values if you want, or they can just do some work without returning any values.
Each method has to accept one argument by default, the value self. This is a reference to the particular object that is calling the method. This self argument gives you access to the calling object's attributes. In this example, the self argument is used to access a Rocket object's y-value. That value is increased by 1, every time the method move_up() is called by a particular Rocket object. This is probably still somewhat confusing, but it should start to make sense as you work through your own examples.
If you take a second look at what happens when a method is called, things might make a little more sense
Step11: In this example, a Rocket object is created and stored in the variable my_rocket. After this object is created, its y value is printed. The value of the attribute y is accessed using dot notation. The phrase my_rocket.y asks Python to return "the value of the variable y attached to the object my_rocket".
After the object my_rocket is created and its initial y-value is printed, the method move_up() is called. This tells Python to apply the method move_up() to the object my_rocket. Python finds the y-value associated with my_rocket and adds 1 to that value. This process is repeated several times, and you can see from the output that the y-value is in fact increasing.
top
Making multiple objects from a class
One of the goals of object-oriented programming is to create reusable code. Once you have written the code for a class, you can create as many objects from that class as you need. It is worth mentioning at this point that classes are usually saved in a separate file, and then imported into the program you are working on. So you can build a library of classes, and use those classes over and over again in different programs. Once you know a class works well, you can leave it alone and know that the objects you create in a new program are going to work as they always have.
You can see this "code reusability" already when the Rocket class is used to make more than one Rocket object. Here is the code that made a fleet of Rocket objects
Step12: If you are comfortable using list comprehensions, go ahead and use those as much as you can. I'd rather not assume at this point that everyone is comfortable with comprehensions, so I will use the slightly longer approach of declaring an empty list, and then using a for loop to fill that list. That can be done slightly more efficiently than the previous example, by eliminating the temporary variable new_rocket
Step13: What exactly happens in this for loop? The line my_rockets.append(Rocket()) is executed 5 times. Each time, a new Rocket object is created and then added to the list my_rockets. The __init__() method is executed once for each of these objects, so each object gets its own x and y value. When a method is called on one of these objects, the self variable allows access to just that object's attributes, and ensures that modifying one object does not affect any of the other objecs that have been created from the class.
Each of these objects can be worked with individually. At this point we are ready to move on and see how to add more functionality to the Rocket class. We will work slowly, and give you the chance to start writing your own simple classes.
A quick check-in
If all of this makes sense, then the rest of your work with classes will involve learning a lot of details about how classes can be used in more flexible and powerful ways. If this does not make any sense, you could try a few different things
Step14: All the __init__() method does so far is set the x and y values for the rocket to 0. We can easily add a couple keyword arguments so that new rockets can be initialized at any position
Step15: Now when you create a new Rocket object you have the choice of passing in arbitrary initial values for x and y
Step16: top
Accepting parameters in a method
The __init__ method is just a special method that serves a particular purpose, which is to help create new objects from a class. Any method in a class can accept parameters of any kind. With this in mind, the move_up() method can be made much more flexible. By accepting keyword arguments, the move_up() method can be rewritten as a more general move_rocket() method. This new method will allow the rocket to be moved any amount, in any direction
Step17: The parameters for the move_rocket() method are named x_increment and y_increment rather than x and y. It's good to emphasize that these are changes in the x and y position, not new values for the actual position of the rocket. By carefully choosing the right default values, we can define a meaningful default behavior. If someone calls the method move_rocket() with no parameters, the rocket will simply move up one unit in the y-direction. Note that this method can be given negative values to move the rocket left or right
Step18: top
Adding a new method
One of the strengths of object-oriented programming is the ability to closely model real-world phenomena by adding appropriate attributes and behaviors to classes. One of the jobs of a team piloting a rocket is to make sure the rocket does not get too close to any other rockets. Let's add a method that will report the distance from one rocket to any other rocket.
If you are not familiar with distance calculations, there is a fairly simple formula to tell the distance between two points if you know the x and y values of each point. This new method performs that calculation, and then returns the resulting distance.
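For reference, the distance formula the new method relies on, with a made-up pair of points (the later code applies the same formula to two Rocket objects):
from math import sqrt
# distance between (x1, y1) and (x2, y2) is sqrt((x1-x2)**2 + (y1-y2)**2)
print(sqrt((3 - 0)**2 + (4 - 0)**2))    # 5.0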
Step19: Hopefully these short refinements show that you can extend a class' attributes and behavior to model the phenomena you are interested in as closely as you want. The rocket could have a name, a crew capacity, a payload, a certain amount of fuel, and any number of other attributes. You could define any behavior you want for the rocket, including interactions with other rockets and launch facilities, gravitational fields, and whatever you need it to! There are techniques for managing these more complex interactions, but what you have just seen is the core of object-oriented programming.
At this point you should try your hand at writing some classes of your own. After trying some exercises, we will look at object inheritance, and then you will be ready to move on for now.
top
<a id="Exercises-refining"></a>
Exercises
Your Own Rocket 2
There are enough new concepts here that you might want to try re-creating the Rocket class as it has been developed so far, looking at the examples as little as possible. Once you have your own version, regardless of how much you needed to look at the example, you can modify the class and explore the possibilities of what you have already learned.
Re-create the Rocket class as it has been developed so far
Step20: When a new class is based on an existing class, you write the name of the parent class in parentheses when you define the new class
Step21: The __init__() function of the new class needs to call the __init__() function of the parent class. The __init__() function of the new class needs to accept all of the parameters required to build an object from the parent class, and these parameters need to be passed to the __init__() function of the parent class. The super().__init__() function takes care of this
Step22: The super() function passes the self argument to the parent class automatically. You could also do this by explicitly naming the parent class when you call the __init__() function, but you then have to include the self argument manually
Step23: This might seem a little easier to read, but it is preferable to use the super() syntax. When you use super(), you don't need to explicitly name the parent class, so your code is more resilient to later changes. As you learn more about classes, you will be able to write child classes that inherit from multiple parent classes, and the super() function will call the parent classes' __init__() functions for you, in one line. This explicit approach to calling the parent class' __init__() function is included so that you will be less confused if you see it in someone else's code.
The output above shows that a new Shuttle object was created. This new Shuttle object can store the number of flights completed, but it also has all of the functionality of the Rocket class
Step24: Inheritance is a powerful feature of object-oriented programming. Using just what you have seen so far about classes, you can model an incredible variety of real-world and virtual phenomena with a high degree of accuracy. The code you write has the potential to be stable and reusable in a variety of applications.
top
Inheritance in Python 2.7
The super() method has a slightly different syntax in Python 2.7
Step25: Notice that you have to explicitly pass the arguments NewClass and self when you call super() in Python 2.7. The SpaceShuttle class would look like this
Step26: This syntax works in Python 3 as well.
top
<a id="Exercises-inheritance"></a>
Exercises
Student Class
Start with your program from Person Class.
Make a new class called Student that inherits from Person.
Define some attributes that a student has, which other people don't have.
A student has a school they are associated with, a graduation year, a gpa, and other particular attributes.
Create a Student object, and prove that you have used inheritance correctly.
Set some attribute values for the student, that are only coded in the Person class.
Set some attribute values for the student, that are only coded in the Student class.
Print the values for all of these attributes.
Refining Shuttle
Take the latest version of the Shuttle class. Extend it.
Add more attributes that are particular to shuttles such as maximum number of flights, capability of supporting spacewalks, and capability of docking with the ISS.
Add one more method to the class, that relates to shuttle behavior. This method could simply print a statement, such as "Docking with the ISS," for a dock_ISS() method.
Prove that your refinements work by creating a Shuttle object with these attributes, and then call your new method.
top
Modules and classes
Now that you are starting to work with classes, your files are going to grow longer. This is good, because it means your programs are probably doing more interesting things. But it is bad, because longer files can be more difficult to work with. Python allows you to save your classes in another file and then import them into the program you are working on. This has the added advantage of isolating your classes into files that can be used in any number of different programs. As you use your classes repeatedly, the classes become more reliable and complete overall.
Storing a single class in a module
When you save a class into a separate file, that file is called a module. You can have any number of classes in a single module. There are a number of ways you can then import the class you are interested in.
Start out by saving just the Rocket class into a file called rocket.py. Notice the naming convention being used here
Step27: Make a separate file called rocket_game.py. If you are more interested in science than games, feel free to call this file something like rocket_simulation.py. Again, to use standard naming conventions, make sure you are using a lowercase_underscore name for this file.
Step28: This is a really clean and uncluttered file. A rocket is now something you can define in your programs, without the details of the rocket's implementation cluttering up your file. You don't have to include all the class code for a rocket in each of your files that deals with rockets; the code defining rocket attributes and behavior lives in one file, and can be used anywhere.
The first line tells Python to look for a file called rocket.py. It looks for that file in the same directory as your current program. You can put your classes in other directories, but we will get to that convention a bit later. Notice that you do not include the .py file extension in the import statement.
When Python finds the file rocket.py, it looks for a class called Rocket. When it finds that class, it imports that code into the current file, without you ever seeing that code. You are then free to use the class Rocket as you have seen it used in previous examples.
top
Storing multiple classes in a module
A module is simply a file that contains one or more classes or functions, so the Shuttle class actually belongs in the rocket module as well
Step29: Now you can import the Rocket and the Shuttle class, and use them both in a clean uncluttered program file
Step30: The first line tells Python to import both the Rocket and the Shuttle classes from the rocket module. You don't have to import every class in a module; you can pick and choose the classes you care to use, and Python will only spend time processing those particular classes.
A number of ways to import modules and classes
There are several ways to import modules and classes, and each has its own merits.
import module_name
The syntax for importing classes that was just shown
Step31: is straightforward, and is used quite commonly. It allows you to use the class names directly in your program, so you have very clean and readable code. This can be a problem, however, if the names of the classes you are importing conflict with names that have already been used in the program you are working on. This is unlikely to happen in the short programs you have been seeing here, but if you were working on a larger program it is quite possible that the class you want to import from someone else's work would happen to have a name you have already used in your program. In this case, you can use simply import the module itself
Step32: The general syntax for this kind of import is
Step33: After this, classes are accessed using dot notation
Step34: This prevents some name conflicts. If you were reading carefully however, you might have noticed that the variable name rocket in the previous example had to be changed because it has the same name as the module itself. This is not good, because in a longer program that could mean a lot of renaming.
import module_name as local_module_name
There is another syntax for imports that is quite useful
Step35: When you are importing a module into one of your projects, you are free to choose any name you want for the module in your project. So the last example could be rewritten in a way that the variable name rocket would not need to be changed
Step36: This approach is often used to shorten the name of the module, so you don't have to type a long module name before each class name that you want to use. But it is easy to shorten a name so much that you force people reading your code to scroll to the top of your file and see what the shortened name stands for. In this example,
Step37: leads to much more readable code than something like
Step38: from module_name import *
There is one more import syntax that you should be aware of, but you should probably avoid using. This syntax imports all of the available classes and functions in a module
Step39: This is not recommended, for a couple reasons. First of all, you may have no idea what all the names of the classes and functions in a module are. If you accidentally give one of your variables the same name as a name from the module, you will have naming conflicts. Also, you may be importing way more code into your program than you need.
If you really need all the functions and classes from a module, just import the module and use the module_name.ClassName syntax in your program.
You will get a sense of how to write your imports as you read more Python code, and as you write and share some of your own code.
top
A module of functions
You can use modules to store a set of functions you want available in different programs as well, even if those functions are not attached to any one class. To do this, you save the functions into a file, and then import that file just as you saw in the last section. Here is a really simple example; save this as multiplying.py
Step40: Now you can import the file multiplying.py, and use these functions. Using the from module_name import function_name syntax
Step41: Using the import module_name syntax
Step42: Using the import module_name as local_module_name syntax
Step43: Using the from module_name import * syntax
Step44: top
<a id="Exercises-importing"></a>
Exercises
Importing Student
Take your program from Student Class
Save your Person and Student classes in a separate file called person.py.
Save the code that uses these classes in four separate files.
In the first file, use the from module_name import ClassName syntax to make your program run.
In the second file, use the import module_name syntax.
In the third file, use the import module_name as different_local_module_name syntax.
In the fourth file, use the import * syntax.
Importing Car
Take your program from Car Class
Save your Car class in a separate file called car.py.
Save the code that uses the car class into four separate files.
In the first file, use the from module_name import ClassName syntax to make your program run.
In the second file, use the import module_name syntax.
In the third file, use the import module_name as different_local_module_name syntax.
In the fourth file, use the import * syntax.
top
Revisiting PEP 8
If you recall, PEP 8 is the style guide for writing Python code. PEP 8 has a little to say about writing classes and using import statements, that was not covered previously. Following these guidelines will help make your code readable to other Python programmers, and it will help you make more sense of the Python code you read.
Import statements
PEP8 provides clear guidelines about where import statements should appear in a file. The names of modules should be on separate lines
Step45: The names of classes can be on the same line | Python Code:
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
Explanation: Classes
So far you have learned about Python's core data types: strings, numbers, lists, tuples, and dictionaries. In this section you will learn about the last major data structure, classes. Classes are quite unlike the other data types, in that they are much more flexible. Classes allow you to define the information and behavior that characterize anything you want to model in your program. Classes are a rich topic, so you will learn just enough here to dive into the projects you'd like to get started on.
There is a lot of new language that comes into play when you start learning about classes. If you are familiar with object-oriented programming from your work in another language, this will be a quick read about how Python approaches OOP. If you are new to programming in general, there will be a lot of new ideas here. Just start reading, try out the examples on your own machine, and trust that it will start to make sense as you work your way through the examples and exercises.
Previous: More Functions |
Home
Contents
What are classes?
Object-Oriented Terminology
General terminology
A closer look at the Rocket class
The __init__() method
A simple method
Making multiple objects from a class
A quick check-in
Classes in Python 2.7
Exercises
Refining the Rocket class
Accepting parameters for the __init__() method
Accepting parameters in a method
Adding a new method
Exercises
Inheritance
The SpaceShuttle class
Inheritance in Python 2.7
Exercises
Modules and classes
Storing a single class in a module
Storing multiple classes in a module
A number of ways to import modules and classes
A module of functions
Exercises
Revisiting PEP 8
Imports
Module and class names
Exercises
top
What are classes?
Classes are a way of combining information and behavior. For example, let's consider what you'd need to do if you were creating a rocket ship in a game, or in a physics simulation. One of the first things you'd want to track are the x and y coordinates of the rocket. Here is what a simple rocket ship class looks like in code:
End of explanation
###highlight=[11,12,13]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
Explanation: One of the first things you do with a class is to define the __init__() method. The __init__() method sets the values for any parameters that need to be defined when an object is first created. The self part will be explained later; basically, it's a syntax that allows you to access a variable from anywhere else in the class.
The Rocket class stores two pieces of information so far, but it can't do anything. The first behavior to define is a core behavior of a rocket: moving up. Here is what that might look like in code:
End of explanation
###highlight=[15,16, 17]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a Rocket object.
my_rocket = Rocket()
print(my_rocket)
Explanation: The Rocket class can now store some information, and it can do something. But this code has not actually created a rocket yet. Here is how you actually make a rocket:
End of explanation
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a Rocket object, and have it start to move up.
my_rocket = Rocket()
print("Rocket altitude:", my_rocket.y)
my_rocket.move_up()
print("Rocket altitude:", my_rocket.y)
my_rocket.move_up()
print("Rocket altitude:", my_rocket.y)
Explanation: To actually use a class, you create a variable such as my_rocket. Then you set that equal to the name of the class, with an empty set of parentheses. Python creates an object from the class. An object is a single instance of the Rocket class; it has a copy of each of the class's variables, and it can do any action that is defined for the class. In this case, you can see that the variable my_rocket is a Rocket object from the __main__ program file, which is stored at a particular location in memory.
Once you have a class, you can define an object and use its methods. Here is how you might define a rocket and have it start to move up:
End of explanation
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = []
for x in range(0,5):
new_rocket = Rocket()
my_rockets.append(new_rocket)
# Show that each rocket is a separate object.
for rocket in my_rockets:
print(rocket)
Explanation: To access an object's variables or methods, you give the name of the object and then use dot notation to access the variables and methods. So to get the y-value of my_rocket, you use my_rocket.y. To use the move_up() method on my_rocket, you write my_rocket.move_up().
Once you have a class defined, you can create as many objects from that class as you want. Each object is its own instance of that class, with its own separate variables. All of the objects are capable of the same behavior, but each object's particular actions do not affect any of the other objects. Here is how you might make a simple fleet of rockets:
End of explanation
###highlight=[16]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = [Rocket() for x in range(0,5)]
# Show that each rocket is a separate object.
for rocket in my_rockets:
print(rocket)
Explanation: You can see that each rocket is at a separate place in memory. By the way, if you understand list comprehensions, you can make the fleet of rockets in one line:
End of explanation
###highlight=[18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = [Rocket() for x in range(0,5)]
# Move the first rocket up.
my_rockets[0].move_up()
# Show that only the first rocket has moved.
for rocket in my_rockets:
print("Rocket altitude:", rocket.y)
Explanation: You can prove that each rocket has its own x and y values by moving just one of the rockets:
End of explanation
###highlight=[2]
class Rocket(object):
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
Explanation: The syntax for classes may not be very clear at this point, but consider for a moment how you might create a rocket without using classes. You might store the x and y values in a dictionary, but you would have to write a lot of ugly, hard-to-maintain code to manage even a small set of rockets. As more features become incorporated into the Rocket class, you will see how much more efficiently real-world objects can be modeled with classes than they could be using just lists and dictionaries.
top
Classes in Python 2.7
When you write a class in Python 2.7, you should always include the word object in parentheses when you define the class. This makes sure your Python 2.7 classes act like Python 3 classes, which will be helpful as your projects grow more complicated.
The simple version of the rocket class would look like this in Python 2.7:
End of explanation
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
Explanation: This syntax will work in Python 3 as well.
top
Exercises
Rocket With No Class
Using just what you already know, try to write a program that simulates the above example about rockets.
Store an x and y value for a rocket.
Store an x and y value for each rocket in a set of 5 rockets. Store these 5 rockets in a list.
Don't take this exercise too far; it's really just a quick exercise to help you understand how useful the class structure is, especially as you start to see more capability added to the Rocket class.
top
Object-Oriented terminology
Classes are part of a programming paradigm called object-oriented programming. Object-oriented programming, or OOP for short, focuses on building reusable blocks of code called classes. When you want to use a class in one of your programs, you make an object from that class, which is where the phrase "object-oriented" comes from. Python itself is not tied to object-oriented programming, but you will be using objects in most or all of your Python projects. In order to understand classes, you have to understand some of the language that is used in OOP.
General terminology
A class is a body of code that defines the attributes and behaviors required to accurately model something you need for your program. You can model something from the real world, such as a rocket ship or a guitar string, or you can model something from a virtual world such as a rocket in a game, or a set of physical laws for a game engine.
An attribute is a piece of information. In code, an attribute is just a variable that is part of a class.
A behavior is an action that is defined within a class. These are made up of methods, which are just functions that are defined for the class.
An object is a particular instance of a class. An object has a certain set of values for all of the attributes (variables) in the class. You can have as many objects as you want for any one class.
There is much more to know, but these words will help you get started. They will make more sense as you see more examples, and start to use classes on your own.
top
A closer look at the Rocket class
Now that you have seen a simple example of a class, and have learned some basic OOP terminology, it will be helpful to take a closer look at the Rocket class.
The __init__() method
Here is the initial code block that defined the Rocket class:
End of explanation
###highlight=[11,12,13]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
Explanation: The first line shows how a class is created in Python. The keyword class tells Python that you are about to define a class. The rules for naming a class are the same rules you learned about naming variables, but there is a strong convention among Python programmers that classes should be named using CamelCase. If you are unfamiliar with CamelCase, it is a convention where each letter that starts a word is capitalized, with no underscores in the name. The name of the class is followed by a set of parentheses. These parentheses will be empty for now, but later they may contain a class upon which the new class is based.
It is good practice to write a comment at the beginning of your class, describing the class. There is a more formal syntax for documenting your classes, but you can wait a little bit to get that formal. For now, just write a comment at the beginning of your class summarizing what you intend the class to do. Writing more formal documentation for your classes will be easy later if you start by writing simple comments now.
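The more formal syntax mentioned here is the docstring, a string placed on the first line inside the class; a minimal sketch (shown only for illustration — the examples in this section stick to plain comments) looks like this:

class Rocket():
    """Simulate a rocket ship for a game or a physics simulation."""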
Names that start and end with two underscores mark special methods that Python calls in certain ways. The __init__() method is one of these special methods. It is called automatically when you create an object from your class. The __init__() method lets you make sure that all relevant attributes are set to their proper values when an object is created from the class, before the object is used. In this case, the __init__() method initializes the x and y values of the Rocket to 0.
The self keyword often takes people a little while to understand. The word "self" refers to the current object that you are working with. When you are writing a class, it lets you refer to certain attributes from any other part of the class. Basically, all methods in a class need the self object as their first argument, so they can access any attribute that is part of the class.
Now let's take a closer look at a method.
A simple method
Here is the method that was defined for the Rocket class:
End of explanation
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a Rocket object, and have it start to move up.
my_rocket = Rocket()
print("Rocket altitude:", my_rocket.y)
my_rocket.move_up()
print("Rocket altitude:", my_rocket.y)
my_rocket.move_up()
print("Rocket altitude:", my_rocket.y)
Explanation: A method is just a function that is part of a class. Since it is just a function, you can do anything with a method that you learned about with functions. You can accept positional arguments, keyword arguments, an arbitrary list of argument values, an arbitrary dictionary of arguments, or any combination of these. Your methods can return a value or a set of values if you want, or they can just do some work without returning any values.
Each method has to accept one argument by default, the value self. This is a reference to the particular object that is calling the method. This self argument gives you access to the calling object's attributes. In this example, the self argument is used to access a Rocket object's y-value. That value is increased by 1, every time the method move_up() is called by a particular Rocket object. This is probably still somewhat confusing, but it should start to make sense as you work through your own examples.
If you take a second look at what happens when a method is called, things might make a little more sense:
End of explanation
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = []
for x in range(0,5):
new_rocket = Rocket()
my_rockets.append(new_rocket)
# Show that each rocket is a separate object.
for rocket in my_rockets:
print(rocket)
Explanation: In this example, a Rocket object is created and stored in the variable my_rocket. After this object is created, its y value is printed. The value of the attribute y is accessed using dot notation. The phrase my_rocket.y asks Python to return "the value of the variable y attached to the object my_rocket".
After the object my_rocket is created and its initial y-value is printed, the method move_up() is called. This tells Python to apply the method move_up() to the object my_rocket. Python finds the y-value associated with my_rocket and adds 1 to that value. This process is repeated several times, and you can see from the output that the y-value is in fact increasing.
top
Making multiple objects from a class
One of the goals of object-oriented programming is to create reusable code. Once you have written the code for a class, you can create as many objects from that class as you need. It is worth mentioning at this point that classes are usually saved in a separate file, and then imported into the program you are working on. So you can build a library of classes, and use those classes over and over again in different programs. Once you know a class works well, you can leave it alone and know that the objects you create in a new program are going to work as they always have.
You can see this "code reusability" already when the Rocket class is used to make more than one Rocket object. Here is the code that made a fleet of Rocket objects:
End of explanation
###highlight=[15,16,17,18]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Create a fleet of 5 rockets, and store them in a list.
my_rockets = []
for x in range(0,5):
my_rockets.append(Rocket())
# Show that each rocket is a separate object.
for rocket in my_rockets:
print(rocket)
Explanation: If you are comfortable using list comprehensions, go ahead and use those as much as you can. I'd rather not assume at this point that everyone is comfortable with comprehensions, so I will use the slightly longer approach of declaring an empty list, and then using a for loop to fill that list. That can be done slightly more efficiently than the previous example, by eliminating the temporary variable new_rocket:
End of explanation
###highlight=[6,7,8,9]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self):
# Each rocket has an (x,y) position.
self.x = 0
self.y = 0
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
Explanation: What exactly happens in this for loop? The line my_rockets.append(Rocket()) is executed 5 times. Each time, a new Rocket object is created and then added to the list my_rockets. The __init__() method is executed once for each of these objects, so each object gets its own x and y value. When a method is called on one of these objects, the self variable allows access to just that object's attributes, and ensures that modifying one object does not affect any of the other objects that have been created from the class.
Each of these objects can be worked with individually. At this point we are ready to move on and see how to add more functionality to the Rocket class. We will work slowly, and give you the chance to start writing your own simple classes.
A quick check-in
If all of this makes sense, then the rest of your work with classes will involve learning a lot of details about how classes can be used in more flexible and powerful ways. If this does not make any sense, you could try a few different things:
Reread the previous sections, and see if things start to make any more sense.
Type out these examples in your own editor, and run them. Try making some changes, and see what happens.
Try the next exercise, and see if it helps solidify some of the concepts you have been reading about.
Read on. The next sections are going to add more functionality to the Rocket class. These steps will involve rehashing some of what has already been covered, in a slightly different way.
Classes are a huge topic, and once you understand them you will probably use them for the rest of your life as a programmer. If you are brand new to this, be patient and trust that things will start to sink in.
top
<a id="Exercises-oop"></a>
Exercises
Your Own Rocket
Without looking back at the previous examples, try to recreate the Rocket class as it has been shown so far.
Define the Rocket() class.
Define the __init__() method, which sets an x and a y value for each Rocket object.
Define the move_up() method.
Create a Rocket object.
Print the object.
Print the object's y-value.
Move the rocket up, and print its y-value again.
Create a fleet of rockets, and prove that they are indeed separate Rocket objects.
top
Refining the Rocket class
The Rocket class so far is very simple. It can be made a little more interesting with some refinements to the __init__() method, and by the addition of some methods.
Accepting parameters for the __init__() method
The __init__() method is run automatically one time when you create a new object from a class. The __init__() method for the Rocket class so far is pretty simple:
End of explanation
###highlight=[6,7,8,9]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
Explanation: All the __init__() method does so far is set the x and y values for the rocket to 0. We can easily add a couple keyword arguments so that new rockets can be initialized at any position:
End of explanation
###highlight=[15,16,17,18,19,20,21,22,23]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_up(self):
# Increment the y-position of the rocket.
self.y += 1
# Make a series of rockets at different starting places.
rockets = []
rockets.append(Rocket())
rockets.append(Rocket(0,10))
rockets.append(Rocket(100,0))
# Show where each rocket is.
for index, rocket in enumerate(rockets):
print("Rocket %d is at (%d, %d)." % (index, rocket.x, rocket.y))
Explanation: Now when you create a new Rocket object you have the choice of passing in arbitrary initial values for x and y:
End of explanation
###highlight=[11,12,13,14,15]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the parameters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
Explanation: top
Accepting parameters in a method
The __init__ method is just a special method that serves a particular purpose, which is to help create new objects from a class. Any method in a class can accept parameters of any kind. With this in mind, the move_up() method can be made much more flexible. By accepting keyword arguments, the move_up() method can be rewritten as a more general move_rocket() method. This new method will allow the rocket to be moved any amount, in any direction:
End of explanation
###highlight=[17,18,19,20,21,22,23,24,25,26,27]
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the parameters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
# Create three rockets.
rockets = [Rocket() for x in range(0,3)]
# Move each rocket a different amount.
rockets[0].move_rocket()
rockets[1].move_rocket(10,10)
rockets[2].move_rocket(-10,0)
# Show where each rocket is.
for index, rocket in enumerate(rockets):
print("Rocket %d is at (%d, %d)." % (index, rocket.x, rocket.y))
Explanation: The parameters for the move_rocket() method are named x_increment and y_increment rather than x and y. It's good to emphasize that these are changes in the x and y position, not new values for the actual position of the rocket. By carefully choosing the right default values, we can define a meaningful default behavior. If someone calls the method move_rocket() with no parameters, the rocket will simply move up one unit in the y-direction. Note that this method can also be given negative values to move the rocket left or down:
End of explanation
###highlight=[19,20,21,22,23,24,25,26,27,28,29,30,31]
from math import sqrt
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the parameters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
# Make two rockets, at different places.
rocket_0 = Rocket()
rocket_1 = Rocket(10,5)
# Show the distance between them.
distance = rocket_0.get_distance(rocket_1)
print("The rockets are %f units apart." % distance)
Explanation: top
Adding a new method
One of the strengths of object-oriented programming is the ability to closely model real-world phenomena by adding appropriate attributes and behaviors to classes. One of the jobs of a team piloting a rocket is to make sure the rocket does not get too close to any other rockets. Let's add a method that will report the distance from one rocket to any other rocket.
If you are not familiar with distance calculations, there is a fairly simple formula to tell the distance between two points if you know the x and y values of each point. This new method performs that calculation, and then returns the resulting distance.
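As a quick standalone sketch with made-up points (not part of the Rocket code), the calculation is just the Pythagorean theorem applied to the differences in x and y:

from math import sqrt

x_0, y_0 = 0, 0
x_1, y_1 = 3, 4
distance = sqrt((x_1 - x_0)**2 + (y_1 - y_0)**2)
print(distance)    # 5.0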
End of explanation
###highlight=[25,26,27,28,29,30,31,32,33,34]
from math import sqrt
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the parameters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
super().__init__(x, y)
self.flights_completed = flights_completed
shuttle = Shuttle(10,0,3)
print(shuttle)
Explanation: Hopefully these short refinements show that you can extend a class' attributes and behavior to model the phenomena you are interested in as closely as you want. The rocket could have a name, a crew capacity, a payload, a certain amount of fuel, and any number of other attributes. You could define any behavior you want for the rocket, including interactions with other rockets and launch facilities, gravitational fields, and whatever you need it to! There are techniques for managing these more complex interactions, but what you have just seen is the core of object-oriented programming.
At this point you should try your hand at writing some classes of your own. After trying some exercises, we will look at object inheritance, and then you will be ready to move on for now.
top
<a id="Exercises-refining"></a>
Exercises
Your Own Rocket 2
There are enough new concepts here that you might want to try re-creating the Rocket class as it has been developed so far, looking at the examples as little as possible. Once you have your own version, regardless of how much you needed to look at the example, you can modify the class and explore the possibilities of what you have already learned.
Re-create the Rocket class as it has been developed so far:
Define the Rocket() class.
Define the __init__() method. Let your __init__() method accept x and y values for the initial position of the rocket. Make sure the default behavior is to position the rocket at (0,0).
Define the move_rocket() method. The method should accept an amount to move left or right, and an amount to move up or down.
Create a Rocket object. Move the rocket around, printing its position after each move.
Create a small fleet of rockets. Move several of them around, and print their final positions to prove that each rocket can move independently of the other rockets.
Define the get_distance() method. The method should accept a Rocket object, and calculate the distance between the current rocket and the rocket that is passed into the method.
Use the get_distance() method to print the distances between several of the rockets in your fleet.
Rocket Attributes
Start with a copy of the Rocket class, either one you made from a previous exercise or the latest version from the last section.
Add several of your own attributes to the __init__() function. The values of your attributes can be set automatically by the __init__() function, or they can be set by parameters passed into __init__().
Create a rocket and print the values for the attributes you have created, to show they have been set correctly.
Create a small fleet of rockets, and set different values for one of the attributes you have created. Print the values of these attributes for each rocket in your fleet, to show that they have been set properly for each rocket.
If you are not sure what kind of attributes to add, you could consider storing the height of the rocket, the crew size, the name of the rocket, the speed of the rocket, or many other possible characteristics of a rocket.
Rocket Methods
Start with a copy of the Rocket class, either one you made from a previous exercise or the latest version from the last section.
Add a new method to the class. This is probably a little more challenging than adding attributes, but give it a try.
Think of what rockets do, and make a very simple version of that behavior using print statements. For example, rockets lift off when they are launched. You could make a method called launch(), and all it would do is print a statement such as "The rocket has lifted off!" If your rocket has a name, this sentence could be more descriptive.
You could make a very simple land_rocket() method that simply sets the x and y values of the rocket back to 0. Print the position before and after calling the land_rocket() method to make sure your method is doing what it's supposed to.
If you enjoy working with math, you could implement a safety_check() method. This method would take in another rocket object, and call the get_distance() method on that rocket. Then it would check if that rocket is too close, and print a warning message if the rocket is too close. If there is zero distance between the two rockets, your method could print a message such as, "The rockets have crashed!" (Be careful; getting a zero distance could mean that you accidentally found the distance between a rocket and itself, rather than a second rocket.)
Person Class
Modeling a person is a classic exercise for people who are trying to learn how to write classes. We are all familiar with characteristics and behaviors of people, so it is a good exercise to try.
Define a Person() class.
In the __init()__ function, define several attributes of a person. Good attributes to consider are name, age, place of birth, and anything else you like to know about the people in your life.
Write one method. This could be as simple as introduce_yourself(). This method would print out a statement such as, "Hello, my name is Eric."
You could also make a method such as age_person(). A simple version of this method would just add 1 to the person's age.
A more complicated version of this method would involve storing the person's birthdate rather than their age, and then calculating the age whenever the age is requested. But dealing with dates and times is not particularly easy if you've never done it in any other programming language before.
Create a person, set the attribute values appropriately, and print out information about the person.
Call your method on the person you created. Make sure your method executed properly; if the method does not print anything out directly, print something before and after calling the method to make sure it did what it was supposed to.
Car Class
Modeling a car is another classic exercise.
Define a Car() class.
In the __init__() function, define several attributes of a car. Some good attributes to consider are make (Subaru, Audi, Volvo...), model (Outback, allroad, C30), year, num_doors, owner, or any other aspect of a car you care to include in your class.
Write one method. This could be something such as describe_car(). This method could print a series of statements that describe the car, using the information that is stored in the attributes. You could also write a method that adjusts the mileage of the car or tracks its position.
Create a car object, and use your method.
Create several car objects with different values for the attributes. Use your method on several of your cars.
top
Inheritance
One of the most important goals of the object-oriented approach to programming is the creation of stable, reliable, reusable code. If you had to create a new class for every kind of object you wanted to model, you would hardly have any reusable code. In Python and any other language that supports OOP, one class can inherit from another class. This means you can base a new class on an existing class; the new class inherits all of the attributes and behavior of the class it is based on. A new class can override any undesirable attributes or behavior of the class it inherits from, and it can add any new attributes or behavior that are appropriate. The original class is called the parent class, and the new class is a child of the parent class. The parent class is also called a superclass, and the child class is also called a subclass.
The child class inherits all attributes and behavior from the parent class, but any attributes that are defined in the child class are not available to the parent class. This may be obvious to many people, but it is worth stating. This also means a child class can override behavior of the parent class. If a child class defines a method that also appears in the parent class, objects of the child class will use the new method rather than the parent class method.
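As a small hypothetical sketch (not part of the Rocket code in this section), overriding just means defining a method with the same name in the child class:

class Rocket():
    def describe(self):
        print("I am a rocket.")

class Shuttle(Rocket):
    def describe(self):
        # This version replaces Rocket.describe() for all Shuttle objects.
        print("I am a reusable rocket.")

Shuttle().describe()    # prints "I am a reusable rocket."
Rocket().describe()     # prints "I am a rocket."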
To better understand inheritance, let's look at an example of a class that can be based on the Rocket class.
The SpaceShuttle class
If you wanted to model a space shuttle, you could write an entirely new class. But a space shuttle is just a special kind of rocket. Instead of writing an entirely new class, you can inherit all of the attributes and behavior of a Rocket, and then add a few appropriate attributes and behavior for a Shuttle.
One of the most significant characteristics of a space shuttle is that it can be reused. So the only difference we will add at this point is to record the number of flights the shuttle has completed. Everything else you need to know about a shuttle has already been coded into the Rocket class.
Here is what the Shuttle class looks like:
End of explanation
class NewClass(ParentClass):
Explanation: When a new class is based on an existing class, you write the name of the parent class in parentheses when you define the new class:
End of explanation
###highlight=[5]
class NewClass(ParentClass):
def __init__(self, arguments_new_class, arguments_parent_class):
super().__init__(arguments_parent_class)
# Code for initializing an object of the new class.
Explanation: The __init__() function of the new class needs to call the __init__() function of the parent class. The __init__() function of the new class needs to accept all of the parameters required to build an object from the parent class, and these parameters need to be passed to the __init__() function of the parent class. The super().__init__() function takes care of this:
End of explanation
###highlight=[7]
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
Rocket.__init__(self, x, y)
self.flights_completed = flights_completed
Explanation: The super() function passes the self argument to the parent class automatically. You could also do this by explicitly naming the parent class when you call the __init__() function, but you then have to include the self argument manually:
End of explanation
###highlight=[3, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65]
from math import sqrt
from random import randint
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the parameters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
super().__init__(x, y)
self.flights_completed = flights_completed
# Create several shuttles and rockets, with random positions.
# Shuttles have a random number of flights completed.
shuttles = []
for x in range(0,3):
x = randint(0,100)
y = randint(1,100)
flights_completed = randint(0,10)
shuttles.append(Shuttle(x, y, flights_completed))
rockets = []
for x in range(0,3):
x = randint(0,100)
y = randint(1,100)
rockets.append(Rocket(x, y))
# Show the number of flights completed for each shuttle.
for index, shuttle in enumerate(shuttles):
print("Shuttle %d has completed %d flights." % (index, shuttle.flights_completed))
print("\n")
# Show the distance from the first shuttle to all other shuttles.
first_shuttle = shuttles[0]
for index, shuttle in enumerate(shuttles):
distance = first_shuttle.get_distance(shuttle)
print("The first shuttle is %f units away from shuttle %d." % (distance, index))
print("\n")
# Show the distance from the first shuttle to all other rockets.
for index, rocket in enumerate(rockets):
distance = first_shuttle.get_distance(rocket)
print("The first shuttle is %f units away from rocket %d." % (distance, index))
Explanation: This might seem a little easier to read, but it is preferable to use the super() syntax. When you use super(), you don't need to explicitly name the parent class, so your code is more resilient to later changes. As you learn more about classes, you will be able to write child classes that inherit from multiple parent classes, and the super() function will call the parent classes' __init__() functions for you, in one line. This explicit approach to calling the parent class' __init__() function is included so that you will be less confused if you see it in someone else's code.
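As a rough, hypothetical sketch of how that multiple-parent case can look (this is not part of the tutorial's Rocket code, and it only works if every class in the chain also calls super().__init__()):

class Engine():
    def __init__(self):
        self.fuel = 100
        super().__init__()

class Cabin():
    def __init__(self):
        self.seats = 4
        super().__init__()

class CrewedRocket(Engine, Cabin):
    def __init__(self):
        # One call walks the method resolution order: Engine, then Cabin.
        super().__init__()

ship = CrewedRocket()
print(ship.fuel, ship.seats)    # 100 4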
The output above shows that a new Shuttle object was created. This new Shuttle object can store the number of flights completed, but it also has all of the functionality of the Rocket class: it has a position that can be changed, and it can calculate the distance between itself and other rockets or shuttles. This can be demonstrated by creating several rockets and shuttles, and then finding the distance between one shuttle and all the other shuttles and rockets. This example uses a simple function called randint, which generates a random integer between a lower and upper bound, to determine the position of each rocket and shuttle:
End of explanation
###highlight=[5]
class NewClass(ParentClass):
def __init__(self, arguments_new_class, arguments_parent_class):
super(NewClass, self).__init__(arguments_parent_class)
# Code for initializing an object of the new class.
Explanation: Inheritance is a powerful feature of object-oriented programming. Using just what you have seen so far about classes, you can model an incredible variety of real-world and virtual phenomena with a high degree of accuracy. The code you write has the potential to be stable and reusable in a variety of applications.
top
Inheritance in Python 2.7
The super() method has a slightly different syntax in Python 2.7:
End of explanation
###highlight=[25,26,27,28,29,30,31,32,33,34]
from math import sqrt
class Rocket(object):
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the parameters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
super(Shuttle, self).__init__(x, y)
self.flights_completed = flights_completed
shuttle = Shuttle(10,0,3)
print(shuttle)
Explanation: Notice that you have to explicitly pass the arguments NewClass and self when you call super() in Python 2.7. The SpaceShuttle class would look like this:
End of explanation
###highlight=[2]
# Save as rocket.py
from math import sqrt
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the parameters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
Explanation: This syntax works in Python 3 as well.
top
<a id="Exercises-inheritance"></a>
Exercises
Student Class
Start with your program from Person Class.
Make a new class called Student that inherits from Person.
Define some attributes that a student has, which other people don't have.
A student has a school they are associated with, a graduation year, a gpa, and other particular attributes.
Create a Student object, and prove that you have used inheritance correctly.
Set some attribute values for the student, that are only coded in the Person class.
Set some attribute values for the student, that are only coded in the Student class.
Print the values for all of these attributes.
Refining Shuttle
Take the latest version of the Shuttle class. Extend it.
Add more attributes that are particular to shuttles such as maximum number of flights, capability of supporting spacewalks, and capability of docking with the ISS.
Add one more method to the class, that relates to shuttle behavior. This method could simply print a statement, such as "Docking with the ISS," for a dock_ISS() method.
Prove that your refinements work by creating a Shuttle object with these attributes, and then call your new method.
top
Modules and classes
Now that you are starting to work with classes, your files are going to grow longer. This is good, because it means your programs are probably doing more interesting things. But it is bad, because longer files can be more difficult to work with. Python allows you to save your classes in another file and then import them into the program you are working on. This has the added advantage of isolating your classes into files that can be used in any number of different programs. As you use your classes repeatedly, the classes become more reliable and complete overall.
Storing a single class in a module
When you save a class into a separate file, that file is called a module. You can have any number of classes in a single module. There are a number of ways you can then import the class you are interested in.
Start out by saving just the Rocket class into a file called rocket.py. Notice the naming convention being used here: the module is saved with a lowercase name, and the class starts with an uppercase letter. This convention is pretty important for a number of reasons, and it is a really good idea to follow the convention.
End of explanation
###highlight=[2]
# Save as rocket_game.py
from rocket import Rocket
rocket = Rocket()
print("The rocket is at (%d, %d)." % (rocket.x, rocket.y))
Explanation: Make a separate file called rocket_game.py. If you are more interested in science than games, feel free to call this file something like rocket_simulation.py. Again, to use standard naming conventions, make sure you are using a lowercase_underscore name for this file.
End of explanation
###highlight=[27,28,29,30,31,32,33]
# Save as rocket.py
from math import sqrt
class Rocket():
# Rocket simulates a rocket ship for a game,
# or a physics simulation.
def __init__(self, x=0, y=0):
# Each rocket has an (x,y) position.
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
# Move the rocket according to the parameters given.
# Default behavior is to move the rocket up one unit.
self.x += x_increment
self.y += y_increment
def get_distance(self, other_rocket):
# Calculates the distance from this rocket to another rocket,
# and returns that value.
distance = sqrt((self.x-other_rocket.x)**2+(self.y-other_rocket.y)**2)
return distance
class Shuttle(Rocket):
# Shuttle simulates a space shuttle, which is really
# just a reusable rocket.
def __init__(self, x=0, y=0, flights_completed=0):
super().__init__(x, y)
self.flights_completed = flights_completed
Explanation: This is a really clean and uncluttered file. A rocket is now something you can define in your programs, without the details of the rocket's implementation cluttering up your file. You don't have to include all the class code for a rocket in each of your files that deals with rockets; the code defining rocket attributes and behavior lives in one file, and can be used anywhere.
The first line tells Python to look for a file called rocket.py. It looks for that file in the same directory as your current program. You can put your classes in other directories, but we will get to that convention a bit later. Notice that you do not include the .py file extension in the import statement.
When Python finds the file rocket.py, it looks for a class called Rocket. When it finds that class, it imports that code into the current file, without you ever seeing that code. You are then free to use the class Rocket as you have seen it used in previous examples.
top
Storing multiple classes in a module
A module is simply a file that contains one or more classes or functions, so the Shuttle class actually belongs in the rocket module as well:
End of explanation
###highlight=[3,8,9,10]
# Save as rocket_game.py
from rocket import Rocket, Shuttle
rocket = Rocket()
print("The rocket is at (%d, %d)." % (rocket.x, rocket.y))
shuttle = Shuttle()
print("\nThe shuttle is at (%d, %d)." % (shuttle.x, shuttle.y))
print("The shuttle has completed %d flights." % shuttle.flights_completed)
Explanation: Now you can import the Rocket and the Shuttle class, and use them both in a clean uncluttered program file:
End of explanation
from module_name import ClassName
Explanation: The first line tells Python to import both the Rocket and the Shuttle classes from the rocket module. You don't have to import every class in a module; you can pick and choose the classes you care to use, and Python will only spend time processing those particular classes.
A number of ways to import modules and classes
There are several ways to import modules and classes, and each has its own merits.
import module_name
The syntax for importing classes that was just shown:
End of explanation
# Save as rocket_game.py
import rocket
rocket_0 = rocket.Rocket()
print("The rocket is at (%d, %d)." % (rocket_0.x, rocket_0.y))
shuttle_0 = rocket.Shuttle()
print("\nThe shuttle is at (%d, %d)." % (shuttle_0.x, shuttle_0.y))
print("The shuttle has completed %d flights." % shuttle_0.flights_completed)
Explanation: is straightforward, and is used quite commonly. It allows you to use the class names directly in your program, so you have very clean and readable code. This can be a problem, however, if the names of the classes you are importing conflict with names that have already been used in the program you are working on. This is unlikely to happen in the short programs you have been seeing here, but if you were working on a larger program it is quite possible that the class you want to import from someone else's work would happen to have a name you have already used in your program. In this case, you can simply import the module itself:
End of explanation
import module_name
Explanation: The general syntax for this kind of import is:
End of explanation
module_name.ClassName
Explanation: After this, classes are accessed using dot notation:
End of explanation
import module_name as local_module_name
Explanation: This prevents some name conflicts. If you were reading carefully however, you might have noticed that the variable name rocket in the previous example had to be changed because it has the same name as the module itself. This is not good, because in a longer program that could mean a lot of renaming.
import module_name as local_module_name
There is another syntax for imports that is quite useful:
End of explanation
# Save as rocket_game.py
import rocket as rocket_module
rocket = rocket_module.Rocket()
print("The rocket is at (%d, %d)." % (rocket.x, rocket.y))
shuttle = rocket_module.Shuttle()
print("\nThe shuttle is at (%d, %d)." % (shuttle.x, shuttle.y))
print("The shuttle has completed %d flights." % shuttle.flights_completed)
Explanation: When you are importing a module into one of your projects, you are free to choose any name you want for the module in your project. So the last example could be rewritten in a way that the variable name rocket would not need to be changed:
End of explanation
import rocket as rocket_module
Explanation: This approach is often used to shorten the name of the module, so you don't have to type a long module name before each class name that you want to use. But it is easy to shorten a name so much that you force people reading your code to scroll to the top of your file and see what the shortened name stands for. In this example,
End of explanation
import rocket as r
Explanation: leads to much more readable code than something like:
End of explanation
from module_name import *
Explanation: from module_name import *
There is one more import syntax that you should be aware of, but you should probably avoid using. This syntax imports all of the available classes and functions in a module:
End of explanation
# Save as multiplying.py
def double(x):
return 2*x
def triple(x):
return 3*x
def quadruple(x):
return 4*x
Explanation: This is not recommended, for a couple reasons. First of all, you may have no idea what all the names of the classes and functions in a module are. If you accidentally give one of your variables the same name as a name from the module, you will have naming conflicts. Also, you may be importing way more code into your program than you need.
If you really need all the functions and classes from a module, just import the module and use the module_name.ClassName syntax in your program.
You will get a sense of how to write your imports as you read more Python code, and as you write and share some of your own code.
top
A module of functions
You can use modules to store a set of functions you want available in different programs as well, even if those functions are not attached to any one class. To do this, you save the functions into a file, and then import that file just as you saw in the last section. Here is a really simple example; save this as multiplying.py:
End of explanation
###highlight=[2]
from multiplying import double, triple, quadruple
print(double(5))
print(triple(5))
print(quadruple(5))
Explanation: Now you can import the file multiplying.py, and use these functions. Using the from module_name import function_name syntax:
End of explanation
###highlight=[2]
import multiplying
print(multiplying.double(5))
print(multiplying.triple(5))
print(multiplying.quadruple(5))
Explanation: Using the import module_name syntax:
End of explanation
###highlight=[2]
import multiplying as m
print(m.double(5))
print(m.triple(5))
print(m.quadruple(5))
Explanation: Using the import module_name as local_module_name syntax:
End of explanation
###highlight=[2]
from multiplying import *
print(double(5))
print(triple(5))
print(quadruple(5))
Explanation: Using the from module_name import * syntax:
End of explanation
# this
import sys
import os
# not this
import sys, os
Explanation: top
<a id="Exercises-importing"></a>
Exercises
Importing Student
Take your program from Student Class
Save your Person and Student classes in a separate file called person.py.
Save the code that uses these classes in four separate files.
In the first file, use the from module_name import ClassName syntax to make your program run.
In the second file, use the import module_name syntax.
In the third file, use the import module_name as different_local_module_name syntax.
In the fourth file, use the import * syntax.
Importing Car
Take your program from Car Class
Save your Car class in a separate file called car.py.
Save the code that uses the car class into four separate files.
In the first file, use the from module_name import ClassName syntax to make your program run.
In the second file, use the import module_name syntax.
In the third file, use the import module_name as different_local_module_name syntax.
In the fourth file, use the import * syntax.
top
Revisiting PEP 8
If you recall, PEP 8 is the style guide for writing Python code. PEP 8 has a little to say about writing classes and using import statements that was not covered previously. Following these guidelines will help make your code readable to other Python programmers, and it will help you make more sense of the Python code you read.
Import statements
PEP8 provides clear guidelines about where import statements should appear in a file. The names of modules should be on separate lines:
End of explanation
from rocket import Rocket, Shuttle
Explanation: The names of classes can be on the same line:
End of explanation |
11,855 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Init config
Select appropriate
Step1: Count the number of tweets per day for each news item, calculate cumulative diffusion
Step2: Plot diffusion for every day for all news together
Step3: Plot cumulative diffusion of all news together
Step4: Plot cumulative diffusion for every news headline
Step5: Average diffusion per day for all news
Step6: The same graph but in logarithmic scale
Step7: Calculate and plot standard deviation
Step8: Calculate and plot the share of values inside one standard deviation for every day
Step9: Store average diffusion data on the hard drive for use by another Jupyter notebook
Step10: Plot average diffusion for both real and fake news on one graph
Step11: In logarithmic scale
Step12: Calculate average diffusion duration (number of days until diffusion is dead) | Python Code:
client = pymongo.MongoClient("46.101.236.181")
db = client.allfake
# get collection names
collections = sorted([collection for collection in db.collection_names()])
Explanation: Init config
Select appropriate:
- database server (line 1): give pymongo.MongoClient() an appropriate parameter, else it is localhost
- database (line 2): either client.databasename or client.['databasename']
End of explanation
day = {} # number of tweets per day per collection
diff = {} # cumulative diffusion per day per collection
for collection in collections:
# timeframe
relevant_from = db[collection].find().sort("timestamp", pymongo.ASCENDING).limit(1)[0]['timestamp']
relevant_till = db[collection].find().sort("timestamp", pymongo.DESCENDING).limit(1)[0]['timestamp']
i = 0
day[collection] = [] # number of tweets for every collection for every day
diff[collection] = [] # cumulative diffusion for every collection for every day
averagediff = [] # average diffusion speed for every day for all news
d = relevant_from
delta = datetime.timedelta(days=1)
while d <= relevant_till:
# tweets per day per collection
day[collection].append(db[collection].find({"timestamp":{"$gte": d, "$lt": d + delta}}).count())
# cumulative diffusion per day per collection
if i == 0:
diff[collection].append( day[collection][i] )
else:
diff[collection].append( diff[collection][i-1] + day[collection][i] )
d += delta
i += 1
Explanation: Count the number of tweets per day for each news item, calculate cumulative diffusion
End of explanation
# the longest duration of diffusion among all news headlines
max_days = max(len(day[coll]) for coll in day)
summ_of_diffusions = [0] * max_days # summary diffusion for every day
# calculate summary diffusion for every day
for d in range(max_days):
for c in collections:
# if there is an entry for this day for this collection, add its number of tweets to the number of this day
if d < len(day[c]):
summ_of_diffusions[d] += day[c][d]
plt.step(range(len(summ_of_diffusions)),summ_of_diffusions, 'g')
plt.xlabel('Day')
plt.ylabel('Number of tweets')
plt.title('Diffusion of all fake news together')
plt.show()
Explanation: Plot diffusion for every day for all news together
End of explanation
summ_of_diffusions_cumulative = [0] * max_days
summ_of_diffusions_cumulative[0] = summ_of_diffusions[0]
for d in range(1, max_days):
summ_of_diffusions_cumulative[d] += summ_of_diffusions_cumulative[d-1] + summ_of_diffusions[d]
plt.step(range(len(summ_of_diffusions_cumulative)),summ_of_diffusions_cumulative, 'g')
plt.xlabel('Day')
plt.ylabel('Cumulative number of tweets')
plt.title('Cumulative diffusion of all fake news together')
plt.show()
Explanation: Plot cumulative diffusion of all news together
End of explanation
for collection in collections:
plt.step([d+1 for d in range(len(diff[collection]))], diff[collection])
plt.xlabel('Day')
plt.ylabel('Cumulative tweets number')
plt.title('Cumulative diffusion for fake news headlines')
plt.show()
Explanation: Plot cumulative diffusion for every news headline
End of explanation
averagediff = [0 for _ in range(max_days)] # average diffusion per day
for collection in collections:
for i,d in enumerate(day[collection]):
averagediff[i] += d / float(len(collections)) # float() avoids integer division under Python 2
plt.xlabel('Day')
plt.ylabel('Average number of tweets')
plt.step(range(1,len(averagediff)+1),averagediff, 'r')
plt.title('Average diffusion of fake news')
plt.show()
Explanation: Average diffusion per day for all news
End of explanation
plt.yscale('log')
plt.xlabel('Day')
plt.ylabel('Average number of tweets')
plt.step(range(1,len(averagediff)+1),averagediff, 'r')
plt.show()
# export some data to another notebook
averagediff_fake = averagediff
%store averagediff_fake
Explanation: The same graph but in logarithmic scale
End of explanation
avgdiff_std = [0 for _ in range(max_days)] # standard deviation for every day for all collections
number_tweets = [[] for _ in range(max_days)] # number of tweets for every day for every collection
for d in range(max_days):
for c in collections:
# if there is an entry for this day for this collection
if d < len(day[c]):
# add the number of tweets for this day for this collection to number_tweets for this day
number_tweets[d].append(day[c][d])
# calculate standard deviation for this day
avgdiff_std[d] = np.std(number_tweets[d])
plt.ylabel('Standard deviation for average number of tweets per day')
plt.xlabel('Day')
plt.step(range(1,len(avgdiff_std)+1),avgdiff_std, 'r')
plt.title('Standard deviation for fake news average')
plt.show()
Explanation: Calculate and plot standard deviation
End of explanation
inside_std = [0 for _ in range(max_days)] # number of values inside one standard deviation for every day
inside_std_share = [0 for _ in range(max_days)] # share of values inside one standard deviation for every day
for d in range(max_days):
for c in collections:
# set borders at the mean plus/minus one standard deviation
lowest = averagediff[d] - avgdiff_std[d]
highest = averagediff[d] + avgdiff_std[d]
# if there is an entry for this day for this collection and its value is inside the borders
if d < len(day[c]) and (day[c][d] >= lowest and day[c][d] <= highest):
# increment number of values inside one std for this day
inside_std[d] += 1
# calculate the share of values inside one std for this day
inside_std_share[d] = inside_std[d] / float(len(number_tweets[d]))
plt.ylabel('Percent of values in 1 std from average')
plt.xlabel('Day')
plt.scatter(range(1,len(inside_std_share)+1),inside_std_share, c='r')
plt.title('Percentage of values inside the range\n of one standard deviation from mean for fake news')
plt.show()
Explanation: Calculate and plot the share of values inside one standard deviation for every day
End of explanation
averagediff_fake = averagediff
%store averagediff_fake
Explanation: Store average diffusion data on the hard drive for use by another Jupyter notebook
End of explanation
%store -r averagediff_real
plt.xlabel('Day')
plt.ylabel('Average number of tweets')
plt.step(range(1,len(averagediff)+1),averagediff, 'r', label="fake news")
plt.step(range(1,len(averagediff_real)+1),averagediff_real, 'g', label="real news")
plt.legend()
plt.title('Average diffusion for both types of news')
plt.show()
Explanation: Plot average diffusion for both real and fake news on one graph
End of explanation
plt.ylabel('Average number of tweets')
plt.xlabel('Day')
plt.yscale('log')
plt.step(range(1,len(averagediff)+1),averagediff, 'r', label="fake news")
plt.step(range(1,len(averagediff_real)+1),averagediff_real, 'g', label="real news")
plt.legend()
plt.title('Average diffusion for both types of news in logarithmic scale')
plt.show()
Explanation: In logarithmic scale
End of explanation
diffDurationAvg = 0; # average duration of diffusion
durations = [len(day[col]) for col in collections] # all durations
diffDurationAvg = np.mean(durations) # mean duration
diffDurationAvg_std = np.std(durations) # standard deviation for the mean
print "Average diffusion duration: %.2f days" % diffDurationAvg
print "Standard deviation: %.2f days" % diffDurationAvg_std
Explanation: Calculate average diffusion duration (number of days until diffusion is dead)
End of explanation |
11,856 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Settings
Step1: Load Jobs Data, Index in Solr Jobs Core
Step2: Load Relevancy Judgements File
Step6: IR Metrics
Step7: Black Box Optimization
Step8: Run Optimizer Algorithm
Step9: The code below runs the sci-kit optimization library and tries to find the set of parameters that minimize the objective function above. We are choosing to map the parameter values to qf values (field boosts), but you can in theory try any configuration setting here that you can test in this way. Some settings, such as changing the config files themselves can be accomplished with a core reload, or in some cases a server restart. Note however that you need the algorithm to run for quite a few iterations to learn effectively from your data, and for some problems, it may not be able to find a near optimal solution.
Step10: The evaluate function below is the same as the objective function, except it tests our newly optimized set of parameters on a different set of queries. This gives a more accurate measure of the performance of the new settings on data points and queries that were not in the training dataset.
Step11: The results from the training here are much higher than the test set. This is typical for a lot of machine learning \ optimization problems. If tuning an existing solr installation, you will want to ensure that the IR metrics score on the test set is better than the current production settings before releasing to production. | Python Code:
#Settings
# The files below are in the root folder of this GitHub repo. Launch jupyter notebook from that folder
# in order to read these files: 'jupyter notebook'
# Note: this is an artificial set of jobs, these are not real jobs, but are representative of our data
# Job descriptions are omitted, but usually we search that field also
jobs_data_file = "jobs.csv"
# File of relevancy judgements - these are highly subjective judgements, please don't take them too seriously
relevancy_file = "relevancy_judegements.csv"
#solr url and core (Jobs)
solr_url = "http://localhost:8983/solr/Jobs"
Explanation: Settings
End of explanation
# Note: You can skip this section if you were able to load the Solr Jobs Core along with the data directory from the
# './Solr Core and Config' sub-folder. Older versions of Solr won't read this data, so here's some code to populate
# the index from the jobs.csv file
jobs_df = pd.read_csv(jobs_data_file, sep=",")
jobs_df["jobSkills"] = jobs_df["jobSkills"].apply(lambda sk: sk.split("|"))
# assign a unique doc id to each row
jobs_df["id"] = range(len(jobs_df))
jobs_df.head(5)
solr_connection = solr.Solr(solr_url, persistent=True, timeout=360, max_retries=5)
# convert dataframe to a list of dictionaries (required solr client library document format)
docs = jobs_df.T.to_dict().values()
#wipe out any existing documents if present
solr_connection.delete_query("*:*")
# send documents
solr_connection.add_many(docs)
# hard commit and optimize
solr_connection.commit()
solr_connection.optimize()
Explanation: Load Jobs Data, Index in Solr Jobs Core
End of explanation
# The 'relevant' column is a list of document id's (the id field from the schema) that were both in the set of the top
# 20 returned documents, and were subjectively judged as relevant to the original
# query. We can subsequently use these to derive a MAP score for a given query
rel_df = pd.read_csv(relevancy_file, sep="|", converters={"fq": str, "location": str})
searches = rel_df.T.to_dict()
rel_df.head(3)
# Takes a search id and a qf setting, and returns the list of doc ids,
def get_results_for_search(sid, qf_value, rows):
search = searches[sid]
fq = ""
pt = "0,0"
if not search["location"].strip() == "" :
splt = filter(lambda s: "pt=" in s, search["fq"].split("&"))
if splt:
pt = splt[0].replace("pt=","")
fq = "{!geofilt}"
resp = solr_connection.select(
q=search["query"],
fields="id",
start=0, rows=rows,
qf=qf_value, # comes from get_solr_params
fq=fq,
sfield="geoCode",
pt=pt,
score=False,
d="48.00", wt="json")
predicted = list(map(lambda res: res["id"], resp.results))
# return predicted doc ids, along with relevant ones (for IR metric)
return predicted, list(map(int, search["relevant"].split(",")))
Explanation: Load Relevancy Judgements File
End of explanation
def apk(actual, predicted, k=10):
Computes the average precision at k.
This function computes the average precision at k between two lists of
items.
Parameters
----------
actual : set
A set of elements that are to be predicted (order doesn't matter)
predicted : list
A list of predicted elements (order does matter)
k : int, optional
The maximum number of predicted elements
Returns
-------
score : double
The average precision at k over the input lists
if len(predicted)>k:
predicted = predicted[:k]
score = 0.0
num_hits = 0.0
for i,p in enumerate(predicted):
if p in actual and p not in predicted[:i]:
num_hits += 1.0
score += num_hits / (i+1.0)
if not actual:
return 0.0
return score / min(len(actual), k)
def mean_average_precision_at_k(actual, predicted, k=10):
Computes the mean average precision at k.
This function computes the mean average precision at k between two lists
of lists of items.
Parameters
----------
actual : list
A list of sets of elements that are to be predicted
(order doesn't matter in the lists)
predicted : list
A list of lists of predicted elements
(order matters in the lists)
k : int, optional
The maximum number of predicted elements
Returns
-------
score : double
The mean average precision at k over the input lists
return np.mean([apk(a,p,k) for a,p in zip(actual, predicted)])
def average_ndcg_at_k(actual, predicted, k, method=0):
vals = [ ndcg_at_k(act, pred, k, method) for act, pred in zip(actual, predicted)]
return np.mean(vals)
def ndcg_at_k(actual, predicted, k, method=0):
# convert to ratings - actual relevant results give rating of 10, vs 1 for the rest
act_hash = set(actual)
best_ratings = [ 10 for docid in actual ] + [1 for i in range(0, len(predicted) - len(actual))]
pred_ratings = [ 10 if docid in act_hash else 1 for docid in predicted ]
dcg_max = dcg_at_k(best_ratings, k, method)
if not dcg_max:
return 0.0
dcg = dcg_at_k(pred_ratings, k, method)
return dcg / dcg_max
def dcg_at_k(r, k, method=0):
"""
Code taken from: https://gist.github.com/bwhite/3726239
Score is discounted cumulative gain (dcg)
Relevance is positive real values. Can use binary
as the previous methods.
Example from
http://www.stanford.edu/class/cs276/handouts/EvaluationNew-handout-6-per.pdf
>>> r = [3, 2, 3, 0, 0, 1, 2, 2, 3, 0]
>>> dcg_at_k(r, 1)
3.0
>>> dcg_at_k(r, 1, method=1)
3.0
>>> dcg_at_k(r, 2)
5.0
>>> dcg_at_k(r, 2, method=1)
4.2618595071429155
>>> dcg_at_k(r, 10)
9.6051177391888114
>>> dcg_at_k(r, 11)
9.6051177391888114
Args:
r: Relevance scores (list or numpy) in rank order
(first element is the first item)
k: Number of results to consider
method: If 0 then weights are [1.0, 1.0, 0.6309, 0.5, 0.4307, ...]
If 1 then weights are [1.0, 0.6309, 0.5, 0.4307, ...]
Returns:
Discounted cumulative gain
"""
r = np.asfarray(r)[:k]
if r.size:
if method == 0:
return r[0] + np.sum(r[1:] / np.log2(np.arange(2, r.size + 1)))
elif method == 1:
return np.sum(r / np.log2(np.arange(2, r.size + 2)))
else:
raise ValueError('method must be 0 or 1.')
return 0.
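# Quick sanity check of the metric helpers above on toy doc ids
# (made-up numbers, not results from the jobs index):
toy_actual = [1, 3, 5]
toy_predicted = [1, 2, 3, 4, 5]
print(apk(toy_actual, toy_predicted, k=5))
print(mean_average_precision_at_k([toy_actual], [toy_predicted], k=5))
print(ndcg_at_k(toy_actual, toy_predicted, k=5))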
# Measure results for one set of qf settings
score = objective([3,1.5,1.1])
score # Score is negative, as skopt tries to minimize the function output
Explanation: IR Metrics
End of explanation
# Function takes a list of 12 real numbers, and returns a set of solr configuration options
def get_solr_params(params):
return {"qf" : "employer^{0} jobTitle^{1} jobskills^{2}".format(*params[0:3])
#"pf2" : "employer^{0} jobTitle^{1} jobSkills^{2}".format(*params[3:6]),
#"pf" : "employer^{0} jobTitle^{1} jobSkills^{2}".format(*params[6:9])
}
# split into training and test sets of queries
sids = list(searches.keys())
cutoff = int(0.75* len(sids))
train_sids, test_sids = sids[:cutoff], sids[cutoff:]
train_sids, test_sids
# Precision cut off
PREC_AT = 20
# Black box objective function to minimize
# This is for the training data
def objective(params):
# map list of numbers into solr parameters (just qf in this case)
additional_params = get_solr_params(params)
predicted, actual =[],[]
for sid in train_sids:
pred, act = get_results_for_search(sid, additional_params["qf"], PREC_AT)
predicted.append(pred)
actual.append(act)
# Compute Mean average precision at 20
return -1.0 * mean_average_precision_at_k(actual, predicted, PREC_AT)
# Can also use NDCG - the version above is tailored for binary judgements
#return -1.0 * average_ndcg_at_k(actual, predicted, PREC_AT)
# This is for the test data (held out dataset)
def evaluate(params):
# map list of numbers into solr parameters (just qf in this case)
additional_params = get_solr_params(params)
predicted, actual =[],[]
for sid in test_sids:
pred, act = get_results_for_search(sid, additional_params["qf"], PREC_AT)
predicted.append(pred)
actual.append(act)
# Compute Mean average precision at 20
return -1.0 * mean_average_precision_at_k(actual, predicted, PREC_AT)
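# Illustration only (arbitrary weights): each parameter vector is turned into a
# Solr qf string by get_solr_params() before being evaluated, e.g.
print(get_solr_params([3.0, 2.5, 1.5])["qf"])
# -> employer^3.0 jobTitle^2.5 jobskills^1.5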
Explanation: Black Box Optimization
End of explanation
# Example of how black box function is called to measure value of parameters (qf settings in this case)
score = objective([3, 2.5, 1.5])
# Score is negative as -1 * (IR metric), and the skopt library tries to find the parameters to minimize the score
score
# simple call back function to print progress while optimizing
def callback(res):
call_no = len(res.func_vals)
current_fun = res.func_vals[-1]
print(str(call_no).ljust(5) + "\t" +
str(-1.0 * current_fun).ljust(20) + "\t" + str([round(d, 3) for d in res.x_iters[-1]]))
Explanation: Run Optimizer Algorithm
End of explanation
from skopt import gbrt_minimize
import datetime
ITERATIONS = 100 # probably want this to be high, 500 calls or more, set to a small value greater than 10 to test it is working
min_val, max_val = 0.0, 50.0
# min and max for each possible qf value (we read 3 in get_solr_params currently)
space = [(min_val, max_val) for i in range(3)]
start = datetime.datetime.now()
print "Starting at ", start
print "Run","\t", "Current MAP", "\t\t", "Parameters"
# run optimizer, which will try to minimize the objective function
res = gbrt_minimize(objective, # the function to minimize
space, # the bounds on each dimension of x
acq="LCB", # controls how it searches for parameters
n_calls=ITERATIONS,# the number of evaluations of f including at x0
random_state=777, # set to a fixed number if you want this to be deterministic
n_jobs=-1, # how many threads (or really python processes due to GIL)
callback=callback)
end = datetime.datetime.now()
Explanation: The code below runs the sci-kit optimization library and tries to find the set of parameters that minimize the objective function above. We are choosing to map the parameter values to qf values (field boosts), but you can in theory try any configuration setting here that you can test in this way. Some settings, such as changing the config files themselves can be accomplished with a core reload, or in some cases a server restart. Note however that you need the algorithm to run for quite a few iterations to learn effectively from your data, and for some problems, it may not be able to find a near optimal solution.
End of explanation
# res.fun - function IR metric score (* -1), res.x - the best performing parameters
test_score = evaluate(res.x)
test_score
Explanation: The evaluate function below is the same as the objective function, except it tests our newly optimized set of parameters on a different set of queries. This gives a more accurate measure of the performance of the new settings on data points and queries that were not in the training dataset.
End of explanation
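# Besides the single best point, it can help to peek at the top few evaluations
# (res.func_vals and res.x_iters are the same arrays used by the callback above):
for fun_val, params in sorted(zip(res.func_vals, res.x_iters))[:5]:
    print(round(-1.0 * fun_val, 4), [round(p, 3) for p in params])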
print("IR Metric @" + str(PREC_AT) + " Training Data = " + str(-1 * res.fun))
print("IR Metric @" + str(PREC_AT) + " Test Data = " + str(-1 * test_score))
print("\nParameters:\n\t"),
print get_solr_params(res.x)["qf"]
print "\ngbrt_minimize took", (end - start).total_seconds(), "secs"
Explanation: The results on the training queries here are much higher than on the held-out test queries. This is typical for a lot of machine learning / optimization problems. If tuning an existing Solr installation, you will want to ensure that the IR metric score on the test set is better than with the current production settings before releasing to production.
End of explanation |
11,857 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Halite is an online multiplayer game created by Two Sigma. In the game, four participants command ships to collect an energy source called halite. The player with the most halite at the end of the game wins.
In this tutorial, as part of the Halite competition, you'll write your own intelligent bots to play the game.
Note that the Halite competition is now closed, so we are no longer accepting submissions. That said, you can still use the competition to write your own bots - you just cannot submit bots to the official leaderboard. To see the current list of open competitions, check out the simulations homepage
Step1: The game is played in a 21 by 21 gridworld and lasts 400 timesteps. Each player starts the game with 5,000 halite and one ship.
Grid locations with halite are indicated by a light blue icon, where larger icons indicate more available halite.
<center>
<img src="https
Step2: The line %%writefile submission.py saves the agent to a Python file. Note that all of the code above has to be copied and run in a single cell (please do not split the code into multiple cells).
If the code cell runs successfully, then you'll see a message Writing submission.py (or Overwriting submission.py, if you run it more than once).
Then, copy and run the next code cell in your notebook to play your agent against three random agents. Your agent is in the top left corner of the screen. | Python Code:
#$HIDE_INPUT$
from kaggle_environments import make, evaluate
env = make("halite", debug=True)
env.run(["random", "random", "random", "random"])
env.render(mode="ipython", width=800, height=600)
Explanation: Halite is an online multiplayer game created by Two Sigma. In the game, four participants command ships to collect an energy source called halite. The player with the most halite at the end of the game wins.
In this tutorial, as part of the Halite competition, you'll write your own intelligent bots to play the game.
Note that the Halite competition is now closed, so we are no longer accepting submissions. That said, you can still use the competition to write your own bots - you just cannot submit bots to the official leaderboard. To see the current list of open competitions, check out the simulations homepage: https://www.kaggle.com/simulations.
Part 1: Get started
In this section, you'll learn more about how to play the game.
Game rules
In this section, we'll look more closely at the game rules and explore the different icons on the game board.
For context, we'll look at a game played by four random players. You can use the animation below to view the game in detail: every move is captured and can be replayed.
End of explanation
%%writefile submission.py
# Imports helper functions
from kaggle_environments.envs.halite.helpers import *
# Returns best direction to move from one position (fromPos) to another (toPos)
# Example: If I'm at pos 0 and want to get to pos 55, which direction should I choose?
def getDirTo(fromPos, toPos, size):
fromX, fromY = divmod(fromPos[0],size), divmod(fromPos[1],size)
toX, toY = divmod(toPos[0],size), divmod(toPos[1],size)
if fromY < toY: return ShipAction.NORTH
if fromY > toY: return ShipAction.SOUTH
if fromX < toX: return ShipAction.EAST
if fromX > toX: return ShipAction.WEST
# Directions a ship can move
directions = [ShipAction.NORTH, ShipAction.EAST, ShipAction.SOUTH, ShipAction.WEST]
# Will keep track of whether a ship is collecting halite or carrying cargo to a shipyard
ship_states = {}
# Returns the commands we send to our ships and shipyards
def agent(obs, config):
size = config.size
board = Board(obs, config)
me = board.current_player
# If there are no ships, use first shipyard to spawn a ship.
if len(me.ships) == 0 and len(me.shipyards) > 0:
me.shipyards[0].next_action = ShipyardAction.SPAWN
# If there are no shipyards, convert first ship into shipyard.
if len(me.shipyards) == 0 and len(me.ships) > 0:
me.ships[0].next_action = ShipAction.CONVERT
for ship in me.ships:
if ship.next_action == None:
### Part 1: Set the ship's state
if ship.halite < 200: # If cargo is too low, collect halite
ship_states[ship.id] = "COLLECT"
if ship.halite > 500: # If cargo gets very big, deposit halite
ship_states[ship.id] = "DEPOSIT"
### Part 2: Use the ship's state to select an action
if ship_states[ship.id] == "COLLECT":
# If halite at current location running low,
# move to the adjacent square containing the most halite
if ship.cell.halite < 100:
neighbors = [ship.cell.north.halite, ship.cell.east.halite,
ship.cell.south.halite, ship.cell.west.halite]
best = max(range(len(neighbors)), key=neighbors.__getitem__)
ship.next_action = directions[best]
if ship_states[ship.id] == "DEPOSIT":
# Move towards shipyard to deposit cargo
direction = getDirTo(ship.position, me.shipyards[0].position, size)
if direction: ship.next_action = direction
return me.next_actions
Explanation: The game is played in a 21 by 21 gridworld and lasts 400 timesteps. Each player starts the game with 5,000 halite and one ship.
Grid locations with halite are indicated by a light blue icon, where larger icons indicate more available halite.
<center>
<img src="https://i.imgur.com/3NENMos.png" width=65%><br/>
</center>
Players use ships to navigate the world and collect halite. A ship can only collect halite from its current position. When a ship decides to collect halite, it collects 25% of the halite available in its cell. This collected halite is added to the ship's "cargo".
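As a toy illustration of that collection rule (made-up numbers):
cell_halite = 400
collected = 0.25 * cell_halite   # 100 halite moves into the ship's cargo
cell_halite -= collected         # 300 halite remains in the cell for later turns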
<center>
<img src="https://i.imgur.com/eKN0kP3.png" width=65%><br/>
</center>
Halite in ship cargo is not counted towards final scores. In order for halite to be counted, ships need to deposit their cargo into a shipyard of the same color. A ship can deposit all of its cargo in a single timestep simply by navigating to a cell containing a shipyard.
<center>
<img src="https://i.imgur.com/LAc6fj8.png" width=65%><br/>
</center>
Players start the game with no shipyards. To get a shipyard, a player must convert a ship into a shipyard, which costs 500 halite. Also, shipyards can spawn (or create) new ships, which deducts 500 halite (per ship) from the player.
Two ships cannot successfully inhabit the same cell. This event results in a collision, where:
- the ship with more halite in its cargo is destroyed, and
- the other ship survives and instantly collects the destroyed ship's cargo.
<center>
<img src="https://i.imgur.com/BuIUPmK.png" width=65%><br/>
</center>
If you view the full game rules, you'll notice that there are more types of collisions that can occur in the game (for instance, ships can collide with enemy shipyards, which destroys the ship, the ship's cargo, and the enemy shipyard).
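As a minimal sketch of the two-ship collision rule above (a hypothetical helper, not part of the kaggle_environments API; the equal-cargo case is only covered in the full game rules):
def resolve_collision(cargo_a, cargo_b):
    # The ship carrying MORE halite is destroyed; the survivor absorbs its cargo.
    if cargo_a == cargo_b:
        return None, 0   # tie: see the full rules for the official behaviour
    survivor = "A" if cargo_a < cargo_b else "B"
    return survivor, cargo_a + cargo_b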
In general, Halite is a complex game, and we have not covered all of the details here. But even given these simplified rules, you can imagine that a successful player will have to use a relatively complicated strategy.
Game strategy
As mentioned above, a ship has two options at its disposal for collecting halite. It can:
- collect (or mine) halite from its current position.
- collide with an enemy ship containing relatively more halite in its cargo. In this case, the ship destroys the enemy ship and steals its cargo.
Both are illustrated in the figure below. The "cargo" that is tracked in the player's scoreboard contains the total cargo, summed over all of the player's ships.
<center>
<img src="https://i.imgur.com/2DJX6Vt.png" width=75%><br/>
</center>
This raises some questions that you'll have to answer when commanding ships:
- Will your ships focus primarily on locating large halite reserves and mining them efficiently, while mostly ignoring and evading the other players?
- Or, will you look for opportunities to steal halite from other players?
- Alternatively, can you use a combination of those two strategies? If so, what cues will you look for in the game to decide which option is best? For instance, if all enemy ships are far away and your ships are located on cells containing a lot of halite, it makes sense to focus on mining halite. Conversely, if there are many ships nearby with halite to steal (and not too much local halite to collect), it makes sense to attack the enemy ships.
You'll also have to decide how to control your shipyards, and how your ships interact with shipyards. There are three primary actions in the game involving shipyards. You can:
- convert a ship into a shipyard. This is the only way to create a shipyard.
- use a shipyard to create a ship.
- deposit a ship's cargo into a shipyard.
These are illustrated in the image below.
<center>
<img src="https://i.imgur.com/fL5atut.png" width=75%><br/>
</center>
With more ships and shipyards, you can collect halite at a faster rate. But each additional ship and shipyard costs you halite: how will you decide when it might be beneficial to create more?
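A hedged sketch of how one such decision could be coded (the thresholds below are arbitrary assumptions, not tuned values):
def should_spawn_ship(player_halite, n_ships, step, max_steps=400, spawn_cost=500):
    # Spawn while it is affordable and there is enough game left for a new ship
    # to pay back its cost; cap the fleet size to keep the example simple.
    return (player_halite >= 2 * spawn_cost
            and step < 0.75 * max_steps
            and n_ships < 10)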
Part 2: Your first bot
In this section, you'll create your first bot to play the game.
The notebook
The first thing to do is to create a Kaggle notebook where you'll store all of your code.
Begin by navigating to https://www.kaggle.com/notebooks and clicking on "New Notebook".
Next, click on "Create". (Don't change the default settings: so, "Python" should appear under "Select language", and you should have "Notebook" selected under "Select type".)
You now have a notebook where you'll develop your first agent! If you're not sure how to use Kaggle Notebooks, we strongly recommend that you walk through this notebook before proceeding. It teaches you how to run code in the notebook.
Your first agent
It's time to create your first agent! Copy and paste the code in the cell below into your notebook. Then, run the code.
End of explanation
from kaggle_environments import make
env = make("halite", debug=True)
env.run(["submission.py", "random", "random", "random"])
env.render(mode="ipython", width=800, height=600)
Explanation: The line %%writefile submission.py saves the agent to a Python file. Note that all of the code above has to be copied and run in a single cell (please do not split the code into multiple cells).
If the code cell runs successfully, then you'll see a message Writing submission.py (or Overwriting submission.py, if you run it more than once).
Then, copy and run the next code cell in your notebook to play your agent against three random agents. Your agent is in the top left corner of the screen.
End of explanation |
11,858 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<img src="../../images/qiskit-heading.gif" alt="Note
Step1: Lets start by first making two circuits
Step2: The above shows the OpenQASM for both circuits. These can be converted to qobj to run on local simulator backend.
Step3: If you want more information about the circuits to be run, you can set verbose=True
Step4: To get the configuration of a circuit, use
get_compiled_configuration(qobj, 'circuit')
Step5: To get the compiled qasm, use
get_compiled_qasm(qobj,'circuit')
Step6: If we need to change the cx gates so that they work on a device with a restricted coupling graph, we can use the coupling map in the compile command. Here we assume that the device only supports two-qubit gates, with qubit 0 being the control.
Step9: The above circuit, which used three cx gates originally, has a total of five now.
QFT
Here we provide another example, which is the Quantum Fourier transform. These can be loaded directly by using
import qiskit.tools.qi as qi
Step10: Start by creating a quantum circuit on three qubits that prepares an input state, does the QFT, and measures each qubit. The input state is chosen so that the ideal measurement outcome after the QFT is "001". The OpenQASM output is expressed in terms of Hadamard (h), u1(theta)
Step11: If we execute this circuit on the local simulator, we indeed see that the outcome is always "001".
Step12: After calling execute, we can request the "compiled" OpenQASM that was sent to the local simulator. The default behavior is that the circuit is not changed. Looking at the output below, you can see that each gate is expanded according to its definition into gates u1, u2, u3, and cx. There are no further simplifications. For example, the first three gates on q[2] could be combined into a single gate, but they are not.
Step13: Now we will allow QISKit to rewrite the circuit for us. The ibmqx2 backend has subsets of three fully connected qubits. We will get the best results if we use one of these, since there won't be any need to swap.
To get QISKit to rewrite the circuit in this way, we need to provide the "coupling map" and an initial layout. The coupling map below has entries such as "0
Step14: We can see that the chosen layout is the layout we requested. The number of CNOT gates was unchanged, but several single-qubit gates were eliminated. We can confirm this by looking at the "compiled" OpenQASM. Notice that the "cx q[2], q[1];" gate was mapped to "cx q[3], q[4];" instead of "cx q[4], q[3];" because the latter is not in the coupling map. Hadamard gates were inserted to exchange the control and target, and the resulting single-qubit gates were simplified.
Step15: Finally, let's lay out the qubits onto a segment of the ibmqx3 16-qubit device.
Step16: Because the qubits are now on a line, a swap gate is needed to interact the qubits at the endpoints of the line. As you can see, the number of cx gates increases, as does the circuit depth. We can look at the "compiled" OpenQASM to see the additional swap. | Python Code:
# Import the QuantumProgram and our configuration
import math
from pprint import pprint
from qiskit import QuantumProgram
import Qconfig
Explanation: <img src="../../images/qiskit-heading.gif" alt="Note: In order for images to show up in this jupyter notebook you need to select File => Trusted Notebook" width="500 px" align="left">
Compiling and running a quantum program
The latest version of this notebook is available on https://github.com/QISKit/qiskit-tutorial.
Contributors
Andrew Cross and Jay Gambetta
The qubits in the QX devices are arranged in a plane and connected to their neighbors. Because each qubit is not connected to all the others, some circuits cannot execute without rewriting them to use the available interactions. A standard way to do this is to insert "swap" gates, which exchange the states of pairs of qubits, to move distant qubits near one another. QISKit includes methods to do this for you.
Circuit rewriting occurs in QISKit whenever you specify a "coupling map", but by default your circuits are not changed. The coupling map is a Python dictionary whose keys are qubits that can be used as controls, and whose values are lists of possible targets for CNOT gates. In other words, the coupling map represents the qubit layout as an adjacency list for a directed graph.
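For example, a hypothetical three-qubit device laid out on a line, where qubit 0 can act as the control of a CNOT onto qubit 1 and qubit 1 onto qubit 2, would be written as the plain dictionary below (an illustration only, not one of the real IBM device maps used later).
coupling_map_line = {0: [1], 1: [2]}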
The compile() method of QuantumProgram currently applies a fixed sequence of passes:
swap_mapper: uses a greedy randomized algorithm to find a swap circuit for each layer of the input circuit
direction_mapper: changes the direction of CNOT gates as needed
cx_cancellation: simplifies adjacent pairs of CNOT gates
optimize_1q_gates: replaces sequences of single-qubit gates by their compositions
Here is an example of this process showing the tools we have provided; we then give a worked example using the quantum Fourier transform (QFT).
End of explanation
qp = QuantumProgram()
# quantum register for the first circuit
q1 = qp.create_quantum_register('q1', 4)
c1 = qp.create_classical_register('c1', 4)
# quantum register for the second circuit
q2 = qp.create_quantum_register('q2', 2)
c2 = qp.create_classical_register('c2', 2)
# making the first circuits
qc1 = qp.create_circuit('GHZ', [q1], [c1])
qc2 = qp.create_circuit('superposition', [q2], [c2])
qc1.h(q1[0])
qc1.cx(q1[0], q1[1])
qc1.cx(q1[1], q1[2])
qc1.cx(q1[2], q1[3])
for i in range(4):
qc1.measure(q1[i], c1[i])
# making the second circuits
qc2.h(q2)
for i in range(2):
qc2.measure(q2[i], c2[i])
# printing the circuits
print(qp.get_qasm('GHZ'))
print(qp.get_qasm('superposition'))
Explanation: Let's start by first making two circuits:
* a GHZ state on four qubits
* a superposition on two qubits
End of explanation
qobj = qp.compile(['GHZ','superposition'], backend='local_qasm_simulator')
qp.get_execution_list(qobj)
Explanation: The above shows the OpenQASM for both circuits. These can be converted to qobj to run on local simulator backend.
End of explanation
qp.get_execution_list(qobj, verbose=True)
Explanation: If you want more information about the circuits to be run, you can set verbose=True
End of explanation
qp.get_compiled_configuration(qobj, 'GHZ', )
Explanation: To get the configuration of a circuit, use
get_compiled_configuration(qobj, 'circuit')
End of explanation
print(qp.get_compiled_qasm(qobj, 'GHZ'))
Explanation: To get the compiled qasm, use
get_compiled_qasm(qobj,'circuit')
End of explanation
# Coupling map
coupling_map = {0: [1, 2, 3]}
# Place the qubits on a triangle in the bow-tie
initial_layout={("q1", 0): ("q", 0), ("q1", 1): ("q", 1), ("q1", 2): ("q", 2), ("q1", 3): ("q", 3)}
qobj = qp.compile(['GHZ'], backend='local_qasm_simulator', coupling_map=coupling_map, initial_layout=initial_layout)
print(qp.get_compiled_qasm(qobj,'GHZ'))
Explanation: If we need to change the cx gates so that they work on a device with a restricted coupling graph, we can use the coupling map in the compile command. Here we assume that the device only supports two-qubit gates, with qubit 0 being the control.
End of explanation
# Define methods for making QFT circuits
def input_state(circ, q, n):
"""n-qubit input state for QFT that produces output 1."""
for j in range(n):
circ.h(q[j])
circ.u1(math.pi/float(2**(j)), q[j]).inverse()
def qft(circ, q, n):
"""n-qubit QFT on q in circ."""
for j in range(n):
for k in range(j):
circ.cu1(math.pi/float(2**(j-k)), q[j], q[k])
circ.h(q[j])
Explanation: The above circuit, which used three cx gates originally, has a total of five now.
QFT
Here we provide another example, which is the Quantum Fourier transform. These can be loaded directly by using
import qiskit.tools.qi as qi
End of explanation
qp = QuantumProgram()
q = qp.create_quantum_register("q", 3)
c = qp.create_classical_register("c", 3)
qft3 = qp.create_circuit("qft3", [q], [c])
input_state(qft3, q, 3)
qft(qft3, q, 3)
for i in range(3):
qft3.measure(q[i], c[i])
print(qft3.qasm())
Explanation: Start by creating a quantum circuit on three qubits that prepares an input state, does the QFT, and measures each qubit. The input state is chosen so that the ideal measurement outcome after the QFT is "001". The OpenQASM output is expressed in terms of Hadamard (h), u1(theta):=diag(1,$e^{i\theta}$), and controlled-u1 (cu1) gates.
End of explanation
result = qp.execute(["qft3"], backend="local_qasm_simulator", shots=1024)
result.get_counts("qft3")
Explanation: If we execute this circuit on the local simulator, we indeed see that the outcome is always "001".
End of explanation
print(result.get_ran_qasm("qft3"))
Explanation: After calling execute, we can request the "compiled" OpenQASM that was sent to the local simulator. The default behavior is that the circuit is not changed. Looking at the output below, you can see that each gate is expanded according to its definition into gates u1, u2, u3, and cx. There are no further simplifications. For example, the first three gates on q[2] could be combined into a single gate, but they are not.
End of explanation
# Coupling map for ibmqx2 "bowtie"
coupling_map = {0: [1, 2],
1: [2],
2: [],
3: [2, 4],
4: [2]}
# Place the qubits on a triangle in the bow-tie
initial_layout={("q", 0): ("q", 2), ("q", 1): ("q", 3), ("q", 2): ("q", 4)}
result2 = qp.execute(["qft3"], backend="local_qasm_simulator", coupling_map=coupling_map, initial_layout=initial_layout)
result2.get_counts("qft3")
Explanation: Now we will allow QISKit to rewrite the circuit for us. The ibmqx2 backend has subsets of three fully connected qubits. We will get the best results if we use one of these, since there won't be any need to swap.
To get QISKit to rewrite the circuit in this way, we need to provide the "coupling map" and an initial layout. The coupling map below has entries such as "0: [1, 2]". This means that it is valid to apply a CNOT gate from q[0] to q[1], and from q[0] to q[2] (where q[0] is the control qubit). The initial layout has entries like "("q", 0): ("q", 2)", which means that we should place q[0] from our input circuit at qubit q[2] on the device. Our choice places the qubits of the QFT circuit onto one of the triangles in the coupling graph.
QISKit will only attempt to rewrite the circuit if coupling_map is not None. The initial_layout is always optional. If one is not given, QISKit will lay out the qubits somewhat arbitrarily, and attempt to adjust the layout so the first layer of gates does not require swapping. Note that the mapper will currently fail and raise an exception if the graph induced by the layout is not connected.
We will run on the local simulator for convenience, but you can change the backend to "ibmqx2" to select the real device.
End of explanation
print(result2.get_ran_qasm("qft3"))
Explanation: We can see that the chosen layout is the layout we requested. The number of CNOT gates was unchanged, but several single-qubit gates were eliminated. We can confirm this by looking at the "compiled" OpenQASM. Notice that the "cx q[2], q[1];" gate was mapped to "cx q[3], q[4];" instead of "cx q[4], q[3];" because the latter is not in the coupling map. Hadamard gates were inserted to exchange the control and target, and the resulting single-qubit gates were simplified.
End of explanation
# Place the qubits on a linear segment of the ibmqx3
coupling_map = {0: [1], 1: [2], 2: [3], 3: [14], 4: [3, 5], 6: [7, 11], 7: [10], 8: [7], 9: [8, 10], 11: [10], 12: [5, 11, 13], 13: [4, 14], 15: [0, 14]}
initial_layout={("q", 0): ("q", 0), ("q", 1): ("q", 1), ("q", 2): ("q", 2)}
result3 = qp.execute(["qft3"], backend="local_qasm_simulator", coupling_map=coupling_map, initial_layout=initial_layout)
result3.get_counts("qft3")
Explanation: Finally, let's lay out the qubits onto a segment of the ibmqx3 16-qubit device.
End of explanation
print(result3.get_ran_qasm("qft3"))
Explanation: Because the qubits are now on a line, a swap gate is needed to interact the qubits at the endpoints of the line. As you can see, the number of cx gates increases, as does the circuit depth. We can look at the "compiled" OpenQASM to see the additional swap.
End of explanation |
11,859 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Iris dataset (Clustering)
Authors
Written by
Step1: Reading Data
Step2: Separate Data
Step3: Scatter Plot Matrix
Step4: K Means (3 clusters) | Python Code:
#Libraries and Imports
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn import preprocessing
Explanation: Iris dataset (Clustering)
Authors
Written by: Neeraj Asthana (under Professor Robert Brunner)
University of Illinois at Urbana-Champaign
Summer 2016
Acknowledgements
Dataset found on UCI Machine Learning repository at: https://archive.ics.uci.edu/ml/datasets/Iris
Dataset Information
This data set tries to cluster iris species using 4 different continuous predictors.
A description of the dataset can be found at: https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.names
Predictors:
sepal length in cm
sepal width in cm
petal length in cm
petal width in cm
Imports
End of explanation
#Names of all of the columns
names = [
'sep_length'
, 'sep_width'
, 'petal_length'
, 'petal_width'
, 'species'
]
#Import dataset
data = pd.read_csv('iris.data', sep = ',', header = None, names = names)
data.head(10)
data.shape
Explanation: Reading Data
End of explanation
#Select Predictor columns
X = data.ix[:,:-1]
#Scale X so that all columns have the same mean and variance
X_scaled = preprocessing.scale(X)
#Select target column
y = data['species']
y.value_counts()
Explanation: Separate Data
End of explanation
# Visualize dataset with scatterplot matrix
%matplotlib inline
g = sns.PairGrid(data, hue="species")
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter)
Explanation: Scatter Plot Matrix
End of explanation
# Fit a k-means clustering model with 3 clusters
fit = KMeans(n_clusters=3).fit(X_scaled)
fit.labels_
#remake labels so that they properly matchup with the classes
labels = fit.labels_[:]
for index,val in enumerate(labels):
if val == 1:
labels[index] = 1
elif val == 2:
labels[index] = 3
else:
labels[index] = 2
labels
conf_mat = np.zeros((3,3))
true = np.array([0]*50 + [1]*50 + [2]*50)
for i,val in enumerate(true):
conf_mat[val,labels[i]-1] += 1
#true vs. predicted
print(pd.DataFrame(conf_mat))
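# Overall agreement between the k-means labels and the true species,
# read off the confusion matrix above (diagonal / total):
accuracy = np.trace(conf_mat) / conf_mat.sum()
print(accuracy)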
Explanation: K Means (3 clusters)
End of explanation |
11,860 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Import librairies
Step1: Workspace
Define workspace variable
Step2: After defining the workspace, every defined variables will be automatically updated to the workspace
Define some variables | Python Code:
import numpy as np
from ipywksp import workspace
import pandas as pd
Explanation: Import librairies
End of explanation
workspace(theme='light') # Define a workspace variable
Explanation: Workspace
Define workspace variable
End of explanation
# Define integers and float numbers
a1 = 1
a2 = 54
e = 47.5
Z = 48.025
# Define list and set :
li = [1,2,4,8,7,9,11]
lit = [1.0, 'ok', [14, 17]]
lset = set(li)
# Define matrix and array :
mama = np.random.rand(3, 2)
papa = np.matrix(mama)
brother = np.ravel(papa)
# Define dictionnary, dataframe and series :
dico = {'mama':[45, 32, 45], 'papa':[78, 74, 45], 'son':[7,7,9]}
pdDico = pd.DataFrame(dico)
ser = pdDico['son']
Explanation: After defining the workspace, every variable you define will be automatically added to the workspace.
Define some variables
End of explanation |
11,861 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Landice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Ice Albedo
Is Required
Step7: 1.4. Atmospheric Coupling Variables
Is Required
Step8: 1.5. Oceanic Coupling Variables
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required
Step11: 2.2. Code Version
Is Required
Step12: 2.3. Code Languages
Is Required
Step13: 3. Grid
Land ice grid
3.1. Overview
Is Required
Step14: 3.2. Adaptive Grid
Is Required
Step15: 3.3. Base Resolution
Is Required
Step16: 3.4. Resolution Limit
Is Required
Step17: 3.5. Projection
Is Required
Step18: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required
Step19: 4.2. Description
Is Required
Step20: 4.3. Dynamic Areal Extent
Is Required
Step21: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required
Step22: 5.2. Grounding Line Method
Is Required
Step23: 5.3. Ice Sheet
Is Required
Step24: 5.4. Ice Shelf
Is Required
Step25: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required
Step26: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required
Step27: 7.2. Ocean
Is Required
Step28: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required
Step29: 8.2. Melting
Is Required
Step30: 9. Ice --> Dynamics
**
9.1. Description
Is Required
Step31: 9.2. Approximation
Is Required
Step32: 9.3. Adaptive Timestep
Is Required
Step33: 9.4. Timestep
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'landice')
Explanation: ES-DOC CMIP6 Model Properties - Landice
MIP Era: CMIP6
Institute: DWD
Source ID: MPI-ESM-1-2-HR
Topic: Landice
Sub-Topics: Glaciers, Ice.
Properties: 30 (21 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Grid
4. Glaciers
5. Ice
6. Ice --> Mass Balance
7. Ice --> Mass Balance --> Basal
8. Ice --> Mass Balance --> Frontal
9. Ice --> Dynamics
1. Key Properties
Land ice key properties
1.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Ice Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify how ice albedo is modelled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Atmospheric Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the atmosphere and ice (e.g. orography, ice mass)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Oceanic Coupling Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
Which variables are passed between the ocean and ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which variables are prognostically calculated in the ice model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of land ice code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Grid
Land ice grid
3.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is an adative grid being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Base Resolution
Is Required: TRUE Type: FLOAT Cardinality: 1.1
The base resolution (in metres), before any adaption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Resolution Limit
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If an adaptive grid is being used, what is the limit of the resolution (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.5. Projection
Is Required: TRUE Type: STRING Cardinality: 1.1
The projection of the land ice grid (e.g. albers_equal_area)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Glaciers
Land ice glaciers
4.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of glaciers in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of glaciers, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 4.3. Dynamic Areal Extent
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does the model include a dynamic glacial extent?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Ice
Ice sheet and ice shelf
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the ice sheet and ice shelf in the land ice scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.2. Grounding Line Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the technique used for modelling the grounding line in the ice sheet-ice shelf coupling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.3. Ice Sheet
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice sheets simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Ice Shelf
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are ice shelves simulated?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Ice --> Mass Balance
Description of the surface mass balance treatment
6.1. Surface Mass Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how and where the surface mass balance (SMB) is calculated. Include the temporal coupling frequency from the atmosphere, whether or not a separate SMB model is used, and if so details of this model, such as its resolution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Ice --> Mass Balance --> Basal
Description of basal melting
7.1. Bedrock
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over bedrock
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Ocean
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of basal melting over the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Ice --> Mass Balance --> Frontal
Description of calving/melting from the ice shelf front
8.1. Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of calving from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Melting
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the implementation of melting from the front of the ice shelf
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Ice --> Dynamics
**
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description if ice sheet and ice shelf dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Approximation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Approximation type used in modelling ice dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.3. Adaptive Timestep
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there an adaptive time scheme for the ice scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep (in seconds) of the ice scheme. If the timestep is adaptive, then state a representative timestep.
End of explanation |
11,862 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Selecting bins from time, energy
May 2018
Summer on campus. Entire family swimming in the fountain.
I need to take sums over ranges in time and energy. The challenge is that I can't simply provide an input energy, because that may not correspond to a bin edge. I need to select the energies that correspond to time bin edges.
Step1: Look at $t \rightarrow E$ conversion
Look at detector-FC distance distribution
Right now in the time to energy conversion, all of the detectors are using $d=1$ as their distance from the fission chamber to the detector. How accurate is this?
Step2: Let's look at the distribution of detector distances. This is stored in an excel file meas_info > detector_distances.xlsx.
Step3: Look at a distibution of the detector distances.
Step4: Yikes, so my first observation is that the distances are not centered around 100! They are, in fact, all greater than 100. So I think even just changing the default distance in bicorr.convert_time_to_energy would improve things.
I'm going to use pandas.DataFrame.describe to spit out some metrics
Step5: I don't know where this error is coming from, so I will just continue on...
One nice thing about this distribution is that there are not really any "extremes." The
Look at energy values for 1 m. vs. average (1.05522 m)
Look at 25 ns, which is right in the middle of the neutron distribution.
Step6: So the 5.5% change in distance translates to an 11% change in energy for a 25 ns time of flight.
Step7: Again, an 11% change in energy for a 50 ns time of flight. This is equal to 1.05522^2.
Step8: There we go. So whatever the ratio of distances is, the energy calculations will be off by that amount squared.
This really makes me think we need to go back to the original cced files, calculate the energies for each interaction, and then remake the bicorr_hist_master from that. This will be a lot of work.
Consider distribution of energies | Python Code:
import os
import sys
import matplotlib.pyplot as plt
import numpy as np
import imageio
import pandas as pd
import seaborn as sns
sns.set(style='ticks')
sys.path.append('../scripts/')
import bicorr as bicorr
import bicorr_plot as bicorr_plot
%load_ext autoreload
%autoreload 2
Explanation: Selecting bins from time, energy
May 2018
Summer on campus. Entire family swimming in the fountain.
I need to take sums over ranges in time and energy. The challenge is that I can't simply provide an input energy, because that may not correspond to a bin edge. I need to select the energies that correspond to time bin edges.
End of explanation
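# A hypothetical helper (not part of the bicorr module): given arrays of time
# bin edges and the energies they map to (built at the end of this notebook),
# snap a requested energy range onto energies that line up with time bin edges.
def snap_energy_range(e_lo, e_hi, dt_bin_edges, energy_bin_edges):
    order = np.argsort(energy_bin_edges)          # energy decreases with time, so reorder
    e_sorted = np.asarray(energy_bin_edges)[order]
    t_sorted = np.asarray(dt_bin_edges)[order]
    i_lo = np.clip(np.searchsorted(e_sorted, e_lo, side='left'), 0, len(e_sorted) - 1)
    i_hi = np.clip(np.searchsorted(e_sorted, e_hi, side='right') - 1, 0, len(e_sorted) - 1)
    # snapped energy edges, and the time bin edges they correspond to
    return (e_sorted[i_lo], e_sorted[i_hi]), (t_sorted[i_hi], t_sorted[i_lo])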
help(bicorr.convert_energy_to_time)
Explanation: Look at $t \rightarrow E$ conversion
Look at detector-FC distance distribution
Right now in the time to energy conversion, all of the detectors are using $d=1$ as their distance from the fission chamber to the detector. How accurate is this?
End of explanation
os.listdir('../meas_info/')
det_distance_df = pd.read_excel('../meas_info/detector_distances.xlsx')
det_distance_df.head()
Explanation: Let's look at the distribution of detector distances. This is stored in an excel file meas_info > detector_distances.xlsx.
End of explanation
plt.figure(figsize=(4,3))
plt.hist(det_distance_df['Distance (cm)'])
plt.xlabel('Distance (cm)')
plt.ylabel('Number of detectors')
plt.title('Detector-FC distance distribution')
sns.despine(right=False)
plt.show()
Explanation: Look at a distribution of the detector distances.
End of explanation
det_distance_df.describe()['Distance (cm)']
bicorr_plot.histogram_metrics(np.asarray(det_distance_df['Distance (cm)']),'Distance (cm)','Relative number of detectors')
Explanation: Yikes, so my first observation is that the distances are not centered around 100! They are, in fact, all greater than 100. So I think even just changing the default distance in bicorr.convert_time_to_energy would improve things.
I'm going to use pandas.DataFrame.describe to spit out some metrics: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html
End of explanation
t = 25
print(bicorr.convert_time_to_energy(t,distance=1))
print(bicorr.convert_time_to_energy(t,distance=1.05522))
print(bicorr.convert_time_to_energy(t,distance=1.05522)/bicorr.convert_time_to_energy(t,distance=1))
Explanation: I don't know where this error is coming from, so I will just continue on...
One nice thing about this distribution is that there are not really any "extremes." The
Look at energy values for 1 m. vs. average (1.05522 m)
Look at 25 ns, which is right in the middle of the neutron distribution.
End of explanation
t = 50
print(bicorr.convert_time_to_energy(t,distance=1))
print(bicorr.convert_time_to_energy(t,distance=1.05522))
print(bicorr.convert_time_to_energy(t,distance=1.05522)/bicorr.convert_time_to_energy(t,distance=1))
Explanation: So the 5.5% change in distance translates to an 11% change in energy for a 25 ns time of flight.
End of explanation
1.05522**2
Explanation: Again, an 11% change in energy for a 50 ns time of flight. This is equal to 1.05522^2.
End of explanation
det_distance_df.head()
t = 25
energies = [bicorr.convert_time_to_energy(t,distance=dist) for dist in det_distance_df['Distance (cm)']/100]
bicorr_plot.histogram_metrics(energies, xlabel='Energy (MeV)', ylabel='Counts')
pd.DataFrame([bicorr.convert_time_to_energy(t,distance=dist) for dist in det_distance_df['Distance (cm)']/100]).describe()
t = 50
energies = [bicorr.convert_time_to_energy(t,distance=dist) for dist in det_distance_df['Distance (cm)']/100]
bicorr_plot.histogram_metrics(energies, xlabel='Energy (MeV)', ylabel='Counts')
pd.DataFrame([bicorr.convert_time_to_energy(t,distance=dist) for dist in det_distance_df['Distance (cm)']/100]).describe()
dt_bin_edges = np.arange(0,200,2)
print(dt_bin_edges)
energy_bin_edges = np.asarray(np.insert([bicorr.convert_time_to_energy(t) for t in dt_bin_edges[1:]],0,10000))
print(energy_bin_edges)
plt.figure(figsize=(4,3))
plt.plot(dt_bin_edges,energy_bin_edges,'.-k',linewidth=.5)
plt.axvline(15,color='r')
plt.axvline(150,color='r')
plt.yscale('log')
plt.xlabel('time bin edge (ns)')
plt.ylabel('energy bin edge (MeV)')
sns.despine(right=False)
plt.show()
Explanation: There we go. So whatever the ratio of distances is, the energy calculations will be off by that amount squared.
This really makes me think we need to go back to the original cced files, calculate the energies for each interaction, and then remake the bicorr_hist_master from that. This will be a lot of work.
Consider distribution of energies
End of explanation |
11,863 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2019 The TensorFlow Authors.
Step1: Artistic style transfer with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Download the content image, the style image, and the pre-trained TensorFlow Lite models.
Step3: Preprocess the inputs
The content image and the style image must be RGB images with pixel values as float32 numbers between [0..1].
The style image must be (1, 256, 256, 3). Center-crop the image and resize it.
The content image must be (1, 384, 384, 3). Center-crop the image and resize it.
Step4: Visualize the inputs
Step5: Run style transfer with TensorFlow Lite
Predict the style
Step6: Transform the style
Step7: Blend the styles
コンテンツ画像のスタイルをスタイル化された出力にブレンドさせることができます。こうすると、出力がよりコンテンツ画像のように見えるようになります。 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
print(tf.__version__)
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import time
import functools
Explanation: Artistic style transfer with TensorFlow Lite
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/lite/examples/style_transfer/overview"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/lite/examples/style_transfer/overview.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/lite/examples/style_transfer/overview.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/examples/style_transfer/overview.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
<td>
<a href="https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />See TF Hub model</a>
</td>
</table>
One of the most exciting developments in deep learning to come out recently is artistic style transfer, or the ability to create a new image, known as a pastiche, based on two input images: one representing the artistic style and one representing the content.
Using this technique, we can generate beautiful new artworks in a range of styles.
If you are new to TensorFlow Lite and are working with Android, we recommend exploring the following example application that can help you get started.
<a class="button button-primary" href="https://github.com/tensorflow/examples/tree/master/lite/examples/style_transfer/android">Android example</a>
If you are using a platform other than Android or iOS, or you are already familiar with the <a href="https://www.tensorflow.org/api_docs/python/tf/lite">TensorFlow Lite APIs</a>, you can follow this tutorial to learn how to apply style transfer on any pair of content and style image with a pre-trained TensorFlow Lite model, and then use the model to add style transfer to your own mobile applications.
The model is open-sourced on GitHub. You can retrain the model with different parameters (e.g. increase content layers' weights to make the output image look more like the content image).
Understand the model architecture
This Artistic Style Transfer model consists of two submodels:
Style Prediction Model: A MobilenetV2-based neural network that takes an input style image to a 100-dimension style bottleneck vector.
Style Transform Model: A neural network that applies a style bottleneck vector to a content image and creates a stylized image.
If your app only needs to support a fixed set of style images, you can compute their style bottleneck vectors in advance, and exclude the Style Prediction Model from your app's binary.
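As a rough sketch of that precomputation idea (not part of this tutorial; run_style_predict, preprocess_image and load_img are the helper functions defined further down in this notebook, and the style list and file names here are made up):
import numpy as np
# Hypothetical offline step: compute each supported style's bottleneck once and ship
# only the saved vectors plus the transform model inside the app.
precomputed_styles = {"style23": style_path}  # assumed mapping of style name -> image file
for name, path in precomputed_styles.items():
    bottleneck = run_style_predict(preprocess_image(load_img(path), 256))
    np.save(f"{name}_bottleneck.npy", bottleneck)  # 100-dimension style bottleneck vector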
Setup
Import dependencies.
End of explanation
content_path = tf.keras.utils.get_file('belfry.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/belfry-2611573_1280.jpg')
style_path = tf.keras.utils.get_file('style23.jpg','https://storage.googleapis.com/khanhlvg-public.appspot.com/arbitrary-style-transfer/style23.jpg')
style_predict_path = tf.keras.utils.get_file('style_predict.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/prediction/1?lite-format=tflite')
style_transform_path = tf.keras.utils.get_file('style_transform.tflite', 'https://tfhub.dev/google/lite-model/magenta/arbitrary-image-stylization-v1-256/int8/transfer/1?lite-format=tflite')
Explanation: Download the content and style images, and the pre-trained TensorFlow Lite models.
End of explanation
# Function to load an image from a file, and add a batch dimension.
def load_img(path_to_img):
img = tf.io.read_file(path_to_img)
img = tf.io.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
img = img[tf.newaxis, :]
return img
# Function to pre-process the image by resizing and centrally cropping it.
def preprocess_image(image, target_dim):
# Resize the image so that the shorter dimension becomes 256px.
shape = tf.cast(tf.shape(image)[1:-1], tf.float32)
short_dim = min(shape)
scale = target_dim / short_dim
new_shape = tf.cast(shape * scale, tf.int32)
image = tf.image.resize(image, new_shape)
# Central crop the image.
image = tf.image.resize_with_crop_or_pad(image, target_dim, target_dim)
return image
# Load the input images.
content_image = load_img(content_path)
style_image = load_img(style_path)
# Preprocess the input images.
preprocessed_content_image = preprocess_image(content_image, 384)
preprocessed_style_image = preprocess_image(style_image, 256)
print('Style Image Shape:', preprocessed_style_image.shape)
print('Content Image Shape:', preprocessed_content_image.shape)
Explanation: Pre-process the inputs
The content image and the style image must be RGB images with pixel values being float32 numbers between [0..1].
The style image size must be (1, 256, 256, 3). We central crop the image and resize it.
The content image must be (1, 384, 384, 3). We central crop the image and resize it.
End of explanation
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
plt.subplot(1, 2, 1)
imshow(preprocessed_content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(preprocessed_style_image, 'Style Image')
Explanation: Visualize the inputs
End of explanation
# Function to run style prediction on preprocessed style image.
def run_style_predict(preprocessed_style_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_predict_path)
# Set model input.
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
interpreter.set_tensor(input_details[0]["index"], preprocessed_style_image)
# Calculate style bottleneck.
interpreter.invoke()
style_bottleneck = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return style_bottleneck
# Calculate style bottleneck for the preprocessed style image.
style_bottleneck = run_style_predict(preprocessed_style_image)
print('Style Bottleneck Shape:', style_bottleneck.shape)
Explanation: Run style transfer with TensorFlow Lite
Style prediction
End of explanation
# Run style transform on the preprocessed content image.
def run_style_transform(style_bottleneck, preprocessed_content_image):
# Load the model.
interpreter = tf.lite.Interpreter(model_path=style_transform_path)
# Set model input.
input_details = interpreter.get_input_details()
interpreter.allocate_tensors()
# Set model inputs.
interpreter.set_tensor(input_details[0]["index"], preprocessed_content_image)
interpreter.set_tensor(input_details[1]["index"], style_bottleneck)
interpreter.invoke()
# Transform content image.
stylized_image = interpreter.tensor(
interpreter.get_output_details()[0]["index"]
)()
return stylized_image
# Stylize the content image using the style bottleneck.
stylized_image = run_style_transform(style_bottleneck, preprocessed_content_image)
# Visualize the output.
imshow(stylized_image, 'Stylized Image')
Explanation: Style transform
End of explanation
# Calculate style bottleneck of the content image.
style_bottleneck_content = run_style_predict(
preprocess_image(content_image, 256)
)
# Define content blending ratio between [0..1].
# 0.0: 0% style extracts from content image.
# 1.0: 100% style extracted from content image.
content_blending_ratio = 0.5 #@param {type:"slider", min:0, max:1, step:0.01}
# Blend the style bottleneck of style image and content image
style_bottleneck_blended = content_blending_ratio * style_bottleneck_content \
+ (1 - content_blending_ratio) * style_bottleneck
# Stylize the content image using the style bottleneck.
stylized_image_blended = run_style_transform(style_bottleneck_blended,
preprocessed_content_image)
# Visualize the output.
imshow(stylized_image_blended, 'Blended Stylized Image')
Explanation: Style blending
You can blend the style of the content image into the stylized output, which in turn makes the output look more like the content image.
End of explanation |
11,864 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Testing Centrality
Step1: We are using a synthetic example to explore shortest paths. The figure above shows a simple raster which
can be thought of as two regions separated by two 'hills' and a valley in between them.
Three paths from one side to another, marked in red, white and black are calculated. As you can observe,
all of them pass through a certain region in the valley. We want to see if this region can be highlighted somehow.
For this we use Betweenness centrality. | Python Code:
%matplotlib inline
# %load testShortestPath2.py
from rasterGraphCreate import *
import matplotlib
from matplotlib import pyplot
if __name__ == "__main__":
arraySize = 100
startNode = 4
destNode = 8080
startNode1 = 90
destNode1 = 8008
startNode2 = 30
destNode2 = 9080
raster = np.zeros((arraySize,arraySize))
measureX = np.linspace(0,5, arraySize)
measureY = measureX.copy()
kx,ky = np.meshgrid(measureX, measureY)
raster += np.exp(-pow(kx-0.5, 2)/.5 - pow(ky-2.5,2)/2.)
raster += np.exp(-pow(kx-2.5, 2)/.5 - pow(ky-2.5,2)/2.)
raster += np.exp(-pow(kx-4.5, 2)/.5 - pow(ky-2.5,2)/2.)
#raster += np.exp(-pow(measureX-0.5, 2)/5. - pow(measureY-0.7,2)/5.)
raster += 0.1*np.random.random(raster.shape)
pyplot.ion()
pyplot.imshow(raster.transpose(),cmap=pyplot.cm.coolwarm)
def weightFn(arr, v1index, v2index):
return (raster[v1index] + raster[v2index])/2.
g = createGraph(raster, weightFunction=weightFn)
vlist, elist = shortest_path(g, g.vertex(startNode), g.vertex(destNode), weights=g.ep.edgeCost)
print(g.vertex(4))
print([vert for vert in g.vertex(destNode).out_neighbours()])
xs = []
ys = []
for vertex in vlist:
index = g.vertex_index[vertex]
row,col = getRowCol(index, arraySize)
xs.append(row)
ys.append(col)
pyplot.hold(True)
pyplot.plot(xs, ys,'white',linewidth=2)
vlist, elist = shortest_path(g, g.vertex(startNode1), g.vertex(destNode1), weights=g.ep.edgeCost)
print(g.vertex(4))
print([vert for vert in g.vertex(destNode).out_neighbours()])
xs = []
ys = []
for vertex in vlist:
index = g.vertex_index[vertex]
row,col = getRowCol(index, arraySize)
xs.append(row)
ys.append(col)
pyplot.hold(True)
pyplot.plot(xs, ys,'red',linewidth=2)
vlist, elist = shortest_path(g, g.vertex(startNode2), g.vertex(destNode2), weights=g.ep.edgeCost)
xs = []
ys = []
for vertex in vlist:
index = g.vertex_index[vertex]
row,col = getRowCol(index, arraySize)
xs.append(row)
ys.append(col)
pyplot.hold(True)
pyplot.plot(xs, ys,'k',linewidth=2)
pyplot.xlim(-1,arraySize)
pyplot.ylim(0,arraySize)
pyplot.show()
Explanation: Testing Centrality
End of explanation
import graph_tool.centrality as centrality
#Calculate centrality
vp, ep = centrality.betweenness(g, weight = g.ep['edgeCost'])
from pylab import *
output = createRaster(g, vp, arraySize, arraySize)
imshow(output.transpose(), cmap= cm.coolwarm)
colorbar()
Explanation: We are using a synthetic example to explore shortest paths. The figure above shows a simple raster which
can be thought of as two regions separated by two 'hills' and a valley in between them.
Three paths from one side to another, marked in red, white and black are calculated. As you can observe,
all of them pass through a certain region in the valley. We want to see if this region can be highlighted somehow.
For this we use Betweenness centrality.
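As a small follow-up sketch (assuming the output betweenness raster computed above and numpy available as np from the setup), the highest-centrality cells, i.e. the corridor through the valley, can be listed directly:
# rank grid cells by betweenness centrality and report the ten most central (row, col) indices
flat_order = np.argsort(output.ravel())[::-1]
top_cells = np.column_stack(np.unravel_index(flat_order[:10], output.shape))
print(top_cells)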
End of explanation |
11,865 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-3', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-3
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontal discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories, specify any additional details. For example, models that parameterise the ice thickness distribution ITD (i.e. there is no explicit ITD) but where there is an assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation |
11,866 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents
Competitive exclusion
Coexistence at equilibrium
Step1: <div id="2">2. Coexistence at equilibrium
Step2: <div id="3">3. Relative nonlinearity</div>
Agora vamos voltar ao modelo da seção 1, porém com uma modificação | Python Code:
%matplotlib notebook
from numpy import *
from scipy.integrate import odeint
from matplotlib.pyplot import *
ion()
def consumer_resource1(y, t, r, K, b1, m1, b2, m2):
return array([ y[0] * (r*(1-y[0]/K) - b1*y[1] - b2*y[2]),
y[1] * (b1*y[0] - m1),
y[2] * (b2*y[0] - m2)])
t = arange(0, 200, .1)
y0 = [1, 1., 1.]
pars = (1., 1., 1., 0.1, 0.5, 0.1)
y = odeint(consumer_resource1, y0, t, pars)
plot(t, y)
xlabel('tempo')
ylabel('populações')
legend(['$R$', '$C_1$', '$C_2$'])
x = arange(0, 0.8, 0.01)
plot(x, 1.*x)
plot(x, 0.5*x)
legend(['$C_1$', '$C_2$'], frameon=False, loc='best')
axhline(0.1, c='k', ls=':')
xlabel('$R$')
ylabel('resposta funcional')
text(0.07, 0.16, '$R^*_1$')
text(0.2, 0.05, '$R^*_2$')
text(0.7, 0.11, '$d$')
Explanation: Contents
Competitive exclusion
Coexistence at equilibrium: niches
Relative nonlinearity
Storage effect
<div id="1"> 1. Competitive exclusion </div>
$$ \begin{aligned}
\frac{1}{C_1}\frac{dC_1}{dt} &= f_1(R) - m_1\\
\frac{1}{C_2}\frac{dC_2}{dt} &= f_2(R) - m_2\\
\frac{dR}{dt} &= rR\left(1-\frac{R}{K}\right) - f_1(R)C_1 - f_2(R)C_2
\end{aligned} $$
If the functional responses $f_1$ and $f_2$ are linear (that is, $f_i = b_i R$), the niches coincide and no coexistence is possible. The linear case corresponds to Tilman's famous $R^*$ rule:
$$ R^*_i = \frac{m_i}{b_i} ~,$$
where $R^*_i$ is the equilibrium value of the resource in the presence of consumer $i$ (it is found by looking for the equilibrium point of the differential equation for $C_i$: $dC_i/dt = 0$). When $R < R^*_i$, the growth rate of $C_i$ is negative and it declines, so the species with the lowest $R^*$ is the better competitor.
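As a quick numerical check (a sketch using the parameter tuple pars from the cell above, unpacked in the same order as the model arguments), the R* rule already predicts the winner before the dynamics are run:
# R* = m/b for each consumer; consumer 1 has the lower R* and should exclude consumer 2
r, K, b1, m1, b2, m2 = pars
print(m1 / b1, m2 / b2)  # 0.1 and 0.2, matching the R*_1 and R*_2 annotations in the plot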
End of explanation
def consumer_resource2(y, t, r, K, b1, m1, b2, m2):
return array([ y[0] * (r*(1-y[0]/K) - b1*y[1] - b2*y[2]),
y[1] * (b1*y[0] - m1) - 0.2*y[1]**2,
y[2] * (b2*y[0] - m2)])
pars = (1., 1., 1., 0.1, 0.5, 0.1)
y = odeint(consumer_resource2, y0, t, pars)
plot(t, y)
xlabel('tempo')
ylabel('populações')
legend(['$R$', '$C_1$', '$C_2$'])
Explanation: <div id="2">2. Coexistence at equilibrium: niches</div>
For coexistence at equilibrium to exist there must be a niche difference between the species, which requires introducing new ingredients (e.g. resources). In general, for $n$ species to coexist (again, at equilibrium!) at least $n$ resources are needed.
Here we take a simple example and assume that there is an implicit second resource (that is, one not modelled directly) which limits the growth of the better competitor.
TODO: explain niche overlap based on a model with 2 resources, with consumption vectors, in the style of Chesson 1990, and relate this to stabilizing/equalizing factors.
End of explanation
def consumer_resource3(y, t, r, K, b1, m1, h1, b2, m2):
return array([ y[0] * (r*(1-y[0]/K) - b1*y[1]/(1+b1*h1*y[0]) - b2*y[2]),
y[1] * (b1*y[0]/(1+b1*h1*y[0]) - m1),
y[2] * (b2*y[0] - m2)])
t = arange(0, 400, .1)
# note that the other parameters have not been changed!
pars = (1., 1., 1., 0.1, 3., 0.5, 0.1)
y = odeint(consumer_resource3, y0, t, pars)
plot(t, y)
xlabel('tempo')
ylabel('populações')
legend(['$R$', '$C_1$', '$C_2$'], loc='upper left')
print('média de R (últimos T-200): %.2f' % y[-2000:,0].mean())
x = arange(0, 0.8, 0.01)
plot(x, 1.*x/(1+3*x), 'g')
plot(x, 0.5*x, 'r')
legend(['$C_1$', '$C_2$'], frameon=False, loc='best')
axhline(0.1, c='k', ls=':')
xlabel('$R$')
ylabel('resposta funcional')
text(0.1, 0.12, '$R^*_1$')
text(0.2, 0.05, '$R^*_2$')
text(0.7, 0.11, '$d$')
plot([0.01, 0.6], 2*[0.02], '.-b')
text(0.4, 0.04, "amplitude de\nvalores de $R$")
Explanation: <div id="3">3. Não-linearidade relativa</div>
Agora vamos voltar ao modelo da seção 1, porém com uma modificação: faremos com que a resposta funcional da espécie 1 (a superior) seja não-linear, assumindo uma forma funcional de Holling tipo II:
$$ f_1(R) = \frac{b_1 R}{1+b_1 h R} ~,$$
em que $h$ é o chamado tempo de manipulação (handling time). Essa alteração tem 2 efeitos:
* dependendo do valor de $h$, é possível que o sistema de $R$ e $C_1$ (na ausência da espécie 2) exiba oscilações sustentadas - este modelo é conhecido exatamente por isto, e é chamado de modelo de Rosenzweig-MacArthur.
* para valores de $R$ grandes, a taxa de crescimento de $C_2$ pode superar a de $C_1$ (o que era impossível no modelo linear!), mesmo com $C_1$ ainda tendo o menor $R^*$ (ver gráfico abaixo).
A chamada "não-linearidade relativa" depende exatamente desse segundo fato: embora $$m_1 = m_2 = f_1(R^*) > f_2(R^*)$$ (no equilíbrio, a espécie 1 é superior, portanto $f_2 < m_2$ e a espécie 2 não invade), na presença de flutuações de $R(t)$, é possível que $$ \langle f_2(R(t)) \rangle > m_2 ~.$$ Isto se dá porque, no equilíbrio sem a espécie 2, $\langle f_1(R(t)) \rangle = m_1 (=m_2)$, mas pela não-linearidade, $\langle f_1(R(t)) \rangle < f_1(\langle R(t)\rangle)$, então $R(t)$ flutua ao redor de valores maiores que $R^*$. Se $\langle R\rangle \geq R^*_2$, então o consumidor 2 é capaz de invadir!
Este exemplo é bonitinho porque as oscilações não dependem de nenhum fator externo. O ponto principal, porém, é que as oscilações permitem que a espécie inferior tenha em média taxa de crescimento positivo mesmo que, no equilíbrio, ela tivesse crescimento negativo (extinção). A origem das oscilações poderia ser qualquer outro, como sazonalidade, perturbações ambientais, acoplamento com outras espécies etc.
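Um esboço numérico rápido (adicionado como ilustração; a amplitude da oscilação de R(t) é arbitrária) da desigualdade usada acima, $\langle f_1(R)\rangle < f_1(\langle R\rangle)$ para $f_1$ côncava:
# esboço: com f1 de Holling tipo II (b1=1, h=3, como na simulação), a média de f1(R)
# fica abaixo de f1 aplicada à média de R quando R oscila
import numpy as np
b1, h = 1.0, 3.0
f1 = lambda R: b1 * R / (1 + b1 * h * R)
R = 0.2 + 0.15 * np.cos(np.linspace(0, 20 * np.pi, 2000))   # oscilação arbitrária de R(t)
print(f1(R).mean(), f1(R.mean()))   # o primeiro valor é menor que o segundo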
End of explanation |
11,867 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Parallelization
Step1: With emcee, it's easy to make use of multiple CPUs to speed up slow sampling.
There will always be some computational overhead introduced by parallelization so it will only be beneficial in the case where the model is expensive, but this is often true for real research problems.
All parallelization techniques are accessed using the pool keyword argument in the
Step2: In all of the following examples, we'll test the code with the following convoluted model
Step3: This probability function will randomly sleep for a fraction of a second every time it is called.
This is meant to emulate a more realistic situation where the model is computationally expensive to compute.
To start, let's sample the usual (serial) way
Step4: Multiprocessing
The simplest method of parallelizing emcee is to use the multiprocessing module from the standard library.
To parallelize the above sampling, you could update the code as follows
Step5: I have 4 cores on the machine where this is being tested
Step7: We don't quite get the factor of 4 runtime decrease that you might expect because there is some overhead in the parallelization, but we're getting pretty close with this example and this will get even closer for more expensive models.
MPI
Multiprocessing can only be used for distributing calculations across processors on one machine.
If you want to take advantage of a bigger cluster, you'll need to use MPI.
In that case, you need to execute the code using the mpiexec executable, so this demo is slightly more convoluted.
For this example, we'll write the code to a file called script.py and then execute it using MPI, but when you really use the MPI pool, you'll probably just want to edit the script directly.
To run this example, you'll first need to install the schwimmbad library because emcee no longer includes its own MPIPool.
Step8: There is often more overhead introduced by MPI than multiprocessing so we get less of a gain this time.
That being said, MPI is much more flexible and it can be used to scale to huge systems.
Pickling, data transfer & arguments
All parallel Python implementations work by spinning up multiple python processes with identical environments then and passing information between the processes using pickle.
This means that the probability function must be picklable.
Some users might hit issues when they use args to pass data to their model.
These args must be pickled and passed every time the model is called.
This can be a problem if you have a large dataset, as you can see here
Step9: We basically get no change in performance when we include the data argument here.
Now let's try including this naively using multiprocessing
Step10: Brutal.
We can do better than that though.
It's a bit ugly, but if we just make data a global variable and use that variable within the model calculation, then we take no hit at all. | Python Code:
import os
os.environ["OMP_NUM_THREADS"] = "1"
Explanation: Parallelization
End of explanation
import emcee
print(emcee.__version__)
Explanation: With emcee, it's easy to make use of multiple CPUs to speed up slow sampling.
There will always be some computational overhead introduced by parallelization so it will only be beneficial in the case where the model is expensive, but this is often true for real research problems.
All parallelization techniques are accessed using the pool keyword argument in the :class:EnsembleSampler class but, depending on your system and your model, there are a few pool options that you can choose from.
In general, a pool is any Python object with a map method that can be used to apply a function to a list of numpy arrays.
Below, we will discuss a few options.
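As a minimal illustration of that interface (a sketch added here, not part of the original tutorial), any object exposing a map(func, iterable) method can be handed to the sampler through the pool argument:
class SerialPool:
    # the only requirement described above: a map method applying func to each item
    def map(self, func, iterable):
        return list(map(func, iterable))
# e.g. emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=SerialPool())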
This tutorial was executed with the following version of emcee:
End of explanation
import time
import numpy as np
def log_prob(theta):
t = time.time() + np.random.uniform(0.005, 0.008)
while True:
if time.time() >= t:
break
return -0.5*np.sum(theta**2)
Explanation: In all of the following examples, we'll test the code with the following convoluted model:
End of explanation
np.random.seed(42)
initial = np.random.randn(32, 5)
nwalkers, ndim = initial.shape
nsteps = 100
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
serial_time = end - start
print("Serial took {0:.1f} seconds".format(serial_time))
Explanation: This probability function will randomly sleep for a fraction of a second every time it is called.
This is meant to emulate a more realistic situation where the model is computationally expensive to compute.
To start, let's sample the usual (serial) way:
End of explanation
from multiprocessing import Pool
with Pool() as pool:
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool)
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
multi_time = end - start
print("Multiprocessing took {0:.1f} seconds".format(multi_time))
print("{0:.1f} times faster than serial".format(serial_time / multi_time))
Explanation: Multiprocessing
The simplest method of parallelizing emcee is to use the multiprocessing module from the standard library.
To parallelize the above sampling, you could update the code as follows:
End of explanation
from multiprocessing import cpu_count
ncpu = cpu_count()
print("{0} CPUs".format(ncpu))
Explanation: I have 4 cores on the machine where this is being tested:
End of explanation
with open("script.py", "w") as f:
    f.write("""
import sys
import time
import emcee
import numpy as np
from schwimmbad import MPIPool
def log_prob(theta):
t = time.time() + np.random.uniform(0.005, 0.008)
while True:
if time.time() >= t:
break
return -0.5*np.sum(theta**2)
with MPIPool() as pool:
if not pool.is_master():
pool.wait()
sys.exit(0)
np.random.seed(42)
initial = np.random.randn(32, 5)
nwalkers, ndim = initial.shape
nsteps = 100
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, pool=pool)
start = time.time()
sampler.run_mcmc(initial, nsteps)
end = time.time()
print(end - start)
""")
mpi_time = !mpiexec -n {ncpu} python script.py
mpi_time = float(mpi_time[0])
print("MPI took {0:.1f} seconds".format(mpi_time))
print("{0:.1f} times faster than serial".format(serial_time / mpi_time))
Explanation: We don't quite get the factor of 4 runtime decrease that you might expect because there is some overhead in the parallelization, but we're getting pretty close with this example and this will get even closer for more expensive models.
MPI
Multiprocessing can only be used for distributing calculations across processors on one machine.
If you want to take advantage of a bigger cluster, you'll need to use MPI.
In that case, you need to execute the code using the mpiexec executable, so this demo is slightly more convoluted.
For this example, we'll write the code to a file called script.py and then execute it using MPI, but when you really use the MPI pool, you'll probably just want to edit the script directly.
To run this example, you'll first need to install the schwimmbad library because emcee no longer includes its own MPIPool.
End of explanation
def log_prob_data(theta, data):
a = data[0] # Use the data somehow...
t = time.time() + np.random.uniform(0.005, 0.008)
while True:
if time.time() >= t:
break
return -0.5*np.sum(theta**2)
data = np.random.randn(5000, 200)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data, args=(data,))
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
serial_data_time = end - start
print("Serial took {0:.1f} seconds".format(serial_data_time))
Explanation: There is often more overhead introduced by MPI than multiprocessing so we get less of a gain this time.
That being said, MPI is much more flexible and it can be used to scale to huge systems.
Pickling, data transfer & arguments
All parallel Python implementations work by spinning up multiple python processes with identical environments and then passing information between the processes using pickle.
This means that the probability function must be picklable.
Some users might hit issues when they use args to pass data to their model.
These args must be pickled and passed every time the model is called.
This can be a problem if you have a large dataset, as you can see here:
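Before looking at that, a rough way to gauge this transfer cost (an illustrative sketch, not part of the original tutorial) is to check how large the pickled arguments actually are:
import pickle
import numpy as np
payload = np.random.randn(5000, 200)   # same shape as the data array used below
print("pickled size: {0:.1f} MB".format(len(pickle.dumps(payload)) / 1e6))   # roughly 8 MB per call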
End of explanation
with Pool() as pool:
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data, pool=pool, args=(data,))
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
multi_data_time = end - start
print("Multiprocessing took {0:.1f} seconds".format(multi_data_time))
print("{0:.1f} times faster(?) than serial".format(serial_data_time / multi_data_time))
Explanation: We basically get no change in performance when we include the data argument here.
Now let's try including this naively using multiprocessing:
End of explanation
def log_prob_data_global(theta):
a = data[0] # Use the data somehow...
t = time.time() + np.random.uniform(0.005, 0.008)
while True:
if time.time() >= t:
break
return -0.5*np.sum(theta**2)
with Pool() as pool:
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob_data_global, pool=pool)
start = time.time()
sampler.run_mcmc(initial, nsteps, progress=True)
end = time.time()
multi_data_global_time = end - start
print("Multiprocessing took {0:.1f} seconds".format(multi_data_global_time))
print("{0:.1f} times faster than serial".format(serial_data_time / multi_data_global_time))
Explanation: Brutal.
We can do better than that though.
It's a bit ugly, but if we just make data a global variable and use that variable within the model calculation, then we take no hit at all.
End of explanation |
11,868 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Provide config file
Step1: Fires in Nepal
http
Step2: Further subset using bounding box | Python Code:
cmr=CMR("../cmr.cfg")
Explanation: Provide config file
End of explanation
results = cmr.searchGranule(entry_title='MODIS/Aqua Near Real Time (NRT) Thermal Anomalies/Fire 5-Min L2 Swath 1km (C005)',
temporal="2016-04-11T12:00:00Z,2016-04-11T13:00:00Z")
for res in results:
print(res.getDownloadUrl())
Explanation: Fires in Nepal
http://earthobservatory.nasa.gov/NaturalHazards/view.php?id=87854&eocn=home&eoci=nh
Search for granule using dataset name and datetime
End of explanation
results = cmr.searchGranule(entry_title='MODIS/Aqua Near Real Time (NRT) Thermal Anomalies/Fire 5-Min L2 Swath 1km (C005)',
temporal="2016-04-11T12:00:00Z,2016-04-11T13:00:00Z", bounding_box="30,26,90,31")
for res in results:
print(res.getDownloadUrl())
Explanation: Further subset using bounding box
End of explanation |
11,869 | Given the following text description, write Python code to implement the functionality described.
Description:
Minimum cost to build N blocks from one block
Function to calculate min cost to build N blocks ; Initialize base case ; Recurence when i is odd ; Recurence when i is even ; Driver code
| Python Code:
def minCost(n , x , y , z ) :
dp =[0 ] *(n + 1 )
dp[0 ] = dp[1 ] = 0
for i in range(2 , n + 1 ) :
if(i % 2 == 1 ) :
dp[i ] = min(dp [(i + 1 ) // 2 ] + x + z , dp[i - 1 ] + y )
else :
dp[i ] = min(dp[i // 2 ] + x , dp[i - 1 ] + y )
return dp[n ]
n = 5
x = 2
y = 1
z = 3
print(minCost(n , x , y , z ) )
|
11,870 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<table style="width
Step1: Import and clean data
Set the directory path and look for all netcdf files that correspond to the model drivers and target.
Step2: Define a function to extract the variable values from each netCDF4 file. Variables are flattened from a 3 dimensional array to 1 dimensional version, pooling all values both spatially and temporily.
Don't know if this is the correct way to do this, but will come back to it once I understand the model (and its optimisation) better.
Step3: Execute the above function on all netCDF4 file paths.
Step4: Turn this into a dataframe for the analysis.
Step5: Check that we've built it correctly.
Step6: Export this to disk to be used by the analysis notebook - used gzip compression to save on space. Beware, because of there are approximation 10 million rows of data, this may take some time. | Python Code:
# data munging and analytical libraries
import re
import os
import numpy as np
import pandas as pd
from netCDF4 import Dataset
# graphical libraries
import matplotlib.pyplot as plt
%matplotlib inline
# set paths
outPath = "../data/globfire.csv"
Explanation: <table style="width: 100%; border-collapse: collapse;" border="0">
<tr>
<td><b>Created:</b> Monday 30 January 2017</td>
<td style="text-align: right;"><a href="https://www.github.com/rhyswhitley/fire_limitation">github.com/rhyswhitley/fire_limitation</td>
</tr>
</table>
<div>
<center>
<font face="Times">
<br>
<h1>Quantifying the uncertainity of a global fire limitation model using Bayesian inference</h1>
<h2>Part 1: Staging data for analysis</h2>
<br>
<br>
<sup>1,* </sup>Douglas Kelley,
<sup>2 </sup>Ioannis Bistinas,
<sup>3, 4 </sup>Chantelle Burton,
<sup>1 </sup>Tobias Marthews,
<sup>5 </sup>Rhys Whitley
<br>
<br>
<br>
<sup>1 </sup>Centre for Ecology and Hydrology, Maclean Building, Crowmarsh Gifford, Wallingford, Oxfordshire, United Kingdom
<br>
<sup>2 </sup>Vrije Universiteit Amsterdam, Faculty of Earth and Life Sciences, Amsterdam, Netherlands
<br>
<sup>3 </sup>Met Office United Kingdom, Exeter, United Kingdom
<br>
<sup>4 </sup>Geography, University of Exeter, Exeter, United Kingdom
<br>
<sup>5 </sup>Natural Perils Pricing, Commercial & Consumer Portfolio & Pricing, Suncorp Group, Sydney, Australia
<br>
<br>
<h3>Summary</h3>
<hr>
<p>
This notebook aims to process the separate netCDF4 files for the model drivers (X<sub>i=1, 2, ... M</sub>) and model target (Y) into a unified tabular data frame, exported as a compressed comma separated value (CSV) file. This file is subsequently used in the Bayesian inference study that forms the second notebook in this experiment. The advantage of the pre-processing the data separately to the analysis allows for it be quickly staged on demand. Of course other file formats may be more advantageous for greater compression (e.g. SQLite3 database file).
</p>
<br>
<b>You will need to run this notebook to prepare the dataest before you attempt the Bayesian analysis in Part 2</b>.
<br>
<br>
<br>
<i>Python code and calculations below</i>
<br>
<hr>
</font>
</center>
</div>
Load libraries
End of explanation
driver_paths = [os.path.join(dp, f) for (dp, _, fn) in os.walk("../data/raw/") for f in fn if f.endswith('.nc')]
driver_names = [re.search('^[a-zA-Z_]*', os.path.basename(fp)).group(0) for fp in driver_paths]
file_table = pd.DataFrame({'filepath': driver_paths, 'file_name': driver_names})
file_table
Explanation: Import and clean data
Set the directory path and look for all netcdf files that correspond to the model drivers and target.
End of explanation
def nc_extract(fpath):
print("Processing: {0}".format(fpath))
with Dataset(fpath, 'r') as nc_file:
gdata = np.array(nc_file.variables['variable'][:,:,:])
gflat = gdata.flatten()
if type(gdata) == np.ma.core.MaskedArray:
return gflat[~gflat.mask].data
else:
return gflat.data
Explanation: Define a function to extract the variable values from each netCDF4 file. Variables are flattened from a 3 dimensional array to a 1 dimensional version, pooling all values both spatially and temporally.
Don't know if this is the correct way to do this, but will come back to it once I understand the model (and its optimisation) better.
End of explanation
values = [nc_extract(dp) for dp in driver_paths]
Explanation: Execute the above function on all netCDF4 file paths.
End of explanation
# turn list into a dataframe
fire_df = pd.DataFrame(np.array(values).T, columns=driver_names)
# replace null flags with pandas null
fire_df.replace(-3.4e38, np.nan, inplace=True)
# drop all null rows (are ocean and not needed in optim)
fire_df.dropna(inplace=True)
Explanation: Turn this into a dataframe for the analysis.
End of explanation
fire_df.head()
Explanation: Check that we've built it correctly.
End of explanation
savepath = os.path.expanduser(outPath)
fire_df.to_csv(savepath, index=False)
Explanation: Export this to disk to be used by the analysis notebook - gzip compression (the compression='gzip' option of to_csv) can be used to save space. Beware: because there are approximately 10 million rows of data, this may take some time.
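As the summary above notes, other formats can compress better; a possible alternative (a sketch only, assuming an SQLite file such as ../data/globfire.db would be acceptable to the downstream notebook) would be:
# alternative export sketch: write the same frame to an SQLite database
import sqlite3
with sqlite3.connect(os.path.expanduser("../data/globfire.db")) as con:
    fire_df.to_sql("fire", con, if_exists="replace", index=False)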
End of explanation |
11,871 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Text-Fabric Api 활용예제 3
Step1: 성서 본문을 큰 단위의 word node가 아닌 각 단어 요소들로 잘라서 출력함
창세기 1
Step2: Text feature가 아닌 feature의 g_word_utf8의 값을 이용하여 첫 번째 word node 출력
Step4: 위를 응용하여 창세기 1
Step5: 창세기 1장 전체 출력 | Python Code:
from tf.fabric import Fabric
ETCBC = 'hebrew/etcbc4c'
PHONO = 'hebrew/phono'
TF = Fabric( modules=[ETCBC, PHONO], silent=False )
api = TF.load('''
book chapter verse
sp nu gn ps vt vs st
otype
det
g_word_utf8 trailer_utf8
lex_utf8 lex voc_utf8
g_prs_utf8 g_uvf_utf8
prs_gn prs_nu prs_ps g_cons_utf8
gloss
''')
api.makeAvailableIn(globals())
Explanation: Text-Fabric Api 활용예제 3
End of explanation
verseNode = T.nodeFromSection(('Genesis', 1, 2))
wordsNode = L.d(verseNode, otype='word')
print(wordsNode)
Explanation: 성서 본문을 큰 단위의 word node가 아닌 각 단어 요소들로 잘라서 출력함
창세기 1:2 연습
End of explanation
F.g_word_utf8.v(wordsNode[0])
Explanation: Text feature가 아닌 feature의 g_word_utf8의 값을 이용하여 첫 번째 word node 출력
End of explanation
# 절수 추가 (add the verse number)
verse = str(T.sectionFromNode(verseNode)[2])
for w in wordsNode:
verse += F.g_word_utf8.v(w)
if F.trailer_utf8.v(w):
verse += F.trailer_utf8.v(w)
print(verse)
Explanation: 위를 응용하여 창세기 1:2 전체를 반복문을 이용하여 출력
F.trailer_utf8은 글자 사이에 간격이 있는지, 혹은 특수 문자가 있는지를 판단하는 값이다. 따라서 문장을 이을 때 필수적
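As a side note (an added sketch, assuming the T.text API of Text-Fabric accepts a list of word nodes, as it does in recent versions), the same concatenation can be written more compactly:
verse = str(T.sectionFromNode(verseNode)[2]) + T.text(L.d(verseNode, otype='word'))
print(verse)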
End of explanation
chpNode = T.nodeFromSection(('Genesis', 1))
verseNode = L.d(chpNode, otype='verse')
verse = ""
for v in verseNode:
verse += str(T.sectionFromNode(v)[2])
wordsNode = L.d(v, otype='word')
for w in wordsNode:
verse += F.g_word_utf8.v(w)
if F.trailer_utf8.v(w):
verse += F.trailer_utf8.v(w)
print(verse)
Explanation: 창세기 1장 전체 출력
End of explanation |
11,872 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out
Step1: Visualize the data
Step2: Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
Discriminator
The discriminator network is going to be a pretty typical linear classifier. To make this network a universal function approximator, we'll need at least one hidden layer, and these hidden layers should have one key attribute
Step3: Generator
The generator network will be almost exactly the same as the discriminator network, except that we're applying a tanh activation function to our output layer.
tanh Output
The generator has been found to perform the best with $tanh$ for the generator output, which scales the output to be between -1 and 1, instead of 0 and 1.
<img src='assets/tanh_fn.png' width=40% />
Recall that we also want these outputs to be comparable to the real input pixel values, which are read in as normalized values between 0 and 1.
So, we'll also have to scale our real input images to have pixel values between -1 and 1 when we train the discriminator.
I'll do this in the training loop, later on.
Step4: Model hyperparameters
Step5: Build complete network
Now we're instantiating the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
Step6: Discriminator and Generator Losses
Now we need to calculate the losses.
Discriminator Losses
For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss.
Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
<img src='assets/gan_pipeline.png' width=70% />
The losses will by binary cross entropy loss with logits, which we can get with BCEWithLogitsLoss. This combines a sigmoid activation function and and binary cross entropy loss in one function.
For the real images, we want D(real_images) = 1. That is, we want the discriminator to classify the the real images with a label = 1, indicating that these are real. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9. For this, we'll use the parameter smooth; if True, then we should smooth our labels. In PyTorch, this looks like labels = torch.ones(size) * 0.9
The discriminator loss for the fake data is similar. We want D(fake_images) = 0, where the fake images are the generator output, fake_images = G(z).
Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get D(fake_images) = 1. In this case, the labels are flipped to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
Step7: Optimizers
We want to update the generator and discriminator variables separately. So, we'll define two separate Adam optimizers.
Step8: Training
Training will involve alternating between training the discriminator and the generator. We'll use our functions real_loss and fake_loss to help us calculate the discriminator losses in all of the following cases.
Discriminator training
Compute the discriminator loss on real, training images
Generate fake images
Compute the discriminator loss on fake, generated images
Add up real and fake loss
Perform backpropagation + an optimization step to update the discriminator's weights
Generator training
Generate fake images
Compute the discriminator loss on fake images, using flipped labels!
Perform backpropagation + an optimization step to update the generator's weights
Saving Samples
As we train, we'll also print out some loss statistics and save some generated "fake" samples.
Step9: Training loss
Here we'll plot the training losses for the generator and discriminator, recorded after each epoch.
Step10: Generator samples from training
Here we can view samples of images from the generator. First we'll look at the images we saved during training.
Step11: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
Step12: Below I'm showing the generated images as the network was training, every 10 epochs.
Step13: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples! | Python Code:
%matplotlib inline
import numpy as np
import torch
import matplotlib.pyplot as plt
from torchvision import datasets
import torchvision.transforms as transforms
# number of subprocesses to use for data loading
num_workers = 0
# how many samples per batch to load
batch_size = 64
# convert data to torch.FloatTensor
transform = transforms.ToTensor()
# get the training datasets
train_data = datasets.MNIST(root='data', train=True,
download=True, transform=transform)
# prepare data loader
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers)
Explanation: Generative Adversarial Network
In this notebook, we'll be building a generative adversarial network (GAN) trained on the MNIST dataset. From this, we'll be able to generate new handwritten digits!
GANs were first reported on in 2014 from Ian Goodfellow and others in Yoshua Bengio's lab. Since then, GANs have exploded in popularity. Here are a few examples to check out:
Pix2Pix
CycleGAN & Pix2Pix in PyTorch, Jun-Yan Zhu
A list of generative models
The idea behind GANs is that you have two networks, a generator $G$ and a discriminator $D$, competing against each other. The generator makes "fake" data to pass to the discriminator. The discriminator also sees real training data and predicts if the data it's received is real or fake.
The generator is trained to fool the discriminator, it wants to output data that looks as close as possible to real, training data.
The discriminator is a classifier that is trained to figure out which data is real and which is fake.
What ends up happening is that the generator learns to make data that is indistinguishable from real data to the discriminator.
<img src='assets/gan_pipeline.png' width=70% />
The general structure of a GAN is shown in the diagram above, using MNIST images as data. The latent sample is a random vector that the generator uses to construct its fake images. This is often called a latent vector and that vector space is called latent space. As the generator trains, it figures out how to map latent vectors to recognizable images that can fool the discriminator.
If you're interested in generating only new images, you can throw out the discriminator after training. In this notebook, I'll show you how to define and train these adversarial networks in PyTorch and generate new images!
End of explanation
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy()
# get one image from the batch
img = np.squeeze(images[0])
fig = plt.figure(figsize = (3,3))
ax = fig.add_subplot(111)
ax.imshow(img, cmap='gray')
Explanation: Visualize the data
End of explanation
import torch.nn as nn
import torch.nn.functional as F
class Discriminator(nn.Module):
    def __init__(self, input_size, hidden_dim, output_size):
        super(Discriminator, self).__init__()
        # define all layers (one possible completion: three hidden layers)
        self.fc1 = nn.Linear(input_size, hidden_dim * 4)
        self.fc2 = nn.Linear(hidden_dim * 4, hidden_dim * 2)
        self.fc3 = nn.Linear(hidden_dim * 2, hidden_dim)
        self.fc4 = nn.Linear(hidden_dim, output_size)
    def forward(self, x):
        # flatten image
        x = x.view(-1, 28 * 28)
        # pass x through all layers
        # apply leaky relu activation to all hidden layers
        x = F.leaky_relu(self.fc1(x), 0.2)
        x = F.leaky_relu(self.fc2(x), 0.2)
        x = F.leaky_relu(self.fc3(x), 0.2)
        # final layer: raw logits (the sigmoid is handled later by BCEWithLogitsLoss)
        x = self.fc4(x)
        return x
Explanation: Define the Model
A GAN is comprised of two adversarial networks, a discriminator and a generator.
Discriminator
The discriminator network is going to be a pretty typical linear classifier. To make this network a universal function approximator, we'll need at least one hidden layer, and these hidden layers should have one key attribute:
All hidden layers will have a Leaky ReLu activation function applied to their outputs.
<img src='assets/gan_network.png' width=70% />
Leaky ReLu
We should use a leaky ReLU to allow gradients to flow backwards through the layer unimpeded. A leaky ReLU is like a normal ReLU, except that there is a small non-zero output for negative input values.
<img src='assets/leaky_relu.png' width=40% />
Sigmoid Output
We'll also take the approach of using a more numerically stable loss function on the outputs. Recall that we want the discriminator to output a value 0-1 indicating whether an image is real or fake.
We will ultimately use BCEWithLogitsLoss, which combines a sigmoid activation function and binary cross entropy loss in one function.
So, our final output layer should not have any activation function applied to it.
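A quick numerical check of that claim (an illustrative sketch, not part of the original exercise): applying BCEWithLogitsLoss to raw logits gives the same value as BCELoss applied to sigmoid outputs.
import torch
import torch.nn as nn
logits = torch.randn(8, 1)      # pretend raw discriminator scores
targets = torch.ones(8, 1)      # "real" labels
print(nn.BCEWithLogitsLoss()(logits, targets).item(),
      nn.BCELoss()(torch.sigmoid(logits), targets).item())   # the two values agree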
End of explanation
class Generator(nn.Module):
    def __init__(self, input_size, hidden_dim, output_size):
        super(Generator, self).__init__()
        # define all layers (one possible completion, mirroring the discriminator)
        self.fc1 = nn.Linear(input_size, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, hidden_dim * 2)
        self.fc3 = nn.Linear(hidden_dim * 2, hidden_dim * 4)
        self.fc4 = nn.Linear(hidden_dim * 4, output_size)
    def forward(self, x):
        # pass x through all layers
        x = F.leaky_relu(self.fc1(x), 0.2)
        x = F.leaky_relu(self.fc2(x), 0.2)
        x = F.leaky_relu(self.fc3(x), 0.2)
        # final layer should have tanh applied
        x = torch.tanh(self.fc4(x))
        return x
Explanation: Generator
The generator network will be almost exactly the same as the discriminator network, except that we're applying a tanh activation function to our output layer.
tanh Output
The generator has been found to perform the best with $tanh$ for the generator output, which scales the output to be between -1 and 1, instead of 0 and 1.
<img src='assets/tanh_fn.png' width=40% />
Recall that we also want these outputs to be comparable to the real input pixel values, which are read in as normalized values between 0 and 1.
So, we'll also have to scale our real input images to have pixel values between -1 and 1 when we train the discriminator.
I'll do this in the training loop, later on.
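For reference, that rescaling is a one-liner (a sketch; images_tensor is just a placeholder name for any batch loaded from train_loader, and the same expression appears in the training loop below):
# maps pixel values from [0, 1) to [-1, 1), matching the generator's tanh range
scaled = images_tensor * 2 - 1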
End of explanation
# Discriminator hyperparams
# Size of input image to discriminator (28*28)
input_size = 784
# Size of discriminator output (real or fake)
d_output_size = 1
# Size of *last* hidden layer in the discriminator
d_hidden_size = 32
# Generator hyperparams
# Size of latent vector to give to generator
z_size = 100
# Size of generator output (generated image)
g_output_size = 784
# Size of *first* hidden layer in the generator
g_hidden_size = 32
# note: 32 and 100 are reasonable example choices; the text above does not prescribe them
Explanation: Model hyperparameters
End of explanation
# instantiate discriminator and generator
D = Discriminator(input_size, d_hidden_size, d_output_size)
G = Generator(z_size, g_hidden_size, g_output_size)
# check that they are as you expect
print(D)
print()
print(G)
Explanation: Build complete network
Now we're instantiating the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
End of explanation
# Calculate losses
def real_loss(D_out, smooth=False):
    batch_size = D_out.size(0)
    # smooth labels if smooth=True (0.9 instead of 1.0, as described above)
    if smooth:
        labels = torch.ones(batch_size) * 0.9
    else:
        labels = torch.ones(batch_size)
    criterion = nn.BCEWithLogitsLoss()
    # compare logits to real labels
    loss = criterion(D_out.squeeze(), labels)
    return loss
def fake_loss(D_out):
    batch_size = D_out.size(0)
    labels = torch.zeros(batch_size)  # fake labels = 0
    criterion = nn.BCEWithLogitsLoss()
    # compare logits to fake labels
    loss = criterion(D_out.squeeze(), labels)
    return loss
Explanation: Discriminator and Generator Losses
Now we need to calculate the losses.
Discriminator Losses
For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss.
Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that.
<img src='assets/gan_pipeline.png' width=70% />
The losses will be binary cross entropy loss with logits, which we can get with BCEWithLogitsLoss. This combines a sigmoid activation function and binary cross entropy loss in one function.
For the real images, we want D(real_images) = 1. That is, we want the discriminator to classify the the real images with a label = 1, indicating that these are real. To help the discriminator generalize better, the labels are reduced a bit from 1.0 to 0.9. For this, we'll use the parameter smooth; if True, then we should smooth our labels. In PyTorch, this looks like labels = torch.ones(size) * 0.9
The discriminator loss for the fake data is similar. We want D(fake_images) = 0, where the fake images are the generator output, fake_images = G(z).
Generator Loss
The generator loss will look similar only with flipped labels. The generator's goal is to get D(fake_images) = 1. In this case, the labels are flipped to represent that the generator is trying to fool the discriminator into thinking that the images it generates (fakes) are real!
End of explanation
import torch.optim as optim
# learning rate for optimizers
lr = 0.002
# Create optimizers for the discriminator and generator
d_optimizer = optim.Adam(D.parameters(), lr)
g_optimizer = optim.Adam(G.parameters(), lr)
Explanation: Optimizers
We want to update the generator and discriminator variables separately. So, we'll define two separate Adam optimizers.
End of explanation
import pickle as pkl
# training hyperparams
num_epochs = 40
# keep track of loss and generated, "fake" samples
samples = []
losses = []
print_every = 400
# Get some fixed data for sampling. These are images that are held
# constant throughout training, and allow us to inspect the model's performance
sample_size=16
fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
fixed_z = torch.from_numpy(fixed_z).float()
# train the network
D.train()
G.train()
for epoch in range(num_epochs):
for batch_i, (real_images, _) in enumerate(train_loader):
batch_size = real_images.size(0)
## Important rescaling step ##
real_images = real_images*2 - 1 # rescale input images from [0,1) to [-1, 1)
# ============================================
# TRAIN THE DISCRIMINATOR
# ============================================
        # 1. Train with real images
        d_optimizer.zero_grad()
        # Compute the discriminator losses on real images
        # use smoothed labels
        D_real = D(real_images)
        d_real_loss = real_loss(D_real, smooth=True)
        # 2. Train with fake images
        # Generate fake images
        z = np.random.uniform(-1, 1, size=(batch_size, z_size))
        z = torch.from_numpy(z).float()
        fake_images = G(z)
        # Compute the discriminator losses on fake images
        d_fake_loss = fake_loss(D(fake_images))
        # add up real and fake losses and perform backprop
        d_loss = d_real_loss + d_fake_loss
        d_loss.backward()
        d_optimizer.step()
# =========================================
# TRAIN THE GENERATOR
# =========================================
        # 1. Train with fake images and flipped labels
        g_optimizer.zero_grad()
        # Generate fake images
        z = np.random.uniform(-1, 1, size=(batch_size, z_size))
        fake_images = G(torch.from_numpy(z).float())
        # Compute the discriminator losses on fake images
        # using flipped labels! (real_loss, so the generator targets label = 1)
        g_loss = real_loss(D(fake_images))
        # perform backprop
        g_loss.backward()
        g_optimizer.step()
# Print some loss stats
if batch_i % print_every == 0:
# print discriminator and generator loss
print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
epoch+1, num_epochs, d_loss.item(), g_loss.item()))
## AFTER EACH EPOCH##
# append discriminator loss and generator loss
losses.append((d_loss.item(), g_loss.item()))
# generate and save sample, fake images
G.eval() # eval mode for generating samples
samples_z = G(fixed_z)
samples.append(samples_z)
G.train() # back to train mode
# Save training generator samples
with open('train_samples.pkl', 'wb') as f:
pkl.dump(samples, f)
Explanation: Training
Training will involve alternating between training the discriminator and the generator. We'll use our functions real_loss and fake_loss to help us calculate the discriminator losses in all of the following cases.
Discriminator training
Compute the discriminator loss on real, training images
Generate fake images
Compute the discriminator loss on fake, generated images
Add up real and fake loss
Perform backpropagation + an optimization step to update the discriminator's weights
Generator training
Generate fake images
Compute the discriminator loss on fake images, using flipped labels!
Perform backpropagation + an optimization step to update the generator's weights
Saving Samples
As we train, we'll also print out some loss statistics and save some generated "fake" samples.
End of explanation
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator')
plt.plot(losses.T[1], label='Generator')
plt.title("Training Losses")
plt.legend()
Explanation: Training loss
Here we'll plot the training losses for the generator and discriminator, recorded after each epoch.
End of explanation
# helper function for viewing a list of passed in sample images
def view_samples(epoch, samples):
fig, axes = plt.subplots(figsize=(7,7), nrows=4, ncols=4, sharey=True, sharex=True)
for ax, img in zip(axes.flatten(), samples[epoch]):
img = img.detach()
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
im = ax.imshow(img.reshape((28,28)), cmap='Greys_r')
# Load samples from generator, taken while training
with open('train_samples.pkl', 'rb') as f:
samples = pkl.load(f)
Explanation: Generator samples from training
Here we can view samples of images from the generator. First we'll look at the images we saved during training.
End of explanation
# -1 indicates final epoch's samples (the last in the list)
view_samples(-1, samples)
Explanation: These are samples from the final training epoch. You can see the generator is able to reproduce numbers like 1, 7, 3, 2. Since this is just a sample, it isn't representative of the full range of images this generator can make.
End of explanation
rows = 10 # split epochs into 10, so 100/10 = every 10 epochs
cols = 6
fig, axes = plt.subplots(figsize=(7,12), nrows=rows, ncols=cols, sharex=True, sharey=True)
for sample, ax_row in zip(samples[::int(len(samples)/rows)], axes):
for img, ax in zip(sample[::int(len(sample)/cols)], ax_row):
img = img.detach()
ax.imshow(img.reshape((28,28)), cmap='Greys_r')
ax.xaxis.set_visible(False)
ax.yaxis.set_visible(False)
Explanation: Below I'm showing the generated images as the network was training, every 10 epochs.
End of explanation
# randomly generated, new latent vectors
sample_size=16
rand_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
rand_z = torch.from_numpy(rand_z).float()
G.eval() # eval mode
# generated samples
rand_images = G(rand_z)
# 0 indicates the first set of samples in the passed in list
# and we only have one batch of samples, here
view_samples(0, [rand_images])
Explanation: It starts out as all noise. Then it learns to make only the center white and the rest black. You can start to see some number like structures appear out of the noise like 1s and 9s.
Sampling from the generator
We can also get completely new images from the generator by using the checkpoint we saved after training. We just need to pass in a new latent vector $z$ and we'll get new samples!
End of explanation |
11,873 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Laboratoire d'introduction au filtrage - Corrigé
Cours NSC-2006, année 2015
Méthodes quantitatives en neurosciences
Pierre Bellec, Yassine Ben Haj Ali
Objectifs
Step1: Section 1
Step2: Représentez noyau et signal en temps, à l'aide de la commande plot. Utiliser les temps d'acquisition corrects, et labéliser les axes (xlabel, ylabel). Comment est généré signal? reconnaissez vous le processus employé? Est ce que le signal est périodique? si oui, quelle est sa période? Peut-on trouver la réponse dans le code?
Step3: A partir du graphe du noyau, on reconnait la fonction de réponse hémodynamique utilisée lors du laboratoire sur la convolution. Le signal est généré par convolution du noyau avec un vecteur bloc (ligne 18 du bloc de code initial). On voit que bloc est créé en assemblant deux vecteurs de 15 zéros et de 15 uns (ligne 7). Le signal est donc périodique. Comme la fréquence d'acquisition est de 1 Hz (ligne 9 définissant les échantillons temporels, on voit un pas de 1/freq, avec freq=1, ligne 5), la période du signal est de 30 secondes, soit une fréquence de 0.033Hz. On peut confirmer cela visuellement sur le graphe.
2. Représenter le contenu fréquentiel de signal avec la commande Analyse_Frequence_Puissance.
Utilisez la commande ylim pour ajuster les limites de l'axe y et pouvoir bien observer le signal. Notez que l'axe y (puissance) est en échelle log (dB). Quelles sont les fréquences principales contenues dans le signal? Etait-ce attendu?
Step4: Comme attendu, étant donné le caractère périodique de période 30s du signal, la fréquence principale est de 0.033 Hz. Les pics suivants sont situés à 0.1 Hz et 0.166 Hz.
3. Répétez les questions 1.1 et 1.2 avec un bruit dit blanc, généré ci dessous.
Step5: Pourquoi est ce que ce bruit porte ce nom?
Step6: Le vecteur bruit est généré à l'aide de la fonction randn, qui est un générateur pseudo-aléatoires d'échantillons indépendants Gaussiens. Le spectre de puissance représente l'amplitude de la contribution de chaque fréquence au signal. On peut également décomposer une couleur en fréquences. Quand toutes les fréquences sont présentes, et en proportion similaire, on obtient du blanc. Le bruit Gaussien a un spectre de puissance plat (hormis de petites variations aléatoires), ce qui lui vaut son surnom de bruit blanc.
4. Bruit respiratoire.
Répétez les les questions 1.1 et 1.2 avec un bruit dit respiratoire, généré ci dessous.
Step7: Est ce une simulation raisonnable de variations liées à la respiration? pourquoi?
On voit que ce signal est essentiellement composé de fréquences autour de 0.3Hz. Cela était déjà apparent avec l'introduction d'un cosinus de fréquence 0.3Hz dans la génération (ligne 7). L'amplitude de ce cosinus est elle-même modulée par un autre cosinus, plus lent (ligne 11). D'aprés wikipedia, un adulte respire de 16 à 20 fois par minutes, soit une fréquence de 0.26 à 0.33Hz (en se ramenant en battements par secondes). Cette simulation utilise donc une fréquence raisonnable pour simuler la respiration.
5. Ligne de base.
Répétez les les questions 1.1 et 1.2 avec une dérive de la ligne de base, telle que générée ci dessous.
Step8: Le vecteur base est une fonction linéaire du temps (ligne 4). En représentation fréquentielle, il s'agit d'un signal essentiellement basse fréquence.
6. Mélange de signaux.
On va maintenant mélanger nos différentes signaux, tel qu'indiqué ci-dessous. Représentez les trois mélanges en temps et en fréquence, superposé au signal d'intérêt sans aucun bruit (variable signal). Pouvez-vous reconnaitre la contribution de chaque source dans le mélange fréquentiel? Est ce que les puissances de fréquences s'additionnent systématiquement?
Step9: On reconnait clairement la série de pics qui composent la variable signal auquelle vient se rajouter les fréquences de la variable resp, à 0.3 Hz. Notez que les spectres de puissance ne s'additionnent pas nécessairement, cela dépend si, à une fréquence donnée, les signaux que l'on additionne sont ou non en phase.
Step10: L'addition du bruit blanc ajoute des variations aléatoires dans la totalité du spectre et, hormis les pics du spectre associé à signal, il devient difficile de distinguer la contribution de resp.
Step11: Section 2
Step12: Représentez le noyau en fréquence (avec Analyse_Frequence_Puissance), commentez sur l'impact fréquentiel de la convolution. Faire un deuxième graphe représentant le signal d'intérêt superposé au signal filtré.
Step13: On voit que cette convolution supprime exactement la fréquence correspondant à la largeur du noyau (3 secondes). Il se trouve que cette fréquence est aussi trés proche de la fréquence respiratoire choisie! Visuellement, le signal filtré est très proche du signal original. La mesure d'erreur (tel que demandée dans la question 2.2. ci dessous est de 3%.
2.2 Répétez la question 2.1 avec un noyau plus gros.
Commentez qualitativement sur la qualité du débruitage.
Step14: On voit que ce noyau, en plus de supprimer une fréquence légèrement au dessus de 0.3 Hz, supprime aussi une fréquence proche de 0.16 Hz. C'était l'un des pics que l'on avait identifié dans le spectre de signal. De fait, dans la représentation temporelle, on voit que le signal filtré (en noir) est dégradé
Step15: Représentez le noyau en temps et en fréquence. Quelle est la fréquence de coupure du filtre?
Step16: On observe une réduction importante de l'amplitude des fréquences inférieures à 0.1 Hz, qui correspond donc à la fréquence de coupure du filtre.
2.4. Application du filtre de Butterworth.
L'exemple ci dessous filtre le signal avec un filtre passe bas, avec une fréquence de coupure de 0.1. Faire un graphe représentant le signal d'intérêt (signal) superposé au signal filtré. Calculez l'erreur résiduelle, et comparez au filtre par moyenne mobile évalué précédemment.
Step17: Avec une fréquence de coupure de 0.1 Hz, on perd de nombreux pics de signal, notamment celui situé à 0.16Hz. Effectivement dans la représentation en temps on voit que les variations rapides de signal sont perdues, et l'erreur résiduelle est de 6%.
2.5. Optimisation du filtre de Butterworth.
Trouvez une combinaison de filtre passe-haut et de filtre passe-bas de Butterworth qui permette d'améliorer l'erreur résiduelle par rapport au filtre de moyenne mobile. Faire un graphe représentant le signal d'intérêt (signal) superposé au signal filtré, et un second avec le signal d'intérêt superposé au signal bruité, pour référence. | Python Code:
%matplotlib inline
from pymatbridge import Octave
octave = Octave()
octave.start()
%load_ext pymatbridge
Explanation: Laboratoire d'introduction au filtrage - Corrigé
Cours NSC-2006, année 2015
Méthodes quantitatives en neurosciences
Pierre Bellec, Yassine Ben Haj Ali
Objectifs:
Ce laboratoire a pour but de vous initier au filtrage de signaux temporels avec Matlab. Nous allons travailler avec un signal simulé qui contient plusieurs sources, une d'intérêt et d'autres qui sont du bruit.
- Nous allons tout d'abord nous familiariser avec les différentes sources de signal, en temps et en fréquence.
- Nous allons ensuite chercher un filtrage qui permette d'éliminer le bruit sans altérer de maniére forte le signal.
- Enfin, nous évaluerons l'impact d'une perte de résolution temporelle sur notre capacité à débruiter le signal, lié au phénomène de repliement de fréquences (aliasing).
Pour réaliser ce laboratoire, il est nécessaire de récupérer la
ressource suivante sur studium:
labo7_filtrage.zip: cette archive contient plusieurs codes et jeux de données. SVP décompressez l'archive et copiez les fichiers dans votre répertoire de travail Matlab.
De nombreuses portions du labo consiste à modifier un code réalisé dans une autre question. Il est donc fortement conseillé d'ouvrir un nouveau fichier dans l'éditeur matlab, et d'exécuter le code depuis l'éditeur, de façon à pouvoir copier des paragraphes de code rapidement. Ne pas tenir compte et ne pas exécuter cette partie du code:
End of explanation
%%matlab
%% Définition du signal d'intêret
% fréquence du signal
freq = 1;
% on crée des blocs off/on de 15 secondes
bloc = repmat([zeros(1,15*freq) ones(1,15*freq)],[1 10]);
% les temps d'acquisition
ech = (0:(1/freq):(length(bloc)/freq)-(1/freq));
% ce paramètre fixe le pic de la réponse hémodynamique
pic = 5;
% noyau de réponse hémodynamique
noyau = [linspace(0,1,(pic*freq)+1) linspace(1,-0.3,(pic*freq)/2) linspace(-0.3,0,(pic*freq)/2)];
noyau = [zeros(1,length(noyau)-1) noyau];
% normalisation du noyau
noyau = noyau/sum(abs(noyau));
% convolution du bloc avec le noyau
signal = conv(bloc,noyau,'same');
% on fixe la moyenne de la réponse à zéro
signal = signal - mean(signal);
Explanation: Section 1: Exemple de signaux, temps et fréquence
1. Commençons par générer un signal d'intérêt:
End of explanation
%%matlab
%% représentation en temps
% Nouvelle figure
figure
% On commence par tracer le noyau
plot(noyau,'-bo')
% Nouvelle figure
figure
% On trace le signal, en utilisant ech pour spécifier les échantillons temporels
plot(ech,signal)
% Les fonctions xlim et ylim permettent d'ajuster les valeurs min/max des axes
xlim([-1 max(ech)+1])
ylim([-0.6 0.7])
% Les fonctions xlabel et ylabel permettent de labéliser les axes
xlabel('Temps (s)')
ylabel('a.u')
Explanation: Représentez noyau et signal en temps, à l'aide de la commande plot. Utiliser les temps d'acquisition corrects, et labéliser les axes (xlabel, ylabel). Comment est généré signal? reconnaissez vous le processus employé? Est ce que le signal est périodique? si oui, quelle est sa période? Peut-on trouver la réponse dans le code?
End of explanation
%%matlab
%% représentation en fréquences
% Nouvelle figure
figure
% La fonction utilise le signal comme premier argument, et les échantillons temporels comme deuxième
Analyse_Frequence_Puissance(signal,ech);
% On ajuste l'échelle de l'axe y.
ylim([10^(-10) 1])
Explanation: A partir du graphe du noyau, on reconnait la fonction de réponse hémodynamique utilisée lors du laboratoire sur la convolution. Le signal est généré par convolution du noyau avec un vecteur bloc (ligne 18 du bloc de code initial). On voit que bloc est créé en assemblant deux vecteurs de 15 zéros et de 15 uns (ligne 7). Le signal est donc périodique. Comme la fréquence d'acquisition est de 1 Hz (ligne 9 définissant les échantillons temporels, on voit un pas de 1/freq, avec freq=1, ligne 5), la période du signal est de 30 secondes, soit une fréquence de 0.033Hz. On peut confirmer cela visuellement sur le graphe.
2. Représenter le contenu fréquentiel de signal avec la commande Analyse_Frequence_Puissance.
Utilisez la commande ylim pour ajuster les limites de l'axe y et pouvoir bien observer le signal. Notez que l'axe y (puissance) est en échelle log (dB). Quelles sont les fréquences principales contenues dans le signal? Etait-ce attendu?
End of explanation
%%matlab
%% définition du bruit blanc
bruit = 0.05*randn(size(signal));
Explanation: Comme attendu, étant donné le caractère périodique de période 30s du signal, la fréquence principale est de 0.033 Hz. Les pics suivants sont situés à 0.1 Hz et 0.166 Hz.
3. Répétez les questions 1.1 et 1.2 avec un bruit dit blanc, généré ci dessous.
End of explanation
%%matlab
% Ce code n'est pas commenté, car essentiellement identique
% à ceux présentés en question 1.1. et 1.2.
%% représentation en temps
figure
plot(ech,bruit)
ylim([-0.6 0.7])
xlabel('Temps (s)')
ylabel('a.u')
%% représentation en fréquences
figure
Analyse_Frequence_Puissance(bruit,ech);
ylim([10^(-10) 1])
Explanation: Pourquoi est ce que ce bruit porte ce nom?
End of explanation
%%matlab
%% définition du signal de respiration
% fréquence de la respiration
freq_resp = 0.3;
% un modéle simple (cosinus) des fluctuations liées à la respiration
resp = cos(2*pi*freq_resp*ech/freq);
% fréquence de modulation lente de l'amplitude respiratoire
freq_mod = 0.01;
% modulation de l'amplitude du signal lié à la respiration
resp = resp.*(ones(size(resp))-0.1*cos(2*pi*freq_mod*ech/freq));
% on force une moyenne nulle, et une amplitude max de 0.1
resp = 0.1*(resp-mean(resp));
%%matlab
% Ce code n'est pas commenté, car essentiellement identique
% à ceux présentés en question 1.1. et 1.2.
%% représentation en temps
figure
plot(ech,resp)
xlim([-1 max(ech)/2+1])
xlabel('Temps (s)')
ylabel('a.u')
%% représentation en fréquences
figure
[ech_f,signal_f,signal_af,signal_pu] = Analyse_Frequence_Puissance(resp,ech);
set(gca,'yscale','log');
ylim([10^(-35) 1])
Explanation: Le vecteur bruit est généré à l'aide de la fonction randn, qui est un générateur pseudo-aléatoires d'échantillons indépendants Gaussiens. Le spectre de puissance représente l'amplitude de la contribution de chaque fréquence au signal. On peut également décomposer une couleur en fréquences. Quand toutes les fréquences sont présentes, et en proportion similaire, on obtient du blanc. Le bruit Gaussien a un spectre de puissance plat (hormis de petites variations aléatoires), ce qui lui vaut son surnom de bruit blanc.
4. Bruit respiratoire.
Répétez les les questions 1.1 et 1.2 avec un bruit dit respiratoire, généré ci dessous.
End of explanation
%%matlab
%% définition de la ligne de base
base = 0.1*(ech-mean(ech))/mean(ech);
%%matlab
% Ce code n'est pas commenté, car essentiellement identique
% à ceux présentés en question 1.1. et 1.2.
%% représentation en temps
figure
plot(ech,base)
xlim([-1 max(ech)+1])
ylim([-0.6 0.7])
xlabel('Temps (s)')
ylabel('a.u')
%% représentation en fréquence
figure
[ech_f,base_f,base_af,base_pu] = Analyse_Frequence_Puissance(base,ech);
ylim([10^(-10) 1])
Explanation: Est ce une simulation raisonnable de variations liées à la respiration? pourquoi?
On voit que ce signal est essentiellement composé de fréquences autour de 0.3Hz. Cela était déjà apparent avec l'introduction d'un cosinus de fréquence 0.3Hz dans la génération (ligne 7). L'amplitude de ce cosinus est elle-même modulée par un autre cosinus, plus lent (ligne 11). D'aprés wikipedia, un adulte respire de 16 à 20 fois par minutes, soit une fréquence de 0.26 à 0.33Hz (en se ramenant en battements par secondes). Cette simulation utilise donc une fréquence raisonnable pour simuler la respiration.
5. Ligne de base.
Répétez les les questions 1.1 et 1.2 avec une dérive de la ligne de base, telle que générée ci dessous.
End of explanation
%%matlab
%% Mélanges de signaux
y_sr = signal + resp;
y_srb = signal + resp + bruit;
y_srbb = signal + resp + bruit + base;
%%matlab
% Ce code n'est pas commenté, car essentiellement identique
% à ceux présentés en question 1.1. et 1.2.
% notez tout de même l'utilisation d'un hold on pour superposer la variable `signal` (sans bruit)
% au mélange de signaux.
y = y_sr;
% représentation en temps
figure
plot(ech,y)
hold on
plot(ech,signal,'r')
xlim([-1 301])
ylim([-0.8 0.8])
xlabel('Temps (s)')
ylabel('a.u')
% représentation en fréquence
figure
Analyse_Frequence_Puissance(y,ech);
ylim([10^(-10) 1])
Explanation: Le vecteur base est une fonction linéaire du temps (ligne 4). En représentation fréquentielle, il s'agit d'un signal essentiellement basse fréquence.
6. Mélange de signaux.
On va maintenant mélanger nos différentes signaux, tel qu'indiqué ci-dessous. Représentez les trois mélanges en temps et en fréquence, superposé au signal d'intérêt sans aucun bruit (variable signal). Pouvez-vous reconnaitre la contribution de chaque source dans le mélange fréquentiel? Est ce que les puissances de fréquences s'additionnent systématiquement?
End of explanation
%%matlab
% Idem au code précédent, y_sr est remplacé par y_srb dans la ligne suivante.
y = y_srb;
% représentation en temps
figure
plot(ech,y)
hold on
plot(ech,signal,'r')
xlim([-1 301])
ylim([-0.8 0.8])
xlabel('Temps (s)')
ylabel('a.u')
% représentation en fréquence
figure
[freq_f,y_f,y_af,y_pu] = Analyse_Frequence_Puissance(y,ech);
ylim([10^(-10) 1])
Explanation: On reconnait clairement la série de pics qui composent la variable signal auquelle vient se rajouter les fréquences de la variable resp, à 0.3 Hz. Notez que les spectres de puissance ne s'additionnent pas nécessairement, cela dépend si, à une fréquence donnée, les signaux que l'on additionne sont ou non en phase.
End of explanation
%%matlab
% Idem au code précédent, y_srb est remplacé par y_srbb dans la ligne suivante.
y = y_srbb;
% représentation en temps
figure
plot(ech,y)
hold on
plot(ech,signal,'r')
xlim([-1 301])
ylim([-0.8 0.8])
xlabel('Temps (s)')
ylabel('a.u')
% représentation en fréquence
figure
[freq_f,y_f,y_af,y_pu] = Analyse_Frequence_Puissance(y,ech);
ylim([10^(-10) 1])
Explanation: L'addition du bruit blanc ajoute des variations aléatoires dans la totalité du spectre et, hormis les pics du spectre associé à signal, il devient difficile de distinguer la contribution de resp.
End of explanation
%%matlab
%%définition d'un noyau de moyenne mobile
% taille de la fenêtre pour la moyenne mobile, en nombre d'échantillons temporels
taille = ceil(3*freq);
% le noyau, défini sur une fenêtre identique aux signaux précédents
noyau = [zeros(1,(length(signal)-taille-1)/2) ones(1,taille) zeros(1,(length(signal)-taille-1)/2)];
% normalisation du noyau
noyau = noyau/sum(abs(noyau));
% convolution avec le noyau (filtrage)
y_f = conv(y_sr,noyau,'same');
Explanation: Section 2: Optimisation de filtre
2.1. Nous allons commencer par appliquer un filtre de moyenne mobile, avec le signal le plus simple (y_sr).
Pour cela on crée un noyau et on applique une convolution, comme indiqué ci dessous.
End of explanation
%%matlab
%% Représentation fréquentielle du filtre
figure
% représentation fréquentielle du noyau
Analyse_Frequence_Puissance(noyau,ech);
ylim([10^(-10) 1])
%% représentation du signal filtré
figure
% signal aprés filtrage
plot(ech,y_f,'k')
hold on
% signal sans bruit
plot(ech,signal,'r')
%% erreur résiduelle
err = sqrt(mean((signal-y_f).^2))
Explanation: Représentez le noyau en fréquence (avec Analyse_Frequence_Puissance), commentez sur l'impact fréquentiel de la convolution. Faire un deuxième graphe représentant le signal d'intérêt superposé au signal filtré.
End of explanation
%%matlab
% taille de la fenêtre pour la moyenne mobile, en nombre d'échantillons temporels
% On passe de 3 à 7
% ATTENTION: sous matlab, ce code ne marche qu'avec des noyaux de taille impaire
taille = ceil(6*freq);
% le noyau, défini sur une fenêtre identique aux signaux précédents
noyau = [zeros(1,(length(signal)-taille-1)/2) ones(1,taille) zeros(1,(length(signal)-taille-1)/2)];
% normalisation du moyau
noyau = noyau/sum(abs(noyau));
% convolution avec le noyau (filtrage)
y_f = conv(y_sr,noyau,'same');
%% Représentation fréquentielle du filtre
figure
Analyse_Frequence_Puissance(noyau,ech);
ylim([10^(-10) 1])
%% représentation du signal filtré
figure
plot(ech,y_f,'k')
hold on
plot(ech,signal,'r')
%% erreur résiduelle
err = sqrt(mean((signal-y_f).^2))
Explanation: We see that this convolution removes exactly the frequency corresponding to the width of the kernel (3 seconds). It turns out that this frequency is also very close to the chosen respiratory frequency! Visually, the filtered signal is very close to the original signal. The error measure (as requested in question 2.2 below) is 3%.
2.2 Repeat question 2.1 with a larger kernel.
Comment qualitatively on the quality of the denoising.
End of explanation
%%matlab
%% Définition d'une implusion finie unitaire
impulsion = zeros(size(signal));
impulsion(round(length(impulsion)/2))=1;
noyau = FiltrePasseHaut(impulsion,freq,0.1);
Explanation: We see that this kernel, in addition to removing a frequency slightly above 0.3 Hz, also removes a frequency close to 0.16 Hz. That was one of the peaks we had identified in the spectrum of signal. Indeed, in the time-domain representation we see that the filtered signal (in black) is degraded: the fast fluctuations of the red signal are lost. Accordingly, the residual error is now 7.6%, higher than the 3% of the previous filter.
2.3 We will now apply Butterworth filters.
These filters are available in functions you have already used in the lab on the Fourier transform:
- FiltrePasseHaut.m: removes the low frequencies.
- FiltrePasseBas.m: removes the high frequencies.
The Butterworth filter does not explicitly use a convolution kernel. But since it is a linear time-invariant system, we can always recover the kernel by looking at the response to a finite unit impulse.
End of explanation
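FiltrePasseHaut is a helper provided with the course; as a hedged Python equivalent, one could build a Butterworth high-pass filter with SciPy and probe it with a unit impulse in the same way (the filter order and the use of zero-phase filtfilt are assumptions about how the helper works):
```python
import numpy as np
from scipy import signal

freq = 100                                         # assumed sampling frequency (Hz)
b, a = signal.butter(4, 0.1 / (freq / 2), btype='highpass')  # 0.1 Hz cutoff; 4th order is assumed
impulsion = np.zeros(1001)
impulsion[impulsion.size // 2] = 1                 # finite unit impulse in the middle of the window
noyau = signal.filtfilt(b, a, impulsion)           # impulse response = equivalent convolution kernel
```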
%%matlab
%% représentation temporelle
figure
plot(ech,noyau)
xlabel('Temps (s)')
ylabel('a.u')
%% représentation fréquentielle
figure
Analyse_Frequence_Puissance(noyau,ech);
set(gca,'yscale','log');
Explanation: Plot the kernel in time and in frequency. What is the cutoff frequency of the filter?
End of explanation
%%matlab
y = y_sr;
y_f = FiltrePasseBas(y,freq,0.1);
%%représentation du signal filtré
plot(ech,signal,'r')
hold on
plot(ech,y_f,'k')
err = sqrt(mean((signal-y_f).^2))
Explanation: We observe a large reduction in the amplitude of the frequencies below 0.1 Hz, which therefore corresponds to the cutoff frequency of the filter.
2.4. Applying the Butterworth filter.
The example below filters the signal with a low-pass filter with a cutoff frequency of 0.1 Hz. Make a figure showing the signal of interest (signal) superimposed on the filtered signal. Compute the residual error and compare it with the moving-average filter evaluated previously.
End of explanation
%%matlab
y = y_sr;
%% filtre de Butterworth
% on combine une passe-haut et un passe-bas, de maniére à retirer uniquement les fréquences autour de 0.3 Hz
y_f = FiltrePasseHaut(y,freq,0.35);
y_f = y_f+FiltrePasseBas(y,freq,0.25);
%% représentation du signal filtré
figure
plot(ech,signal,'r')
hold on
plot(ech,y_f,'k')
err = sqrt(mean((signal-y_f).^2))
%% représentation du signal brut
figure
plot(ech,signal,'r')
hold on
plot(ech,y,'k')
Explanation: With a cutoff frequency of 0.1 Hz, many peaks of signal are lost, notably the one located at 0.16 Hz. Indeed, in the time-domain representation we see that the fast variations of signal are lost, and the residual error is 6%.
2.5. Optimizing the Butterworth filter.
Find a combination of a Butterworth high-pass filter and low-pass filter that improves the residual error compared with the moving-average filter. Make a figure showing the signal of interest (signal) superimposed on the filtered signal, and a second one with the signal of interest superimposed on the noisy signal, for reference.
End of explanation |
11,874 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<p><font size="6"><b>Pandas
Step1: Combining data is essential functionality in a data analysis workflow.
Data is distributed in multiple files, different information needs to be merged, new data is calculated, .. and needs to be added together. Pandas provides various facilities for easily combining together Series and DataFrame objects
Step2: Adding columns
As we already have seen before, adding a single column is very easy
Step3: Adding multiple columns at once is also possible. For example, the following method gives us a DataFrame of two columns
Step4: We can add both at once to the dataframe
Step5: Concatenating data
The pd.concat function does all of the heavy lifting of combining data in different ways.
pd.concat takes a list or dict of Series/DataFrame objects and concatenates them in a certain direction (axis) with some configurable handling of “what to do with the other axes”.
Combining rows - pd.concat
Assume we have some similar data as in countries, but for a set of different countries
Step6: We now want to combine the rows of both datasets
Step7: If we don't want the index to be preserved
Step8: When the two dataframes don't have the same set of columns, by default missing values get introduced
Step9: We can also pass a dictionary of objects instead of a list of objects. Now the keys of the dictionary are preserved as an additional index level
Step10: <div class="alert alert-info">
**NOTE**
Step11: Assume we have another dataframe with more information about the 'Embarked' locations
Step12: We now want to add those columns to the titanic dataframe, for which we can use pd.merge, specifying the column on which we want to merge the two datasets
Step13: In this case we use how='left (a "left join") because we wanted to keep the original rows of df and only add matching values from locations to it. Other options are 'inner', 'outer' and 'right' (see the docs for more on this, or this visualization
Step14: SQLite (https
Step15: Pandas provides functionality to query data from a database. Let's fetch the main dataset contained in this file
Step16: More information about the identifyer variables (the first three columns) can be found in the other tables. For example, the "CD_LGL_PSN_VAT" column contains information about the legal form of the enterprise. What the values in this column mean, can be found in a different table
Step17: This type of data organization is called a "star schema" (https
Step18: <div class="alert alert-success">
**EXERCISE 2**
Step19: <div class="alert alert-success">
**EXERCISE 3**
Step20: Joining with spatial data to make a map
The course materials contains a simplified version of the "statistical sectors" dataset (https
Step21: The resulting dataframe (a GeoDataFrame) has a "geometry" column (in this case with polygons representing the borders of the municipalities), and a couple of new methods with geospatial functionality (for example, the plot() method by default makes a map). It is still a DataFrame, and everything we have learned about pandas can be used here as well.
Let's visualize the change in number of registered enterprises on a map at the municipality-level.
We first calculate the total number of (existing/starting/stopping) enterprises per municipality
Step22: And add a new column with the relative change in the number of registered enterprises
Step23: We can now merge the dataframe with the geospatial information of the municipalities with the dataframe with the enterprise numbers
Step24: With this joined dataframe, we can make a new map, now visualizing the change in number of registered enterprises ("NUM_VAT_CHANGE")
Step25: Combining columns - pd.concat with axis=1
We can use pd.merge to combine the columns of two DataFrame based on a common column. If our two DataFrames already have equivalent rows, we can also achieve this basic case using pd.concat with specifying axis=1 (or axis="columns").
Assume we have another DataFrame for the same countries, but with some additional statistics
Step26: pd.concat matches the different objects based on the index | Python Code:
import pandas as pd
Explanation: <p><font size="6"><b>Pandas: Combining datasets Part I - concat</b></font></p>
© 2021, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
# redefining the example objects
# series
population = pd.Series({'Germany': 81.3, 'Belgium': 11.3, 'France': 64.3,
'United Kingdom': 64.9, 'Netherlands': 16.9})
# dataframe
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
Explanation: Combining data is essential functionality in a data analysis workflow.
Data is distributed in multiple files, different information needs to be merged, new data is calculated, .. and needs to be added together. Pandas provides various facilities for easily combining together Series and DataFrame objects
End of explanation
pop_density = countries['population']*1e6 / countries['area']
pop_density
countries['pop_density'] = pop_density
countries
Explanation: Adding columns
As we already have seen before, adding a single column is very easy:
End of explanation
countries["country"].str.split(" ", expand=True)
Explanation: Adding multiple columns at once is also possible. For example, the following method gives us a DataFrame of two columns:
End of explanation
countries[['first', 'last']] = countries["country"].str.split(" ", expand=True)
countries
Explanation: We can add both at once to the dataframe:
End of explanation
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
data = {'country': ['Nigeria', 'Rwanda', 'Egypt', 'Morocco', ],
'population': [182.2, 11.3, 94.3, 34.4],
'area': [923768, 26338 , 1010408, 710850],
'capital': ['Abuja', 'Kigali', 'Cairo', 'Rabat']}
countries_africa = pd.DataFrame(data)
countries_africa
Explanation: Concatenating data
The pd.concat function does all of the heavy lifting of combining data in different ways.
pd.concat takes a list or dict of Series/DataFrame objects and concatenates them in a certain direction (axis) with some configurable handling of “what to do with the other axes”.
Combining rows - pd.concat
Assume we have some similar data as in countries, but for a set of different countries:
End of explanation
pd.concat([countries, countries_africa])
Explanation: We now want to combine the rows of both datasets:
End of explanation
pd.concat([countries, countries_africa], ignore_index=True)
Explanation: If we don't want the index to be preserved:
End of explanation
pd.concat([countries, countries_africa[['country', 'capital']]], ignore_index=True)
Explanation: When the two dataframes don't have the same set of columns, by default missing values get introduced:
End of explanation
pd.concat({'europe': countries, 'africa': countries_africa})
Explanation: We can also pass a dictionary of objects instead of a list of objects. Now the keys of the dictionary are preserved as an additional index level:
End of explanation
df = pd.read_csv("data/titanic.csv")
df = df.loc[:9, ['Survived', 'Pclass', 'Sex', 'Age', 'Fare', 'Embarked']]
df
Explanation: <div class="alert alert-info">
**NOTE**:
A typical use case of `concat` is when you create (or read) multiple DataFrame with a similar structure in a loop, and then want to combine this list of DataFrames into a single DataFrame.
For example, assume you have a folder of similar CSV files (eg the data per day) you want to read and combine, this would look like:
```python
import pathlib
data_files = pathlib.Path("data_directory").glob("*.csv")
dfs = []
for path in data_files:
temp = pd.read_csv(path)
dfs.append(temp)
df = pd.concat(dfs)
```
<br>
Important: append to a list (not DataFrame), and concat this list at the end after the loop!
</div>
Joining data with pd.merge
Using pd.concat above, we combined datasets that had the same columns. But, another typical case is where you want to add information of a second dataframe to a first one based on one of the columns they have in common. That can be done with pd.merge.
Let's look again at the titanic passenger data, but taking a small subset of it to make the example easier to grasp:
End of explanation
locations = pd.DataFrame({'Embarked': ['S', 'C', 'N'],
'City': ['Southampton', 'Cherbourg', 'New York City'],
'Country': ['United Kindom', 'France', 'United States']})
locations
Explanation: Assume we have another dataframe with more information about the 'Embarked' locations:
End of explanation
pd.merge(df, locations, on='Embarked', how='left')
Explanation: We now want to add those columns to the titanic dataframe, for which we can use pd.merge, specifying the column on which we want to merge the two datasets:
End of explanation
import zipfile
with zipfile.ZipFile("data/TF_VAT_NACE_SQ_2019.zip", "r") as zip_ref:
zip_ref.extractall()
Explanation: In this case we use how='left' (a "left join") because we wanted to keep the original rows of df and only add matching values from locations to it. Other options are 'inner', 'outer' and 'right' (see the docs for more on this, or this visualization: https://joins.spathon.com/).
Exercise with VAT numbers
For this exercise, we start from an open dataset on "Enterprises subject to VAT" (VAT = Value Added Tax), from https://statbel.fgov.be/en/open-data/enterprises-subject-vat-according-legal-form-11. For different regions and different enterprise types, it contains the number of enterprises subject to VAT ("MS_NUM_VAT"), and the number of such enterprises that started ("MS_NUM_VAT_START") or stopped ("MS_NUM_VAT_STOP") in 2019.
This file is provided as a zipped archive of a SQLite database file. Let's first unzip it:
End of explanation
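To see how the other `how` options behave, here is a small comparison on made-up toy frames (not part of the original notebook):
```python
left = pd.DataFrame({'key': ['A', 'B', 'C'], 'x': [1, 2, 3]})
right = pd.DataFrame({'key': ['B', 'C', 'D'], 'y': [10, 20, 30]})

pd.merge(left, right, on='key', how='inner')   # only keys present in both frames: B, C
pd.merge(left, right, on='key', how='outer')   # all keys, with NaN where a side has no match
pd.merge(left, right, on='key', how='right')   # keep the rows of the right frame: B, C, D
```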
import sqlite3
# connect with the database file
con = sqlite3.connect("TF_VAT_NACE_2019.sqlite")
# list the tables that are present in the database
con.execute("SELECT name FROM sqlite_master WHERE type='table';").fetchall()
Explanation: SQLite (https://www.sqlite.org/index.html) is a light-weight database engine, and a database can be stored as a single file. With the sqlite3 module of the Python standard library, we can open such a database and inspect it:
End of explanation
df = pd.read_sql("SELECT * FROM TF_VAT_NACE_2019", con)
df
Explanation: Pandas provides functionality to query data from a database. Let's fetch the main dataset contained in this file:
End of explanation
df_legal_forms = pd.read_sql("SELECT * FROM TD_LGL_PSN_VAT", con)
df_legal_forms
Explanation: More information about the identifier variables (the first three columns) can be found in the other tables. For example, the "CD_LGL_PSN_VAT" column contains information about the legal form of the enterprise. What the values in this column mean can be found in a different table:
End of explanation
# %load _solutions/pandas_09_combining_datasets1.py
Explanation: This type of data organization is called a "star schema" (https://en.wikipedia.org/wiki/Star_schema), and if we want to get a "denormalized" version of the main dataset (all the data combined), we need to join the different tables.
<div class="alert alert-success">
**EXERCISE 1**:
Add the full name of the legal form (in the DataFrame `df_legal_forms`) to the main dataset (`df`). For this, join both datasets based on the "CD_LGL_PSN_VAT" column.
<details><summary>Hints</summary>
- `pd.merge` requires a left and a right DataFrame, the specification `on` to define the common index and the merge type `how`.
- Decide which type of merge is most appropriate: left, right, inner,...
</details>
</div>
End of explanation
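The actual solution is loaded from the `_solutions` folder and not shown here; one possible approach, sketched with the variable names used above, is:
```python
# A left join keeps every row of the main dataset and adds the legal-form columns
df_merged = pd.merge(df, df_legal_forms, on='CD_LGL_PSN_VAT', how='left')
df_merged.head()
```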
# %load _solutions/pandas_09_combining_datasets2.py
Explanation: <div class="alert alert-success">
**EXERCISE 2**:
How many registered enterprises are there for each legal form? Sort the result from most to least occurring form.
<details><summary>Hints</summary>
- To count the number of registered enterprises, take the `sum` _for each_ (`groupby`) legal form.
- Check the `ascending` parameter of the `sort_values` function.
</details>
</div>
End of explanation
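Again the solution itself is not shown; a hedged sketch of one way to answer the question, grouping on the legal-form code column (after the Exercise 1 merge you could group on the descriptive name column instead):
```python
# Total number of registered enterprises per legal form, sorted from most to least
df.groupby('CD_LGL_PSN_VAT')['MS_NUM_VAT'].sum().sort_values(ascending=False)
```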
# %load _solutions/pandas_09_combining_datasets3.py
# %load _solutions/pandas_09_combining_datasets4.py
# %load _solutions/pandas_09_combining_datasets5.py
Explanation: <div class="alert alert-success">
**EXERCISE 3**:
How many enterprises are registered per province?
* Read in the "TD_MUNTY_REFNIS" table from the database file into a `df_muni` dataframe, which contains more information about the municipality (and the province in which the municipality is located).
* Merge the information about the province into the main `df` dataset.
* Using the joined dataframe, calculate the total number of registered companies per province.
<details><summary>Hints</summary>
- Data loading in Pandas requires `pd.read_...`, in this case `read_sql`. Do not forget the connection object as a second input.
- `df_muni` contains a lot of columns, whereas we are only interested in the province information. Only use the relevant columns "TX_PROV_DESCR_EN" and "CD_REFNIS" (you need this to join the data).
- Calculate the `sum` _for each_ (`groupby`) province.
</details>
</div>
End of explanation
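A hedged sketch of the steps described in the hints (the column names come from the hints; the actual solution files may differ):
```python
# Read the municipality table, keep only the join key and the province name
df_muni = pd.read_sql("SELECT * FROM TD_MUNTY_REFNIS", con)
muni_subset = df_muni[['CD_REFNIS', 'TX_PROV_DESCR_EN']]

# Merge the province information into the main dataset and aggregate per province
df_with_prov = pd.merge(df, muni_subset, on='CD_REFNIS', how='left')
df_with_prov.groupby('TX_PROV_DESCR_EN')['MS_NUM_VAT'].sum()
```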
import geopandas
import fiona
stat = geopandas.read_file("data/statbel_statistical_sectors_2019.shp.zip")
stat.head()
stat.plot()
Explanation: Joining with spatial data to make a map
The course materials contain a simplified version of the "statistical sectors" dataset (https://statbel.fgov.be/nl/open-data/statistische-sectoren-2019), with the borders of the municipalities. This dataset is provided as a zipped ESRI Shapefile, one of the file formats most often used in GIS for vector data.
The GeoPandas package extends pandas with geospatial functionality.
End of explanation
df_by_muni = df.groupby("CD_REFNIS").sum()
Explanation: The resulting dataframe (a GeoDataFrame) has a "geometry" column (in this case with polygons representing the borders of the municipalities), and a couple of new methods with geospatial functionality (for example, the plot() method by default makes a map). It is still a DataFrame, and everything we have learned about pandas can be used here as well.
Let's visualize the change in number of registered enterprises on a map at the municipality-level.
We first calculate the total number of (existing/starting/stopping) enterprises per municipality:
End of explanation
df_by_muni["NUM_VAT_CHANGE"] = (df_by_muni["MS_NUM_VAT_START"] - df_by_muni["MS_NUM_VAT_STOP"]) / df_by_muni["MS_NUM_VAT"] * 100
df_by_muni
Explanation: And add a new column with the relative change in the number of registered enterprises:
End of explanation
joined = pd.merge(stat, df_by_muni, left_on="CNIS5_2019", right_on="CD_REFNIS")
joined
Explanation: We can now merge the dataframe with the geospatial information of the municipalities with the dataframe with the enterprise numbers:
End of explanation
joined["NUM_VAT_CHANGE_CAT"] = pd.cut(joined["NUM_VAT_CHANGE"], [-15, -6, -4, -2, 2, 4, 6, 15])
joined.plot(column="NUM_VAT_CHANGE_CAT", figsize=(10, 10), cmap="coolwarm", legend=True)#k=7, scheme="equal_interval")
Explanation: With this joined dataframe, we can make a new map, now visualizing the change in number of registered enterprises ("NUM_VAT_CHANGE"):
End of explanation
data = {'country': ['Belgium', 'France', 'Germany', 'Netherlands', 'United Kingdom'],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, 244820],
'capital': ['Brussels', 'Paris', 'Berlin', 'Amsterdam', 'London']}
countries = pd.DataFrame(data)
countries
data = {'country': ['Belgium', 'France', 'Netherlands'],
'GDP': [496477, 2650823, 820726],
'area': [8.0, 9.9, 5.7]}
country_economics = pd.DataFrame(data).set_index('country')
country_economics
pd.concat([countries, country_economics], axis=1)
Explanation: Combining columns - pd.concat with axis=1
We can use pd.merge to combine the columns of two DataFrame based on a common column. If our two DataFrames already have equivalent rows, we can also achieve this basic case using pd.concat with specifying axis=1 (or axis="columns").
Assume we have another DataFrame for the same countries, but with some additional statistics:
End of explanation
countries2 = countries.set_index('country')
countries2
pd.concat([countries2, country_economics], axis="columns")
Explanation: pd.concat matches the different objects based on the index:
End of explanation |
11,875 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Example - FRET histogram fitting
This notebook is part of smFRET burst analysis software FRETBursts.
In this notebook shows how to fit a FRET histogram.
For a complete tutorial on burst analysis see
FRETBursts - us-ALEX smFRET burst analysis.
Step1: Get and process data
Step2: Fitting the FRET histogram
We start defining the model. Here we choose a 3-Gaussian model
Step3: The previsou cell prints all the model parameters.
Each parameters has an initial value and bounds (min, max).
The column vary tells if a parameter is varied during the fit
(if False the parameter is fixed).
Parameters with an expression (Expr column) are not free but
the are computed as a function of other parameters.
We can modify the paramenters constrains as follows
Step4: Then, we fit and plot the model
Step5: The results are in E_fitter
Step6: To get a dictionary of values
Step7: This is the startndard lmfit's fit report
Step8: The previous cell reports error ranges computed from the covariance matrix.
More accurare confidence intervals
can be obtained with
Step9: Tidy fit results
It is convenient to put the fit results in a DataFrame for further analysis.
A dataframe of fitted parameters is already in E_fitter
Step10: With pybroom we can get a "tidy" DataFrame
with more complete fit results
Step11: Now, for example, we can easily select parameters by name | Python Code:
from fretbursts import *
sns = init_notebook(apionly=True)
import lmfit
print('lmfit version:', lmfit.__version__)
# Tweak here matplotlib style
import matplotlib as mpl
mpl.rcParams['font.sans-serif'].insert(0, 'Arial')
mpl.rcParams['font.size'] = 12
%config InlineBackend.figure_format = 'retina'
Explanation: Example - FRET histogram fitting
This notebook is part of smFRET burst analysis software FRETBursts.
This notebook shows how to fit a FRET histogram.
For a complete tutorial on burst analysis see
FRETBursts - us-ALEX smFRET burst analysis.
End of explanation
url = 'http://files.figshare.com/2182601/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
download_file(url, save_dir='./data')
full_fname = "./data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5"
d = loader.photon_hdf5(full_fname)
loader.alex_apply_period(d)
d.calc_bg(bg.exp_fit, time_s=1000, tail_min_us=(800, 4000, 1500, 1000, 3000))
d.burst_search(L=10, m=10, F=6)
ds = d.select_bursts(select_bursts.size, add_naa=True, th1=30)
Explanation: Get and process data
End of explanation
model = mfit.factory_three_gaussians()
model.print_param_hints()
Explanation: Fitting the FRET histogram
We start defining the model. Here we choose a 3-Gaussian model:
End of explanation
model.set_param_hint('p1_center', value=0.1, min=-0.1, max=0.3)
model.set_param_hint('p2_center', value=0.4, min=0.3, max=0.7)
model.set_param_hint('p2_sigma', value=0.04, min=0.02, max=0.18)
model.set_param_hint('p3_center', value=0.85, min=0.7, max=1.1)
Explanation: The previous cell prints all the model parameters.
Each parameter has an initial value and bounds (min, max).
The column vary tells if a parameter is varied during the fit
(if False the parameter is fixed).
Parameters with an expression (Expr column) are not free but
they are computed as a function of other parameters.
We can modify the parameter constraints as follows:
End of explanation
E_fitter = bext.bursts_fitter(ds, 'E', binwidth=0.03)
E_fitter.fit_histogram(model=model, pdf=False, method='nelder')
E_fitter.fit_histogram(model=model, pdf=False, method='leastsq')
dplot(ds, hist_fret, show_model=True, pdf=False);
# dplot(ds, hist_fret, show_model=True, pdf=False, figsize=(6, 4.5));
# plt.xlim(-0.1, 1.1)
# plt.savefig('fret_hist_fit.png', bbox_inches='tight', dpi=200, transparent=False)
Explanation: Then, we fit and plot the model:
End of explanation
res = E_fitter.fit_res[0]
res.params.pretty_print()
Explanation: The results are in E_fitter:
End of explanation
res.values
Explanation: To get a dictionary of values:
End of explanation
print(res.fit_report(min_correl=0.5))
Explanation: This is the standard lmfit fit report:
End of explanation
ci = res.conf_interval()
lmfit.report_ci(ci)
Explanation: The previous cell reports error ranges computed from the covariance matrix.
More accurate confidence intervals
can be obtained with:
End of explanation
E_fitter.params
Explanation: Tidy fit results
It is convenient to put the fit results in a DataFrame for further analysis.
A dataframe of fitted parameters is already in E_fitter:
End of explanation
import pybroom as br
df = br.tidy(res)
df
Explanation: With pybroom we can get a "tidy" DataFrame
with more complete fit results:
End of explanation
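If your pybroom version provides it, a one-row summary of the overall fit can be obtained in the same spirit (this is an assumption about the installed pybroom release, not part of the original notebook):
```python
import pybroom as br
br.glance(res)   # one-row summary of the fit: number of parameters, chi-square, AIC, BIC, ...
```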
df.loc[df.name.str.contains('center')]
df.loc[df.name.str.contains('sigma')]
Explanation: Now, for example, we can easily select parameters by name:
End of explanation |
11,876 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Using Tensorboard in DeepChem
DeepChem Neural Networks models are built on top of tensorflow. Tensorboard is a powerful visualization tool in tensorflow for viewing your model architecture and performance.
In this tutorial we will show how to turn on tensorboard logging for our models, and go show the network architecture for some of our more popular models.
The first thing we have to do is load a dataset that we will monitor model performance over.
Step1: Now we will create our model with tensorboard on. All we have to do to turn tensorboard on is pass the tensorboard=True flag to the constructor of our model
Step2: Viewing the Tensorboard output
When tensorboard is turned on we log all the files needed for tensorboard in model.model_dir. To launch the tensorboard webserver we have to call in a terminal
bash
tensorboard --logdir=model.model_dir
This will launch the tensorboard web server on your local computer on port 6006. Go to http
Step3: If you click "GRAPHS" at the top you can see a visual layout of the model. Here is what our GraphConvModel Model looks like | Python Code:
from IPython.display import Image, display
import deepchem as dc
from deepchem.molnet import load_tox21
from deepchem.models.tensorgraph.models.graph_models import GraphConvModel
# Load Tox21 dataset
tox21_tasks, tox21_datasets, transformers = load_tox21(featurizer='GraphConv')
train_dataset, valid_dataset, test_dataset = tox21_datasets
Explanation: Using Tensorboard in DeepChem
DeepChem Neural Networks models are built on top of tensorflow. Tensorboard is a powerful visualization tool in tensorflow for viewing your model architecture and performance.
In this tutorial we will show how to turn on tensorboard logging for our models, and show the network architecture for some of our more popular models.
The first thing we have to do is load a dataset that we will monitor model performance over.
End of explanation
# Construct the model with tensorboard on
model = GraphConvModel(len(tox21_tasks), mode='classification', tensorboard=True)
# Fit the model
model.fit(train_dataset, nb_epoch=10)
Explanation: Now we will create our model with tensorboard on. All we have to do to turn tensorboard on is pass the tensorboard=True flag to the constructor of our model
End of explanation
display(Image(filename='assets/tensorboard_landing.png'))
Explanation: Viewing the Tensorboard output
When tensorboard is turned on we log all the files needed for tensorboard in model.model_dir. To launch the tensorboard webserver we have to call in a terminal
bash
tensorboard --logdir=model.model_dir
This will launch the tensorboard web server on your local computer on port 6006. Go to http://localhost:6006 in your web browser to look through tensorboard's UI.
The first thing you will see is a graph of the loss vs mini-batches. You can use this data to determine if your model is still improving its loss function over time, or to find out if your gradients are exploding!
End of explanation
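If you are unsure where the event files were written, you can print the log directory and point tensorboard at it (the exact path is machine-specific):
```python
print(model.model_dir)   # directory containing the tensorboard event files
# In a terminal (or a notebook cell prefixed with "!"):
#   tensorboard --logdir=<printed path> --port 6006
```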
display(Image(filename='assets/GraphConvArch.png'))
Explanation: If you click "GRAPHS" at the top you can see a visual layout of the model. Here is what our GraphConvModel Model looks like
End of explanation |
11,877 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
网络科学理论
网络科学简介
王成军
[email protected]
计算传播网 http
Step1: Directed
Links
Step2: <img src = './img/networks.png' width = 1000>
Degree, Average Degree and Degree Distribution
Step3: Undirected network
Step4: Directed network
In directed networks we can define an in-degree and out-degree. The (total) degree is the sum of in-and out-degree.
$k_3^{in} = 2, k_3^{out} = 1, k_3 = 3$
Source
Step5: For a sample of N values
Step6: Average Degree
Undirected
$<k> = \frac{1}{N} \sum_{i = 1}^{N} k_i = \frac{2L}{N}$
Directed
$<k^{in}> = \frac{1}{N} \sum_{i=1}^N k_i^{in}= <k^{out}> = \frac{1}{N} \sum_{i=1}^N k_i^{out} = \frac{L}{N}$
Degree distribution
P(k)
Step7: Undirected
$A_{ij} =1$ if there is a link between node i and j
$A_{ij} =0$ if there is no link between node i and j
$A_{ij}=\begin{bmatrix} 0&1 &0 &1 \ 1&0 &0 &1 \ 0 &0 &0 &1 \ 1&1 &1 & 0 \end{bmatrix}$
Undirected
The adjacency matrix of an undirected network is symmetric.
$A_{ij} = A_{ji} , \
Step8: <img src = './img/pagerank.png' width = 400>
Step9: <img src = './img/pagerank_trap.png' width = 400>
Step10: Ingredient-Flavor Bipartite Network
<img src = './img/bipartite.png' width = 800>
Path 路径
A path is a sequence of nodes in which each node is adjacent to the next one
- In a directed network, the path can follow only the direction of an arrow.
Distance 距离
The distance (shortest path, geodesic path) between two nodes is defined as the number of edges along the shortest path connecting them.
If the two nodes are disconnected, the distance is infinity.
Diameter 直径
Diameter $d_{max}$ is the maximum distance between any pair of nodes in the graph.
Shortest Path 最短路径
The path with the shortest length between two nodes (distance).
Average path length/distance, $<d>$ 平均路径长度
The average of the shortest paths for all pairs of nodes.
for a directed graph | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import networkx as nx
Gu = nx.Graph()
for i, j in [(1, 2), (1, 4), (4, 2), (4, 3)]:
Gu.add_edge(i,j)
nx.draw(Gu, with_labels = True)
Explanation: Network Science Theory
An Introduction to Network Science
王成军
[email protected]
计算传播网 http://computational-communication.com
FROM SADDAM HUSSEIN TO NETWORK THEORY
A SIMPLE STORY (1) The fate of Saddam and network science
SADDAM HUSSEIN: the fifth President of Iraq, serving in this capacity from 16 July 1979 until 9 April 2003
Invasion that started in March 19, 2003. Many of the regime's high ranking officials, including Saddam Hussein, avoided capture.
Hussein was last spotted kissing a baby in Baghdad in April 2003, and then his trace went cold.
Designed a deck of cards, each card engraved with the images of the 55 most wanted.
It worked: by May 1, 2003, 15 men on the cards were captured, and by the end of the month another 12 were under custody.
Yet, the ace of spades, i.e. Hussein himself, remained at large.
<img src = './img/saddam.png' width = 500>
The capture of Saddam Hussein
shows the strong predictive power of networks.
underlies the need to obtain accurate maps of the networks we aim to study;
and the often heroic difficulties of the mapping process.
demonstrates the remarkable stability of these networks
The capture of Hussein was not based on fresh intelligence
but rather on his pre-invasion social links, unearthed from old photos stacked in his family album.
shows that the choice of network we focus on makes a huge difference:
the hierarchical tree captured the official organization of the Iraqi government,
was of no use when it came to Saddam Hussein's whereabouts.
How about Osama bin Laden?
the founder of al-Qaeda, the organization that claimed responsibility for the September 11 attacks on the United States.
On September 1, 2005, the following notice appeared on the CIA's internal board for the bin Laden manhunt: since enhanced interrogation of the detainees was no longer producing anything useful, "all we can do is keep tracking al-Kuwaiti." From then on the CIA tracked the courier al-Kuwaiti for years, eventually intercepting a mobile-phone call between him and bin Laden. This pinpointed his location and, step by step, led to bin Laden's compound in Pakistan. After another nine months of verification and planning, Navy SEALs raided the compound on May 1, 2011 and killed bin Laden.
A SIMPLE STORY (2): August 15, 2003 blackout.
<img src='./img/blackout.png' width = 800>
VULNERABILITY
DUE TO INTERCONNECTIVITY
The 2003 blackout is a typical example of a cascading failure.
1997, when the International Monetary Fund pressured the central banks of several Pacific nations to limit their credit.
2009-2011 financial melt-down
An important theme of this class:
we must understand how network structure affects the robustness of a complex system.
develop quantitative tools to assess the interplay between network structure and the dynamical processes on the networks, and their impact on failures.
We will learn that failures reality failures follow reproducible laws, that can be quantified and even predicted using the tools of network science.
NETWORKS AT THE HEART OF
COMPLEX SYSTEMS
Complex
[adj., v. kuh m-pleks, kom-pleks; n. kom-pleks]
–adjective
- composed of many interconnected parts; compound; composite: a complex highway system.
- characterized by a very complicated or involved arrangement of parts, units, etc.: complex machinery.
- so complicated or intricate as to be hard to understand or deal with: a complex problem.
Source: Dictionary.com
Complexity
a scientific theory which asserts that some systems display behavioral phenomena that are completely inexplicable by any conventional analysis of the systems’ constituent parts. These phenomena, commonly referred to as emergent behaviour, seem to occur in many complex systems involving living organisms, such as a stock market or the human brain.
Source: John L. Casti, Encyclopædia Britannica
COMPLEX SYSTEMS
society
brain
market
cell
Stephen Hawking: I think the next century will be the century of complexity.
Behind each complex system there is a network, that defines the interactions between the component.
<img src = './img/facebook.png' width = 800>
Social graph
Organization
Brain
finantial network
business
Internet
Genes
Behind each system studied in complexity there is an intricate wiring diagram, or a network, that defines the interactions between the component.
We will never understand complex system unless we map out and understand the networks behind them.
TWO FORCES HELPED THE EMERGENCE OF NETWORK SCIENCE
THE HISTORY OF NETWORK ANALYSIS
Graph theory: 1735, Euler
Social Network Research: 1930s, Moreno
Communication networks/internet: 1960s
Ecological Networks: May, 1979.
While the study of networks has a long history from graph theory to sociology, the modern chapter of network science emerged only during the first decade of the 21st century, following the publication of two seminal papers in 1998 and 1999.
The explosive interest in network science is well documented by the citation pattern of two classic network papers, the 1959 paper by Paul Erdos and Alfréd Rényi that marks the beginning of the study of random networks in graph theory [4] and the 1973 paper by Mark Granovetter, the most cited social network paper [5].
Both papers were hardly or only moderately cited before 2000. The explosive growth of citations to these papers in the 21st century documents the emergence of network science, drawing a new, interdisciplinary audience to these classic publications.
<img src = './img/citation.png' width = 500>
THE EMERGENCE OF NETWORK SCIENCE
Movie Actor Network, 1998;
World Wide Web, 1999.
C elegans neural wiring diagram 1990
Citation Network, 1998
Metabolic Network, 2000;
PPI network, 2001
The universality of network characteristics:
The architecture of networks emerging in various domains of science, nature, and technology are more similar to each other than one would have expected.
THE CHARACTERISTICS OF NETWORK SCIENCE
Interdisciplinary
Empirical
Quantitative and Mathematical
Computational
THE IMPACT OF NETWORK SCIENCE
Google
Market Cap(2010 Jan 1):
$189 billion
Cisco Systems
networking gear Market cap (Jan 1, 2919):
$112 billion
Facebook
market cap:
$50 billion
Health: From drug design to metabolic engineering.
The human genome project, completed in 2001, offered the first comprehensive list of all human genes.
Yet, to fully understand how our cells function, and the origin of disease,
we need accurate maps that tell us how these genes and other cellular components interact with each other.
Security: Fighting Terrorism.
Terrorism is one of the maladies of the 21st century, absorbing significant resources to combat it worldwide.
Network thinking is increasingly present in the arsenal of various law enforcement agencies in charge of limiting terrorist activities.
To disrupt the financial network of terrorist organizations
to map terrorist networks
to uncover the role of their members and their capabilities.
Using social networks to capture Saddam Hussein
Capturing of the individuals behind the March 11, 2004 Madrid train bombings through the examination of the mobile call network.
Epidemics: From forecasting to halting deadly viruses.
While the H1N1 pandemic was not as devastating as it was feared at the beginning of the outbreak in 2009, it gained a special role in the history of epidemics: it was the first pandemic whose course and time evolution was accurately predicted months before the pandemic reached its peak.
Before 2000 epidemic modeling was dominated by compartment models, assuming that everyone can infect everyone else within the same socio-physical compartment.
The emergence of a network-based framework has fundamentally changed this, offering a new level of predictability in epidemic phenomena.
In January 2010 network science tools have predicted the conditions necessary for the emergence of viruses spreading through mobile phones.
The first major mobile epidemic outbreak
in the fall of 2010 in China, infecting over 300,000 phones each day, closely followed the predicted scenario.
Brain Research: Mapping neural network.
The human brain, consisting of hundreds of billions of interlinked neurons, is one of the least understood networks from the perspective of network science.
The reason is simple:
- we lack maps telling us which neurons link to each other.
- The only fully mapped neural map available for research is that of the C.Elegans worm, with only 300 neurons.
Driven by the potential impact of such maps, in 2010 the National Institutes of Health has initiated the Connectome project, aimed at developing the technologies that could provide an accurate neuron-level map of mammalian brains.
The Bridges of Konigsberg
<img src = './img/konigsberg.png' width = 500>
Can one walk across the seven bridges and never cross the same bridge twice and get back to the starting place?
Can one walk across the seven bridges and never cross the same bridge twice and get back to the starting place?
<img src ='./img/euler.png' width = 300>
Euler’s theorem (1735):
If a graph has more than two nodes of odd degree, there is no path.
If a graph is connected and has no odd degree nodes, it has at least one path.
COMPONENTS OF A COMPLEX SYSTEM
Networks and graphs
components: nodes, vertices N
interactions: links, edges L
system: network, graph (N,L)
network often refers to real systems
- www,
- social network
- metabolic network.
Language: (Network, node, link)
graph: mathematical representation of a network
- web graph,
- social graph (a Facebook term)
Language: (Graph, vertex, edge)
G(N, L)
<img src = './img/net.png' width = 800>
CHOOSING A PROPER REPRESENTATION
The choice of the proper network representation determines our ability to use network theory successfully.
In some cases there is a unique, unambiguous representation.
In other cases, the representation is by no means unique.
For example, the way we assign the links between a group of individuals will determine the nature of the question we can study.
If you connect individuals that work with each other, you will explore the professional network.
http://www.theyrule.net
If you connect those that have a romantic and sexual relationship, you will be exploring the sexual networks.
If you connect individuals based on their first name (all Peters connected to each other), you will be exploring what?
It is a network, nevertheless.
UNDIRECTED VS. DIRECTED NETWORKS
Undirected
Links: undirected
- co-authorship
- actor network
- protein interactions
End of explanation
import networkx as nx
Gd = nx.DiGraph()
for i, j in [(1, 2), (1, 4), (4, 2), (4, 3)]:
Gd.add_edge(i,j)
nx.draw(Gd, with_labels = True, pos=nx.circular_layout(Gd))
Explanation: Directed
Links: directed
- urls on the www
- phone calls
- metabolic reactions
End of explanation
nx.draw(Gu, with_labels = True)
Explanation: <img src = './img/networks.png' width = 1000>
Degree, Average Degree and Degree Distribution
End of explanation
nx.draw(Gd, with_labels = True, pos=nx.circular_layout(Gd))
Explanation: Undirected network:
Node degree: the number of links connected to the node.
$k_1 = k_2 = 2, k_3 = 3, k_4 = 1$
End of explanation
import numpy as np
x = [1, 1, 1, 2, 2, 3]
np.mean(x), np.sum(x), np.std(x)
Explanation: Directed network
In directed networks we can define an in-degree and out-degree. The (total) degree is the sum of in-and out-degree.
$k_3^{in} = 2, k_3^{out} = 1, k_3 = 3$
Source: a node with $k^{in}= 0$; Sink: a node with $k^{out}= 0$.
For a sample of N values: $x_1, x_2, ..., x_N$:
Average(mean):
$<x> = \frac{x_1 +x_2 + ...+x_N}{N} = \frac{1}{N}\sum_{i = 1}^{N} x_i$
For a sample of N values: $x_1, x_2, ..., x_N$:
The nth moment:
$<x^n> = \frac{x_1^n +x_2^n + ...+x_N^n}{N} = \frac{1}{N}\sum_{i = 1}^{N} x_i^n$
For a sample of N values: $x_1, x_2, ..., x_N$:
Standard deviation:
$\sigma_x = \sqrt{\frac{1}{N}\sum_{i = 1}^{N} (x_i - <x>)^2}$
End of explanation
# histogram
plt.hist(x)
plt.show()
from collections import defaultdict, Counter
freq = defaultdict(int)
for i in x:
freq[i] +=1
freq
freq_sum = np.sum(freq.values())
freq_sum
px = [float(i)/freq_sum for i in freq.values()]
px
plt.plot(freq.keys(), px, 'r-o')
plt.show()
Explanation: For a sample of N values: $x_1, x_2, ..., x_N$:
Distribution of x:
$p_x = \frac{The \: frequency \: of \: x}{The\: Number \:of\: Observations}$
where $p_x$ satisfies $\sum_i p_x = 1$
End of explanation
plt.figure(1)
plt.subplot(121)
pos = nx.circular_layout(Gu)  # define a layout; a circular layout is used here
nx.draw(Gu, pos, with_labels = True)
plt.subplot(122)
nx.draw(Gd, pos, with_labels = True)
Explanation: Average Degree
Undirected
$<k> = \frac{1}{N} \sum_{i = 1}^{N} k_i = \frac{2L}{N}$
Directed
$<k^{in}> = \frac{1}{N} \sum_{i=1}^N k_i^{in}= <k^{out}> = \frac{1}{N} \sum_{i=1}^N k_i^{out} = \frac{L}{N}$
Degree distribution
P(k): probability that a randomly selected node has degree k
$N_k = The \:number\: of \:nodes\:with \:degree\: k$
$P(k) = \frac{N_k}{N}$
Adjacency matrix
$A_{ij} =1$ if there is a link between node i and j
$A_{ij} =0$ if there is no link between node i and j
End of explanation
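As a minimal sketch, P(k) and the average degree can be computed for the small undirected graph Gu defined above (the variable names here are illustrative):
```python
from collections import Counter

degrees = [k for _, k in Gu.degree()]                    # degree of every node of Gu
N = Gu.number_of_nodes()
Pk = {k: nk / N for k, nk in Counter(degrees).items()}   # empirical degree distribution P(k)
avg_k = sum(degrees) / N                                 # average degree <k> = 2L/N
Pk, avg_k
```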
import numpy as np
edges = [('甲', '新辣道'), ('甲', '海底捞'), ('甲', '五方院'),
('乙', '海底捞'), ('乙', '麦当劳'), ('乙', '俏江南'),
('丙', '新辣道'), ('丙', '海底捞'),
('丁', '新辣道'), ('丁', '五方院'), ('丁', '俏江南')]
h_dic = {i:1 for i,j in edges}
for k in range(5):
print(k, 'steps')
a_dic = {j:0 for i, j in edges}
for i,j in edges:
a_dic[j]+=h_dic[i]
print(a_dic)
h_dic = {i:0 for i, j in edges}
for i, j in edges:
h_dic[i]+=a_dic[j]
print(h_dic)
def norm_dic(dic):
sumd = np.sum(list(dic.values()))
return {i : dic[i]/sumd for i in dic}
h = {i for i, j in edges}
h_dic = {i:1/len(h) for i in h}
for k in range(100):
a_dic = {j:0 for i, j in edges}
for i,j in edges:
a_dic[j]+=h_dic[i]
a_dic = norm_dic(a_dic)
h_dic = {i:0 for i, j in edges}
for i, j in edges:
h_dic[i]+=a_dic[j]
h_dic = norm_dic(h_dic)
print(a_dic)
B = nx.Graph()
users, items = {i for i, j in edges}, {j for i, j in edges}
for i, j in edges:
B.add_edge(i,j)
h, a = nx.hits(B)
print({i:a[i] for i in items} )
# {j:h[j] for j in users}
Explanation: Undirected
$A_{ij} =1$ if there is a link between node i and j
$A_{ij} =0$ if there is no link between node i and j
$A_{ij}=\begin{bmatrix} 0&1 &0 &1 \\ 1&0 &0 &1 \\ 0 &0 &0 &1 \\ 1&1 &1 & 0 \end{bmatrix}$
Undirected
The adjacency matrix of an undirected network is symmetric.
$A_{ij} = A_{ji} , \: A_{ii} = 0$
$k_i = \sum_{j=1}^N A_{ij}, \: k_j = \sum_{i=1}^N A_{ij} $
The number of links $L$ in the network can be expressed as:
$ L = \frac{1}{2}\sum_{i=1}^N k_i = \frac{1}{2}\sum_{ij}^N A_{ij} $
Directed
$A_{ij} =1$ if there is a link between node i and j
$A_{ij} =0$ if there is no link between node i and j
$A_{ij}=\begin{bmatrix} 0&0 &0 &0 \ 1&0 &0 &1 \ 0 &0 &0 &1 \ 1&0 &0 & 0 \end{bmatrix}$
Note that for a directed graph the matrix is not symmetric.
Directed
$A_{ij} \neq A_{ji}, \: A_{ii} = 0$
$k_i^{in} = \sum_{j=1}^N A_{ji}, \: k_j^{out} = \sum_{i=1}^N A_{ij} $
$ L = \sum_{i=1}^N k_i^{in} = \sum_{j=1}^N k_j^{out}= \frac{1}{2}\sum_{i,j}^N A_{ij} $
WEIGHTED AND UNWEIGHTED NETWORKS
$A_{ij} = W_{ij}$
BIPARTITE NETWORKS
bipartite graph (or bigraph) is a graph whose nodes can be divided into two disjoint sets U and V such that every link connects a node in U to one in V; that is, U and V are independent sets.
Hits algorithm
recommendation system
<img src = './img/hits2.png' width = 400>
End of explanation
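In recent networkx releases the adjacency matrices of the two small graphs above can be extracted directly, which makes the symmetry difference easy to check (this assumes a networkx version that provides to_numpy_array):
```python
A_u = nx.to_numpy_array(Gu)   # adjacency matrix of the undirected graph: symmetric
A_d = nx.to_numpy_array(Gd)   # adjacency matrix of the directed graph: generally not symmetric
A_u.sum(axis=1)               # row sums recover the node degrees k_i
```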
import networkx as nx
Gp = nx.DiGraph()
edges = [('a', 'b'), ('a', 'c'), ('b', 'd'), ('b', 'e'), ('c', 'f'), ('c', 'g'),
('d', 'h'), ('d', 'a'), ('e', 'a'), ('e', 'h'), ('f', 'a'), ('g', 'a'), ('h', 'a')]
for i, j in edges:
Gp.add_edge(i,j)
nx.draw(Gp, with_labels = True, font_size = 25, font_color = 'blue', alpha = 0.5,
pos = nx.kamada_kawai_layout(Gp))
#pos=nx.spring_layout(Gp, iterations = 5000))
steps = 11
n = 8
a, b, c, d, e, f, g, h = [[1.0/n for i in range(steps)] for j in range(n)]
for i in range(steps-1):
a[i+1] = 0.5*d[i] + 0.5*e[i] + h[i] + f[i] + g[i]
b[i+1] = 0.5*a[i]
c[i+1] = 0.5*a[i]
d[i+1] = 0.5*b[i]
e[i+1] = 0.5*b[i]
f[i+1] = 0.5*c[i]
g[i+1] = 0.5*c[i]
h[i+1] = 0.5*d[i] + 0.5*e[i]
print(i+1,':', a[i+1], b[i+1], c[i+1], d[i+1], e[i+1], f[i+1], g[i+1], h[i+1])
Explanation: <img src = './img/pagerank.png' width = 400>
End of explanation
G = nx.DiGraph(nx.path_graph(10))
pr = nx.pagerank(G, alpha=0.9)
pr
Explanation: <img src = './img/pagerank_trap.png' width = 400>
End of explanation
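The hand-rolled iteration above has no teleportation, which is exactly what makes rank traps a problem. A hedged sketch of how a damping factor could be added to it (the value 0.85 is an assumed choice; Gp is the small directed graph defined above):
```python
nodes = list(Gp.nodes())
N = len(nodes)
pr = {v: 1.0 / N for v in nodes}
alpha = 0.85                     # damping factor (assumed value)

for _ in range(100):
    new_pr = {}
    for v in nodes:
        # rank flowing into v from its predecessors, each split over its out-links
        incoming = sum(pr[u] / Gp.out_degree(u) for u in Gp.predecessors(v))
        new_pr[v] = (1 - alpha) / N + alpha * incoming
    pr = new_pr

pr
```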
G1 = nx.complete_graph(4)
pos = nx.spring_layout(G1)  # define a layout; the spring layout is used here
nx.draw(G1, pos = pos, with_labels = True)
print(nx.transitivity(G1))
G2 = nx.Graph()
for i, j in [(1, 2), (1, 3), (1, 0), (3, 0)]:
G2.add_edge(i,j)
nx.draw(G2,pos = pos, with_labels = True)
print(nx.transitivity(G2))
# there are 5 open triplets and 3 closed triplets
G3 = nx.Graph()
for i, j in [(1, 2), (1, 3), (1, 0)]:
G3.add_edge(i,j)
nx.draw(G3, pos =pos, with_labels = True)
print(nx.transitivity(G3))
# there are 3 open triplets and 0 closed triplets
Explanation: Ingredient-Flavor Bipartite Network
<img src = './img/bipartite.png' width = 800>
Path 路径
A path is a sequence of nodes in which each node is adjacent to the next one
- In a directed network, the path can follow only the direction of an arrow.
Distance 距离
The distance (shortest path, geodesic path) between two nodes is defined as the number of edges along the shortest path connecting them.
If the two nodes are disconnected, the distance is infinity.
Diameter 直径
Diameter $d_{max}$ is the maximum distance between any pair of nodes in the graph.
Shortest Path 最短路径
The path with the shortest length between two nodes (distance).
Average path length/distance, $<d>$ 平均路径长度
The average of the shortest paths for all pairs of nodes.
for a directed graph: where $d_{ij}$ is the distance from node i to node j
$<d> = \frac{1}{2 L }\sum_{i, j \neq i} d_{ij}$
In a directed network, the number of $d_{ij}$ terms is twice the number of links L.
In an undirected graph $d_{ij} = d_{ji}$, so we only need to count them once
In an undirected network, the number of $d_{ij}$ terms equals the number of links L.
$<d> = \frac{1}{L }\sum_{i, j > i} d_{ij}$
Cycle 环
A path with the same start and end node.
CONNECTEDNESS
Connected (undirected) graph
In a connected undirected graph, any two vertices can be joined by a path.
A disconnected graph is made up by two or more connected components.
Largest Component: Giant Component
The rest: Isolates
Bridge 桥
if we erase it, the graph becomes disconnected.
The adjacency matrix of a network with several components can be written in a block-diagonal form, so that nonzero elements are confined to squares, with all other elements being zero:
<img src = './img/block.png' width = 600>
Structural holes
Where are the holes?
Right under the bridges!
Think of them as structural "gaps".
Strongly connected directed graph 强连通有向图
has a path from each node to every other node and vice versa (e.g. AB path and BA path).
Weakly connected directed graph 弱连接有向图
it is connected if we disregard the edge directions.
Strongly connected components can be identified, but not every node is part of a nontrivial strongly connected component.
In-component -> SCC ->Out-component
In-component: nodes that can reach the scc (strongly connected component 强连通分量或强连通子图)
Out-component: nodes that can be reached from the scc.
The bow-tie 🎀 model of the World Wide Web
Clustering coefficient 聚集系数
Clustering coefficient 聚集系数
what fraction of your neighbors are connected? Watts & Strogatz, Nature 1998.
Are the friends of node $i$ also friends with each other?
Node i with degree $k_i$ (i.e., node i has $k_i$ friends)
$e_i$ represents the number of links between the $k_i$ neighbors of node i.
The maximum possible number of links among the $k_i$ friends of node i is $\frac{k_i(k_i -1)}{2}$
$C_i = \frac{2e_i}{k_i(k_i -1)}$
$C_i$ in [0,1]
The clustering coefficient of a node
<img src = './img/cc.png' width = 500>
Global Clustering Coefficient 全局聚集系数 (i.e., Transitivity 传递性)
triangles 三角形
triplets 三元组
A triplet consists of three connected nodes.
A triangle therefore includes three closed triplets
A triangle forms three connected triplets
A connected triplet is defined to be a connected subgraph consisting of three vertices and two edges.
$C = \frac{\mbox{number of closed triplets}}{\mbox{number of connected triplets of vertices}}$
$C = \frac{3 \times \mbox{number of triangles}}{\mbox{number of connected triplets of vertices}}$
End of explanation |
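A short sketch of how these path and clustering quantities can be computed with networkx for the small undirected graph Gu defined earlier in this notebook:
```python
print(nx.shortest_path(Gu, source=2, target=3))         # one shortest path between nodes 2 and 3
print(nx.shortest_path_length(Gu, source=2, target=3))  # its length, i.e. the distance d_23
print(nx.diameter(Gu))                                   # d_max over all node pairs
print(nx.average_shortest_path_length(Gu))               # <d>
print(nx.clustering(Gu))                                  # local clustering coefficient C_i per node
print(nx.average_clustering(Gu))                          # average of the C_i
print(nx.transitivity(Gu))                                # global clustering coefficient
```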
11,878 | Given the following text problem statement, write Python code to implement the functionality described below in problem statement
Problem:
I have been struggling with removing the time zone info from a column in a pandas dataframe. I have checked the following question, but it does not work for me: | Problem:
import pandas as pd
df = pd.DataFrame({'datetime': ['2015-12-01 00:00:00-06:00', '2015-12-02 00:01:00-06:00', '2015-12-03 00:00:00-06:00']})
df['datetime'] = pd.to_datetime(df['datetime'])
df['datetime'] = df['datetime'].dt.tz_localize(None)
df.sort_values(by='datetime', inplace=True)
df['datetime'] = df['datetime'].dt.strftime('%d-%b-%Y %T') |
11,879 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Vertex AI Model Monitoring with Explainable AI Feature Attributions
<table align="left">
<td>
<a href="https
Step1: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
If you are running this notebook locally, you will need to install the Cloud SDK.
You'll use the gcloud command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with the Google Cloud and initialize your gcloud configuration settings.
For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error free as possible.
Step2: Login to your Google Cloud account and enable AI services
Step3: Define some helper functions and data structures
Run the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understand the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
Step4: Generate model metadata for explainable AI
Run the following cell to extract metadata from the exported model, which is needed for generating the prediction explanations.
Step5: Import your model
The churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another.
Run the next cell to import this model into your project. If you've already imported your model, you can skip this step.
Step6: This request will return immediately but it spawns an asynchronous task that takes several minutes. Periodically check the Vertex Models page on the Cloud Console and don't continue with this lab until you see your newly created model there. It should like something like this
Step7: Run a prediction test
Now that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON, along with a pie chart summarizing the results.
Try this now by running the next cell and examine the results.
Step8: Taking a closer look at the results, we see the following elements
Step9: Start your monitoring job
Now that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in way that may impact your model's prediction quality.
In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML.
Configure the following fields
Step10: Create your monitoring job
The following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
Step11: After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like
Step12: You will notice the following components in these Cloud Storage paths
Step13: Interpret your results
While waiting for your results, which, as noted, may take up to an hour, you can read ahead to get sense of the alerting experience.
Here's what a sample email alert looks like...
<img src="https | Python Code:
import os
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# Google Cloud Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_GOOGLE_CLOUD_NOTEBOOK:
USER_FLAG = "--user"
import os
import pprint as pp
import sys
import IPython
assert sys.version_info.major == 3, "This notebook requires Python 3."
# Install Python package dependencies.
print("Installing TensorFlow and TensorFlow Data Validation (TFDV)")
! pip3 install {USER_FLAG} --quiet --upgrade tensorflow tensorflow_data_validation[visualization]
! rm -f /opt/conda/lib/python3.7/site-packages/tensorflow/core/kernels/libtfkernel_sobol_op.so
! pip3 install {USER_FLAG} --quiet --upgrade google-api-python-client google-auth-oauthlib google-auth-httplib2 oauth2client requests
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-aiplatform
! pip3 install {USER_FLAG} --quiet --upgrade explainable_ai_sdk
! pip3 install {USER_FLAG} --quiet --upgrade google-cloud-storage==1.32.0
# Automatically restart kernel after installing new packages.
if not os.getenv("IS_TESTING"):
print("Restarting kernel...")
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
print("Done.")
# Import required packages.
import os
import random
import sys
import time
import matplotlib.pyplot as plt
import numpy as np
Explanation: Vertex AI Model Monitoring with Explainable AI Feature Attributions
<table align="left">
<td>
<a href="https://console.cloud.google.com/ai-platform/notebooks/deploy-notebook?name=Model%20Monitoring&download_url=https%3A%2F%2Fraw.githubusercontent.com%2FGoogleCloudPlatform%2Fvertex-ai-samples%2Fmaster%2Fnotebooks%2Fcommunity%2Fmodel_monitoring%2Fmodel_monitoring_feature_attribs.ipynb">
<img src="https://www.gstatic.com/cloud/images/navigation/vertex-ai.svg" alt="Google Cloud Notebooks">Open in Cloud Notebook
</a>
</td>
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/model_monitoring/model_monitoring_feature_attribs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Open in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/model_monitoring/model_monitoring_feature_attribs.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
What is Vertex AI Model Monitoring?
Modern applications rely on a well established set of capabilities to monitor the health of their services. Examples include:
software versioning
rigorous deployment processes
event logging
alerting/notification of situations requiring intervention
on-demand and automated diagnostic tracing
automated performance and functional testing
You should be able to manage your ML services with the same degree of power and flexibility with which you can manage your applications. That's what MLOps is all about - managing ML services with the best practices Google and the broader computing industry have learned from generations of experience deploying well engineered, reliable, and scalable services.
Model monitoring is only one piece of the MLOps puzzle - it helps answer the following questions:
How well do recent service requests match the training data used to build your model? This is called training-serving skew.
How significantly are service requests evolving over time? This is called drift detection.
Vertex Explainable AI adds another facet to model monitoring, which we call feature attribution monitoring. Explainable AI enables you to understand the relative contribution of each feature to a resulting prediction. In essence, it assesses the magnitude of each feature's influence.
If production traffic differs from training data, or varies substantially over time, either in terms of model predictions or feature attributions, that's likely to impact the quality of the answers your model produces. When that happens, you'd like to be alerted automatically and responsively, so that you can anticipate problems before they affect your customer experiences or your revenue streams.
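To make the skew idea concrete, here is a minimal sketch (not the service's exact algorithm) of how the gap between a training-time and a serving-time categorical feature distribution could be summarized in a single number; the thresholds you configure later in this notebook play the role of a cut-off on a statistic of this kind.
import numpy as np
def categorical_skew(train_counts, serving_counts):
    # Normalize raw counts into probability distributions.
    p = np.asarray(train_counts, dtype=float) / np.sum(train_counts)
    q = np.asarray(serving_counts, dtype=float) / np.sum(serving_counts)
    # Largest per-category probability gap (an L-infinity style distance).
    return float(np.max(np.abs(p - q)))
# Toy example: country counts at training time vs. in production traffic.
print(categorical_skew([4395, 486, 450], [2000, 500, 1800]))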
Objective
In this notebook, you will learn how to...
deploy a pre-trained model
configure model monitoring
generate some artificial traffic
understand how to interpret the statistics, visualizations, and other data reported by the model monitoring feature
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
BigQuery
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
The example model
The model you'll use in this notebook is based on this blog post. The idea behind this model is that your company has extensive log data describing how your game users have interacted with the site. The raw data contains the following categories of information:
identity - unique player identity numbers
demographic features - information about the player, such as the geographic region in which a player is located
behavioral features - counts of the number of times a player has triggered certain game events, such as reaching a new level
churn propensity - this is the label or target feature; it provides an estimated probability that this player will churn, i.e. stop being an active player.
The blog article referenced above explains how to use BigQuery to store the raw data, pre-process it for use in machine learning, and train a model. Because this notebook focuses on model monitoring, rather than training models, you're going to reuse a pre-trained version of this model, which has been exported to Google Cloud Storage. In the next section, you will set up your environment and import this model into your own project.
Before you begin
Setup your dependencies
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
REGION = "us-central1"
SUFFIX = "aiplatform.googleapis.com"
API_ENDPOINT = f"{REGION}-{SUFFIX}"
PREDICT_API_ENDPOINT = f"{REGION}-prediction-{SUFFIX}"
if os.getenv("IS_TESTING"):
!gcloud --quiet components install beta
!gcloud --quiet components update
!gcloud config set project $PROJECT_ID
!gcloud config set ai/region $REGION
os.environ["GOOGLE_CLOUD_PROJECT"] = PROJECT_ID
Explanation: Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
If you are running this notebook locally, you will need to install the Cloud SDK.
You'll use the gcloud command throughout this notebook. In the following cell, enter your project name and run the cell to authenticate yourself with Google Cloud and initialize your gcloud configuration settings.
For this lab, we're going to use region us-central1 for all our resources (BigQuery training data, Cloud Storage bucket, model and endpoint locations, etc.). Those resources can be deployed in other regions, as long as they're consistently co-located, but we're going to use one fixed region to keep things as simple and error-free as possible.
End of explanation
# The Google Cloud Notebook product has specific requirements
IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version")
# If on Google Cloud Notebooks, then don't execute this code
if not IS_GOOGLE_CLOUD_NOTEBOOK:
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
!gcloud services enable aiplatform.googleapis.com
Explanation: Login to your Google Cloud account and enable AI services
End of explanation
# @title Utility functions
import copy
import os
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
from google.cloud.aiplatform_v1.services.endpoint_service import \
EndpointServiceClient
from google.cloud.aiplatform_v1.services.job_service import JobServiceClient
from google.cloud.aiplatform_v1.services.prediction_service import \
PredictionServiceClient
from google.cloud.aiplatform_v1.types.io import BigQuerySource
from google.cloud.aiplatform_v1.types.model_deployment_monitoring_job import (
ModelDeploymentMonitoringJob, ModelDeploymentMonitoringObjectiveConfig,
ModelDeploymentMonitoringScheduleConfig)
from google.cloud.aiplatform_v1.types.model_monitoring import (
ModelMonitoringAlertConfig, ModelMonitoringObjectiveConfig,
SamplingStrategy, ThresholdConfig)
from google.cloud.aiplatform_v1.types.prediction_service import (
ExplainRequest, PredictRequest)
from google.protobuf import json_format
from google.protobuf.duration_pb2 import Duration
from google.protobuf.struct_pb2 import Value
DEFAULT_THRESHOLD_VALUE = 0.001
def create_monitoring_job(objective_configs):
# Create sampling configuration.
random_sampling = SamplingStrategy.RandomSampleConfig(sample_rate=LOG_SAMPLE_RATE)
sampling_config = SamplingStrategy(random_sample_config=random_sampling)
# Create schedule configuration.
duration = Duration(seconds=MONITOR_INTERVAL)
schedule_config = ModelDeploymentMonitoringScheduleConfig(monitor_interval=duration)
# Create alerting configuration.
emails = [USER_EMAIL]
email_config = ModelMonitoringAlertConfig.EmailAlertConfig(user_emails=emails)
alerting_config = ModelMonitoringAlertConfig(email_alert_config=email_config)
# Create the monitoring job.
endpoint = f"projects/{PROJECT_ID}/locations/{REGION}/endpoints/{ENDPOINT_ID}"
predict_schema = ""
analysis_schema = ""
job = ModelDeploymentMonitoringJob(
display_name=JOB_NAME,
endpoint=endpoint,
model_deployment_monitoring_objective_configs=objective_configs,
logging_sampling_strategy=sampling_config,
model_deployment_monitoring_schedule_config=schedule_config,
model_monitoring_alert_config=alerting_config,
predict_instance_schema_uri=predict_schema,
analysis_instance_schema_uri=analysis_schema,
)
options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.create_model_deployment_monitoring_job(
parent=parent, model_deployment_monitoring_job=job
)
print("Created monitoring job:")
print(response)
return response
def get_thresholds(default_thresholds, custom_thresholds):
thresholds = {}
default_threshold = ThresholdConfig(value=DEFAULT_THRESHOLD_VALUE)
for feature in default_thresholds.split(","):
feature = feature.strip()
thresholds[feature] = default_threshold
for custom_threshold in custom_thresholds.split(","):
pair = custom_threshold.split(":")
if len(pair) != 2:
print(f"Invalid custom skew threshold: {custom_threshold}")
return
feature, value = pair
thresholds[feature] = ThresholdConfig(value=float(value))
return thresholds
def get_deployed_model_ids(endpoint_id):
client_options = dict(api_endpoint=API_ENDPOINT)
client = EndpointServiceClient(client_options=client_options)
parent = f"projects/{PROJECT_ID}/locations/{REGION}"
response = client.get_endpoint(name=f"{parent}/endpoints/{endpoint_id}")
model_ids = []
for model in response.deployed_models:
model_ids.append(model.id)
return model_ids
def set_objectives(model_ids, objective_template):
# Use the same objective config for all models.
objective_configs = []
for model_id in model_ids:
objective_config = copy.deepcopy(objective_template)
objective_config.deployed_model_id = model_id
objective_configs.append(objective_config)
return objective_configs
def send_predict_request(endpoint, input, type="predict"):
client_options = {"api_endpoint": PREDICT_API_ENDPOINT}
client = PredictionServiceClient(client_options=client_options)
if type == "predict":
obj = PredictRequest
method = client.predict
elif type == "explain":
obj = ExplainRequest
method = client.explain
else:
raise Exception("unsupported request type:" + type)
params = {}
params = json_format.ParseDict(params, Value())
request = obj(endpoint=endpoint, parameters=params)
inputs = [json_format.ParseDict(input, Value())]
request.instances.extend(inputs)
response = None
try:
response = method(request)
except Exception as ex:
print(ex)
return response
def list_monitoring_jobs():
client_options = dict(api_endpoint=API_ENDPOINT)
parent = f"projects/{PROJECT_ID}/locations/us-central1"
client = JobServiceClient(client_options=client_options)
response = client.list_model_deployment_monitoring_jobs(parent=parent)
print(response)
def pause_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.pause_model_deployment_monitoring_job(name=job)
print(response)
def delete_monitoring_job(job):
client_options = dict(api_endpoint=API_ENDPOINT)
client = JobServiceClient(client_options=client_options)
response = client.delete_model_deployment_monitoring_job(name=job)
print(response)
# Sampling distributions for categorical features...
DAYOFWEEK = {1: 1040, 2: 1223, 3: 1352, 4: 1217, 5: 1078, 6: 1011, 7: 1110}
LANGUAGE = {
"en-us": 4807,
"en-gb": 678,
"ja-jp": 419,
"en-au": 310,
"en-ca": 299,
"de-de": 147,
"en-in": 130,
"en": 127,
"fr-fr": 94,
"pt-br": 81,
"es-us": 65,
"zh-tw": 64,
"zh-hans-cn": 55,
"es-mx": 53,
"nl-nl": 37,
"fr-ca": 34,
"en-za": 29,
"vi-vn": 29,
"en-nz": 29,
"es-es": 25,
}
OS = {"IOS": 3980, "ANDROID": 3798, "null": 253}
MONTH = {6: 3125, 7: 1838, 8: 1276, 9: 1718, 10: 74}
COUNTRY = {
"United States": 4395,
"India": 486,
"Japan": 450,
"Canada": 354,
"Australia": 327,
"United Kingdom": 303,
"Germany": 144,
"Mexico": 102,
"France": 97,
"Brazil": 93,
"Taiwan": 72,
"China": 65,
"Saudi Arabia": 49,
"Pakistan": 48,
"Egypt": 46,
"Netherlands": 45,
"Vietnam": 42,
"Philippines": 39,
"South Africa": 38,
}
# Means and standard deviations for numerical features...
MEAN_SD = {
"julianday": (204.6, 34.7),
"cnt_user_engagement": (30.8, 53.2),
"cnt_level_start_quickplay": (7.8, 28.9),
"cnt_level_end_quickplay": (5.0, 16.4),
"cnt_level_complete_quickplay": (2.1, 9.9),
"cnt_level_reset_quickplay": (2.0, 19.6),
"cnt_post_score": (4.9, 13.8),
"cnt_spend_virtual_currency": (0.4, 1.8),
"cnt_ad_reward": (0.1, 0.6),
"cnt_challenge_a_friend": (0.0, 0.3),
"cnt_completed_5_levels": (0.1, 0.4),
"cnt_use_extra_steps": (0.4, 1.7),
}
DEFAULT_INPUT = {
"cnt_ad_reward": 0,
"cnt_challenge_a_friend": 0,
"cnt_completed_5_levels": 1,
"cnt_level_complete_quickplay": 3,
"cnt_level_end_quickplay": 5,
"cnt_level_reset_quickplay": 2,
"cnt_level_start_quickplay": 6,
"cnt_post_score": 34,
"cnt_spend_virtual_currency": 0,
"cnt_use_extra_steps": 0,
"cnt_user_engagement": 120,
"country": "Denmark",
"dayofweek": 3,
"julianday": 254,
"language": "da-dk",
"month": 9,
"operating_system": "IOS",
"user_pseudo_id": "104B0770BAE16E8B53DF330C95881893",
}
Explanation: Define some helper functions and data structures
Run the following cell to define some utility functions used throughout this notebook. Although these functions are not critical to understanding the main concepts, feel free to expand the cell if you're curious or want to dive deeper into how some of your API requests are made.
End of explanation
builder = SavedModelMetadataBuilder(
"gs://mco-mm/churn", outputs_to_explain=["churned_probs"]
)
builder.save_metadata(".")
md = builder.get_metadata()
del md["tags"]
del md["framework"]
Explanation: Generate model metadata for explainable AI
Run the following cell to extract metadata from the exported model, which is needed for generating the prediction explanations.
End of explanation
import json
MODEL_NAME = "churn"
IMAGE = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-5:latest"
ENDPOINT = "us-central1-aiplatform.googleapis.com"
churn_model_path = "gs://mco-mm/churn"
request_data = {
"model": {
"displayName": "churn",
"artifactUri": churn_model_path,
"containerSpec": {"imageUri": IMAGE},
"explanationSpec": {
"parameters": {"sampledShapleyAttribution": {"pathCount": 5}},
"metadata": md,
},
}
}
with open("request_data.json", "w") as outfile:
json.dump(request_data, outfile)
output = !curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://{ENDPOINT}/v1/projects/{PROJECT_ID}/locations/{REGION}/models:upload \
-d @request_data.json 2>/dev/null
# print(output)
MODEL_ID = output[1].split()[1].split("/")[5]
print(f"Model {MODEL_NAME}/{MODEL_ID} created.")
# If auto-testing this notebook, wait for model registration
if os.getenv("IS_TESTING"):
time.sleep(300)
Explanation: Import your model
The churn propensity model you'll be using in this notebook has been trained in BigQuery ML and exported to a Google Cloud Storage bucket. This illustrates how you can easily export a trained model and move a model from one cloud service to another.
Run the next cell to import this model into your project. If you've already imported your model, you can skip this step.
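If you'd like to peek at the exported model artifacts before importing them, an optional quick check (using the same Cloud Storage path referenced in the next cell) is:
# Optional: list the exported BigQuery ML model files in Cloud Storage.
!gsutil ls gs://mco-mm/churn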
End of explanation
ENDPOINT_NAME = "churn"
output = !gcloud --quiet beta ai endpoints create --display-name=$ENDPOINT_NAME --format="value(name)"
# print("endpoint output: ", output)
ENDPOINT = output[-1]
ENDPOINT_ID = ENDPOINT.split("/")[-1]
output = !gcloud --quiet beta ai endpoints deploy-model $ENDPOINT_ID --display-name=$ENDPOINT_NAME --model=$MODEL_ID --traffic-split="0=100"
print(f"Model deployed to Endpoint {ENDPOINT_NAME}/{ENDPOINT_ID}.")
Explanation: This request will return immediately but it spawns an asynchronous task that takes several minutes. Periodically check the Vertex Models page on the Cloud Console and don't continue with this lab until you see your newly created model there. It should look something like this:
<br>
<br>
<img src="https://storage.googleapis.com/mco-general/img/mm0.png" />
<br>
Deploy your endpoint
Now that you've imported your model into your project, you need to create an endpoint to serve your model. An endpoint can be thought of as a channel through which your model provides prediction services. Once established, you'll be able to make prediction requests on your model via the public internet. Your endpoint is also serverless, in the sense that Google ensures high availability by reducing single points of failure, and scalability by dynamically allocating resources to meet the demand for your service. In this way, you are able to focus on your model quality, and are freed from administrative and infrastructure concerns.
Run the next cell to deploy your model to an endpoint. This will take about ten minutes to complete.
End of explanation
# print(ENDPOINT)
# pp.pprint(DEFAULT_INPUT)
try:
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT)
for i in resp.predictions:
vals = i["churned_values"]
probs = i["churned_probs"]
for i in range(len(vals)):
print(vals[i], probs[i])
plt.pie(probs, labels=vals)
plt.show()
pp.pprint(resp)
except Exception as ex:
print("prediction request failed", ex)
Explanation: Run a prediction test
Now that you have imported a model and deployed that model to an endpoint, you are ready to verify that it's working. Run the next cell to send a test prediction request. If everything works as expected, you should receive a response encoded in a text representation called JSON, along with a pie chart summarizing the results.
Try this now by running the next cell and examine the results.
End of explanation
# print(ENDPOINT)
# pp.pprint(DEFAULT_INPUT)
try:
features = []
scores = []
resp = send_predict_request(ENDPOINT, DEFAULT_INPUT, type="explain")
for i in resp.explanations:
for j in i.attributions:
for k in j.feature_attributions:
features.append(k)
scores.append(j.feature_attributions[k])
features = [x for _, x in sorted(zip(scores, features))]
scores = sorted(scores)
fig, ax = plt.subplots()
fig.set_size_inches(9, 9)
ax.barh(features, scores)
fig.show()
# pp.pprint(resp)
except Exception as ex:
print("explanation request failed", ex)
Explanation: Taking a closer look at the results, we see the following elements:
churned_values - a set of possible values (0 and 1) for the target field
churned_probs - a corresponding set of probabilities for each possible target field value (5x10^-40 and 1.0, respectively)
predicted_churn - based on the probabilities, the predicted value of the target field (1)
This response encodes the model's prediction in a format that is readily digestible by software, which makes this service ideal for automated use by an application.
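As a minimal illustration (using a hypothetical payload shaped like the elements listed above, not the raw API response object), an application could pick out the most likely class programmatically:
# Hypothetical dict mirroring the fields described above.
prediction = {"churned_values": [0, 1], "churned_probs": [5e-40, 1.0], "predicted_churn": 1}
value, prob = max(zip(prediction["churned_values"], prediction["churned_probs"]), key=lambda vp: vp[1])
print(f"predicted churn={value} with probability {prob}")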
Run an explanation test
We can also run a test of explainable AI on this endpoint. Run the next cell to send a test explanation request. If everything works as expected, you should receive a response encoding the feature importance of this prediction in a text representation called JSON, along with a bar chart summarizing the results.
Try this now by running the next cell and examine the results.
End of explanation
USER_EMAIL = "" # @param {type:"string"}
JOB_NAME = "churn"
# Sampling rate (optional, default=.8)
LOG_SAMPLE_RATE = 0.8 # @param {type:"number"}
# Monitoring Interval in seconds (optional, default=3600).
MONITOR_INTERVAL = 3600 # @param {type:"number"}
# URI to training dataset.
DATASET_BQ_URI = "bq://mco-mm.bqmlga4.train" # @param {type:"string"}
# Prediction target column name in training dataset.
TARGET = "churned"
# Skew and drift thresholds.
SKEW_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
SKEW_CUSTOM_THRESHOLDS = "cnt_level_start_quickplay:.01" # @param {type:"string"}
DRIFT_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
DRIFT_CUSTOM_THRESHOLDS = "cnt_level_start_quickplay:.01" # @param {type:"string"}
ATTRIB_SKEW_DEFAULT_THRESHOLDS = "country,cnt_user_engagement" # @param {type:"string"}
ATTRIB_SKEW_CUSTOM_THRESHOLDS = (
"cnt_level_start_quickplay:.01" # @param {type:"string"}
)
ATTRIB_DRIFT_DEFAULT_THRESHOLDS = (
"country,cnt_user_engagement" # @param {type:"string"}
)
ATTRIB_DRIFT_CUSTOM_THRESHOLDS = (
"cnt_level_start_quickplay:.01" # @param {type:"string"}
)
Explanation: Start your monitoring job
Now that you've created an endpoint to serve prediction requests on your model, you're ready to start a monitoring job to keep an eye on model quality and to alert you if and when input begins to deviate in a way that may impact your model's prediction quality.
In this section, you will configure and create a model monitoring job based on the churn propensity model you imported from BigQuery ML.
Configure the following fields:
Log sample rate - Your prediction requests and responses are logged to BigQuery tables, which are automatically created when you create a monitoring job. This parameter specifies the desired logging frequency for those tables.
Monitor interval - time window over which to analyze your data and report anomalies. The minimum window is one hour (3600 seconds)
Target field - prediction target column name in training dataset
Skew detection threshold - skew threshold for each feature you want to monitor
Prediction drift threshold - drift threshold for each feature you want to monitor
Attribution Skew detection threshold - feature importance skew threshold
Attribution Prediction drift threshold - feature importance drift threshold
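For example, with the get_thresholds() helper defined earlier in this notebook, a default list plus one custom entry expands into per-feature threshold configs (default value 0.001 unless overridden):
# Illustrative call to the helper defined in the utility cell above.
example_thresholds = get_thresholds("country,cnt_user_engagement", "cnt_level_start_quickplay:.01")
# -> country and cnt_user_engagement get ThresholdConfig(value=0.001),
#    cnt_level_start_quickplay gets ThresholdConfig(value=0.01)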
End of explanation
skew_thresholds = get_thresholds(SKEW_DEFAULT_THRESHOLDS, SKEW_CUSTOM_THRESHOLDS)
drift_thresholds = get_thresholds(DRIFT_DEFAULT_THRESHOLDS, DRIFT_CUSTOM_THRESHOLDS)
attrib_skew_thresholds = get_thresholds(
ATTRIB_SKEW_DEFAULT_THRESHOLDS, ATTRIB_SKEW_CUSTOM_THRESHOLDS
)
attrib_drift_thresholds = get_thresholds(
ATTRIB_DRIFT_DEFAULT_THRESHOLDS, ATTRIB_DRIFT_CUSTOM_THRESHOLDS
)
skew_config = ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig(
skew_thresholds=skew_thresholds,
attribution_score_skew_thresholds=attrib_skew_thresholds,
)
drift_config = ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig(
drift_thresholds=drift_thresholds,
attribution_score_drift_thresholds=attrib_drift_thresholds,
)
explanation_config = ModelMonitoringObjectiveConfig.ExplanationConfig(
enable_feature_attributes=True
)
training_dataset = ModelMonitoringObjectiveConfig.TrainingDataset(target_field=TARGET)
training_dataset.bigquery_source = BigQuerySource(input_uri=DATASET_BQ_URI)
objective_config = ModelMonitoringObjectiveConfig(
training_dataset=training_dataset,
training_prediction_skew_detection_config=skew_config,
prediction_drift_detection_config=drift_config,
explanation_config=explanation_config,
)
model_ids = get_deployed_model_ids(ENDPOINT_ID)
objective_template = ModelDeploymentMonitoringObjectiveConfig(
objective_config=objective_config
)
objective_configs = set_objectives(model_ids, objective_template)
monitoring_job = create_monitoring_job(objective_configs)
# Run a prediction request to generate schema, if necessary.
try:
_ = send_predict_request(ENDPOINT, DEFAULT_INPUT)
print("prediction succeeded")
except Exception:
print("prediction failed")
Explanation: Create your monitoring job
The following code uses the Google Python client library to translate your configuration settings into a programmatic request to start a model monitoring job. Instantiating a monitoring job can take some time. If everything looks good with your request, you'll get a successful API response. Then, you'll need to check your email to receive a notification that the job is running.
End of explanation
!gsutil ls gs://cloud-ai-platform-fdfb4810-148b-4c86-903c-dbdff879f6e1/*/*
Explanation: After a minute or two, you should receive email at the address you configured above for USER_EMAIL. This email confirms successful deployment of your monitoring job. Here's a sample of what this email might look like:
<br>
<br>
<img src="https://storage.googleapis.com/mco-general/img/mm6.png" />
<br>
As your monitoring job collects data, measurements are stored in Google Cloud Storage and you are free to examine your data at any time. The circled path in the image above specifies the location of your measurements in Google Cloud Storage. Run the following cell to see an example of the layout of these measurements in Cloud Storage. If you substitute the Cloud Storage URL in your job creation email, you can view the structure and content of the data files for your own monitoring job.
End of explanation
def random_uid():
digits = [str(i) for i in range(10)] + ["A", "B", "C", "D", "E", "F"]
return "".join(random.choices(digits, k=32))
def monitoring_test(count, sleep, perturb_num={}, perturb_cat={}):
# Use random sampling and mean/sd with gaussian distribution to model
# training data. Then modify sampling distros for two categorical features
# and mean/sd for two numerical features.
mean_sd = MEAN_SD.copy()
country = COUNTRY.copy()
for k, (mean_fn, sd_fn) in perturb_num.items():
orig_mean, orig_sd = MEAN_SD[k]
mean_sd[k] = (mean_fn(orig_mean), sd_fn(orig_sd))
for k, v in perturb_cat.items():
country[k] = v
for i in range(0, count):
input = DEFAULT_INPUT.copy()
input["user_pseudo_id"] = str(random_uid())
input["country"] = random.choices([*country], list(country.values()))[0]
input["dayofweek"] = random.choices([*DAYOFWEEK], list(DAYOFWEEK.values()))[0]
input["language"] = str(random.choices([*LANGUAGE], list(LANGUAGE.values()))[0])
input["operating_system"] = str(random.choices([*OS], list(OS.values()))[0])
input["month"] = random.choices([*MONTH], list(MONTH.values()))[0]
for key, (mean, sd) in mean_sd.items():
sample_val = round(float(np.random.normal(mean, sd, 1)))
val = max(sample_val, 0)
input[key] = val
print(f"Sending prediction {i}")
try:
send_predict_request(ENDPOINT, input)
except Exception:
print("prediction request failed")
time.sleep(sleep)
print("Test Completed.")
start = 2
end = 3
for multiplier in range(start, end + 1):
test_time = 300
tests_per_sec = 1
sleep_time = 1 / tests_per_sec
iterations = test_time * tests_per_sec
perturb_num = {
"cnt_level_start_quickplay": (
lambda x: x * multiplier,
lambda x: x / multiplier,
)
}
perturb_cat = {"Japan": max(COUNTRY.values()) * multiplier}
monitoring_test(iterations, sleep_time, perturb_num, perturb_cat)
if multiplier < end:
print("sleeping...")
time.sleep(60)
Explanation: You will notice the following components in these Cloud Storage paths:
cloud-ai-platform-.. - This is a bucket created for you and assigned to capture your service's prediction data. Each monitoring job you create will trigger creation of a new folder in this bucket.
[model_monitoring|instance_schemas]/job-.. - This is your unique monitoring job number, which you can see above in both the response to your job creation request and the email notification.
instance_schemas/job-../analysis - This is the monitoring job's understanding and encoding of your training data's schema (field names, types, etc.).
instance_schemas/job-../predict - This is the first prediction made to your model after the current monitoring job was enabled.
model_monitoring/job-../serving - This folder is used to record data relevant to drift calculations. It contains measurement summaries for every hour your model serves traffic.
model_monitoring/job-../training - This folder is used to record data relevant to training-serving skew calculations. It contains an ongoing summary of prediction data relative to training data.
model_monitoring/job-../feature_attribution_score - This folder is used to record data relevant to feature attribution calculations. It contains an ongoing summary of feature attribution scores relative to training data.
You can create monitoring jobs with other user interfaces
In the previous cells, you created a monitoring job using the Python client library. You can also use the gcloud command line tool to create a model monitoring job and, in the near future, you will be able to use the Cloud Console for this function as well.
Generate test data to trigger alerting
Now you are ready to test the monitoring function. Run the following cell, which will generate fabricated test predictions designed to trigger the thresholds you specified above. This cell runs two five minute tests, one minute apart, so it should take roughly eleven minutes to complete the test.
The first test sends 300 fabricated requests (one per second for five minutes) while perturbing two features of interest (cnt_level_start_quickplay and country) by a factor of two. The second test does the same thing but perturbs the selected feature distributions by a factor of three. By perturbing data in two experiments, we're able to trigger both skew and drift alerts.
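As a compact illustration of what "perturbing by a factor of two" means here (mirroring the perturb_num lambdas in the accompanying test cell, with the cnt_level_start_quickplay statistics from MEAN_SD):
multiplier = 2
orig_mean, orig_sd = 7.8, 28.9                      # training-time mean and sd
new_mean, new_sd = orig_mean * multiplier, orig_sd / multiplier
print(new_mean, new_sd)                             # 15.6, 14.45 -> a visibly shifted sampling distribution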
After running this test, it takes at least an hour to assess and report skew and drift alerts, so feel free to proceed with the notebook now; you'll see how to examine the resulting alerts later.
End of explanation
# Delete endpoint resource
!gcloud ai endpoints delete $ENDPOINT_NAME --quiet
# Delete model resource
!gcloud ai models delete $MODEL_NAME --quiet
Explanation: Interpret your results
While waiting for your results, which, as noted, may take up to an hour, you can read ahead to get a sense of the alerting experience.
Here's what a sample email alert looks like...
<img src="https://storage.googleapis.com/mco-general/img/mm7.png" />
This email is warning you that the cnt_level_start_quickplay, cnt_user_engagement, and country feature values seen in production have skewed above your threshold between training and serving your model. It's also telling you that the cnt_user_engagement and country feature attribution values are skewed relative to your training data, again, as per your threshold specification.
Monitoring results in the Cloud Console
You can examine your model monitoring data from the Cloud Console. Below are some screenshots of those capabilities.
Monitoring Status
You can verify that a given endpoint has an active model monitoring job via the Endpoint summary page:
<img src="https://storage.googleapis.com/mco-general/img/mm1.png" />
Monitoring Alerts
You can examine the alert details by clicking into the endpoint of interest, and selecting the alerts panel:
<img src="https://storage.googleapis.com/mco-general/img/mm2.png" />
Feature Value Distributions
You can also examine the recorded training and production feature distributions by drilling down into a given feature, like this:
<img src="https://storage.googleapis.com/mco-general/img/mm9.png" />
which yields graphical representations of the feature distribution during both training and production, like this:
<img src="https://storage.googleapis.com/mco-general/img/mm8.png" />
Clean up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
End of explanation |
11,880 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
Step1: Numerical stability, dispersion and anisotropy of the 2D acoustic finite difference modelling code
Similar to the 1D acoustic FD modelling code, we have to investigate the stability and dispersion of the 2D numerical scheme. Additionally, the numerical dispersion shows in the 2D case an anisotropic behaviour.
Let's begin with the CFL-stability criterion ...
CFL-stability criterion for the 2D acoustic FD modelling code
As for the 1D code, the maximum size of the timestep $dt$ is limited by the Courant-Friedrichs-Lewy (CFL) criterion
Step2: To seperate modelling and visualization of the results, we introduce the following plotting function
Step3: Numerical Grid Dispersion
While the FD solution above is stable, it is subject to some numerical dispersion when compared with the analytical solution. The grid point distance $dx = 10\; m$, P-wave velocity $vp = 3000\; m/s$ and a maximum frequency $f_{max} \approx 2*f_0 = 40\; Hz$ leads to ...
Step4: $N_\lambda = 7.5$ gridpoints per minimum wavelength. Let's increase it to $N_\lambda = 12$, which yields ...
Step5: ... an improved fit of the 2D analytical by the FD solution.
Numerical Anisotropy
Compared to the 1D acoustic case, the numerical dispersion behaves a little bit differently in the 2D FD approximation. To illustrate this problem, we model the pressure wavefield for $t_{max} = 0.8\; s$for a fixed grid point distance of $dx = 10\;m$ and a centre frequency of the source wavelet $f0 = 100\; Hz$ which corresonds to $N_\lambda = 1.5$ grid points per minimum wavelength. | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
# Definition of modelling parameters
# ----------------------------------
xmax = 5000.0 # maximum spatial extension of the 1D model in x-direction (m)
zmax = xmax # maximum spatial extension of the 1D model in z-direction (m)
dx = 10.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
tmax = 0.8 # maximum recording time of the seismogram (s)
dt = 0.0010 # time step
vp0 = 3000. # P-wave speed in medium (m/s)
# acquisition geometry
xr = 2000.0 # x-receiver position (m)
zr = xr # z-receiver position (m)
xsrc = 2500.0 # x-source position (m)
zsrc = xsrc # z-source position (m)
f0 = 20. # dominant frequency of the source (Hz)
t0 = 4. / f0 # source time shift (s)
# FD_2D_acoustic code with JIT optimization
# -----------------------------------------
@jit(nopython=True) # use Just-In-Time (JIT) Compilation for C-performance
def FD_2D_acoustic_JIT(dt,dx,dz,f0):
# define model discretization
# ---------------------------
nx = (int)(xmax/dx) # number of grid points in x-direction
print('nx = ',nx)
nz = (int)(zmax/dz) # number of grid points in x-direction
print('nz = ',nz)
nt = (int)(tmax/dt) # maximum number of time steps
print('nt = ',nt)
ir = (int)(xr/dx) # receiver location in grid in x-direction
jr = (int)(zr/dz) # receiver location in grid in z-direction
isrc = (int)(xsrc/dx) # source location in grid in x-direction
jsrc = (int)(zsrc/dz) # source location in grid in x-direction
# Source time function (Gaussian)
# -------------------------------
src = np.zeros(nt + 1)
time = np.linspace(0 * dt, nt * dt, nt)
# 1st derivative of a Gaussian
src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2))
# Analytical solution
# -------------------
G = time * 0.
# Initialize coordinates
# ----------------------
x = np.arange(nx)
x = x * dx # coordinates in x-direction (m)
z = np.arange(nz)
z = z * dz # coordinates in z-direction (m)
# calculate source-receiver distance
r = np.sqrt((x[ir] - x[isrc])**2 + (z[jr] - z[jsrc])**2)
for it in range(nt): # Calculate Green's function (Heaviside function)
if (time[it] - r / vp0) >= 0:
G[it] = 1. / (2 * np.pi * vp0**2) * (1. / np.sqrt(time[it]**2 - (r/vp0)**2))
Gc = np.convolve(G, src * dt)
Gc = Gc[0:nt]
# Initialize model (assume homogeneous model)
# -------------------------------------------
vp = np.zeros((nx,nz))
vp2 = np.zeros((nx,nz))
vp = vp + vp0 # initialize wave velocity in model
vp2 = vp**2
# Initialize empty pressure arrays
# --------------------------------
p = np.zeros((nx,nz)) # p at time n (now)
pold = np.zeros((nx,nz)) # p at time n-1 (past)
pnew = np.zeros((nx,nz)) # p at time n+1 (present)
d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p
d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p
# Initialize empty seismogram
# ---------------------------
seis = np.zeros(nt)
# Calculate Partial Derivatives
# -----------------------------
for it in range(nt):
# FD approximation of spatial derivative by 3 point operator
for i in range(1, nx - 1):
for j in range(1, nz - 1):
d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2
d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2
# Time Extrapolation
# ------------------
pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz)
# Add Source Term at isrc
# -----------------------
# Absolute pressure w.r.t analytical solution
pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2
# Remap Time Levels
# -----------------
pold, p = p, pnew
# Output of Seismogram
# -----------------
seis[it] = p[ir,jr]
return time, seis, Gc, p # return last pressure wave field snapshot
Explanation: Numerical stability, dispersion and anisotropy of the 2D acoustic finite difference modelling code
Similar to the 1D acoustic FD modelling code, we have to investigate the stability and dispersion of the 2D numerical scheme. Additionally, the numerical dispersion shows in the 2D case an anisotropic behaviour.
Let's begin with the CFL-stability criterion ...
CFL-stability criterion for the 2D acoustic FD modelling code
As for the 1D code, the maximum size of the timestep $dt$ is limited by the Courant-Friedrichs-Lewy (CFL) criterion:
\begin{equation}
dt \le \frac{dx}{\zeta v_{max}}, \nonumber
\end{equation}
where $dx$ denotes the spatial grid point distance and $v_{max}$ the maximum P-wave velocity. The factor $\zeta$ depends on the FD operators used, the dimension of the problem and the numerical scheme.
As for the 1D case, we estimate the factor $\zeta$ by the von Neumann analysis, starting with the finite difference approximation of the 2D acoustic wave equation
\begin{equation}
\frac{p_{j,l}^{n+1} - 2 p_{j,l}^n + p_{j,l}^{n-1}}{\mathrm{d}t^2} \ = \ vp_{j,l}^2\biggl(\frac{p_{j,l+1}^{n} - 2 p_{j,l}^n + p_{j,l-1}^{n}}{\mathrm{d}x^2} + \frac{p_{j+1,l}^{n} - 2 p_{j,l}^n + p_{j-1,l}^{n}}{\mathrm{d}z^2}\biggr),
\end{equation}
and assuming harmonic plane wave solutions for the pressure wavefield of the form:
\begin{equation}
p = exp(i(k_x x + k_z z -\omega t)),\nonumber
\end{equation}
with $i^2=-1$, the wavenumbers $(k_x, k_z)$ in x-/z-direction, respectively, and the circular frequency $\omega$. Using a regular grid with
$dx = dz = dh,$
discrete spatial coordinates
$x_j = j dh,$
$z_l = l dh,$
and times
$t_n = n dt,$
we can calculate discrete plane wave solutions at the discrete locations and times in eq. (1):
\begin{align}
p_{j,l}^{n+1} &= exp(-i\omega dt)\; p_{j,l}^{n},\
p_{j,l}^{n-1} &= exp(i\omega dt)\; p_{j,l}^{n},\
p_{j+1,l}^{n} &= exp(ik_x dh)\; p_{j,l}^{n},\
p_{j-1,l}^{n} &= exp(-ik_x dh)\; p_{j,l}^{n},\
p_{j,l+1}^{n} &= exp(ik_z dh)\; p_{j,l}^{n},\
p_{j,l-1}^{n} &= exp(-ik_z dh)\; p_{j,l}^{n},\
\end{align}
Inserting eqs. (2) - (7) into eq. (1), division by $p_{j,l}^{n}$ and using the definition:
\begin{equation}
\cos(x) = \frac{exp(ix) + exp(-ix)}{2},\nonumber
\end{equation}
yields:
\begin{equation}
cos(\omega dt) - 1 = vp_{j,l}^2 \frac{dt^2}{dh^2}\biggl({cos(k_x dh) - 1} + {cos(k_z dh) - 1}\biggr).\nonumber
\end{equation}
Some further rearrangements and division of both sides by 2, leads to:
\begin{equation}
\frac{1 - cos(\omega dt)}{2} = vp_{j,l}^2 \frac{dt^2}{dh^2}\biggl(\frac{1 - cos(k_x dh)}{2} + \frac{1 - cos(k_z dh)}{2}\biggr).\nonumber
\end{equation}
With the relation
\begin{equation}
sin^2\biggl(\frac{x}{2}\biggr) = \frac{1-cos(x)}{2}, \nonumber
\end{equation}
we get
\begin{equation}
sin^2\biggl(\frac{\omega dt}{2}\biggr) = vp_{j,l}^2 \frac{dt^2}{dh^2}\biggl(sin^2\biggl(\frac{k_x dh}{2}\biggr)+sin^2\biggl(\frac{k_z dh}{2}\biggr)\biggr). \nonumber
\end{equation}
Taking the square root of both sides finally yields
\begin{equation}
sin\biggl(\frac{\omega dt}{2}\biggr) = vp_{j,l} \frac{dt}{dh}\sqrt{sin^2\biggl(\frac{k_x dh}{2}\biggr)+sin^2\biggl(\frac{k_z dh}{2}\biggr)}.
\end{equation}
This result implies that if the Courant number
\begin{equation}
\epsilon = vp_{j,l} \frac{dt}{dh} \nonumber
\end{equation}
is larger than $1/\sqrt{2}$, the right-hand side of the last equation can exceed 1, so $\omega$ can no longer be purely real and the discrete solution grows exponentially (think for a second why). Consequently, the numerical scheme becomes unstable when the following CFL-criterion is violated
\begin{equation}
\epsilon = vp_{j,l} \frac{dt}{dh} \le \frac{1}{\sqrt{2}} \nonumber
\end{equation}
Rearrangement to the time step dt, assuming that we have defined a spatial grid point distance dh and replacing $vp_{j,l}$ by the maximum P-wave velocity in the FD model $v_{max}$, leads to
\begin{equation}
dt \le \frac{dh}{\sqrt{2}v_{max}}. \nonumber
\end{equation}
Therefore, the factor $\zeta$ in the general CFL-criterion
\begin{equation}
dt \le \frac{dh}{\zeta vp_j}, \nonumber
\end{equation}
for the FD solution of the 2D acoustic wave equation using the temporal/spatial 3-point operator to approximate the 2nd derivative is $\zeta = \sqrt{2}$.
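Before running the full simulation, a quick numerical sketch of this limit (using numpy and the velocity/grid spacing used later in this notebook) shows that the resulting Courant number sits exactly at $1/\sqrt{2}$:
import numpy as np
dh, vp = 10.0, 3000.0                   # grid spacing (m) and P-wave velocity (m/s)
dt = dh / (np.sqrt(2.0) * vp)           # largest stable time step from the 2D CFL criterion
eps = vp * dt / dh                      # Courant number
print(dt, eps, 1.0 / np.sqrt(2.0))      # eps equals 1/sqrt(2) at the stability limit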
Let's check if this result is correct:
End of explanation
# Compare FD Seismogram with analytical solution
# ----------------------------------------------
def plot_seis(time,seis_FD,seis_analy):
# Define figure size
rcParams['figure.figsize'] = 12, 5
plt.plot(time, seis_FD, 'b-',lw=3,label="FD solution") # plot FD seismogram
plt.plot(time, seis_analy,'r--',lw=3,label="Analytical solution") # plot analytical solution
plt.xlim(time[0], time[-1])
plt.title('Seismogram')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
dx = 10.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
f0 = 20 # centre frequency of the source wavelet (Hz)
# define
zeta = np.sqrt(2)
# calculate dt according to the CFL-criterion
dt = dx / (zeta * vp0)
%time time, seis_FD, seis_analy, p = FD_2D_acoustic_JIT(dt,dx,dz,f0)
plot_seis(time,seis_FD,seis_analy)
Explanation: To separate modelling and visualization of the results, we introduce the following plotting function:
End of explanation
fmax = 2 * f0
N_lam = vp0 / (dx * fmax)
print("N_lam = ",N_lam)
Explanation: Numerical Grid Dispersion
While the FD solution above is stable, it is subject to some numerical dispersion when compared with the analytical solution. The grid point distance $dx = 10\; m$, P-wave velocity $vp = 3000\; m/s$ and a maximum frequency $f_{max} \approx 2*f_0 = 40\; Hz$ lead to ...
End of explanation
N_lam = 12
dx = vp0 / (N_lam * fmax)
dz = dx # grid point distance in z-direction (m)
f0 = 20 # centre frequency of the source wavelet (Hz)
# define
zeta = np.sqrt(2)
# calculate dt according to the CFL-criterion
dt = dx / (zeta * vp0)
%time time, seis_FD, seis_analy, p = FD_2D_acoustic_JIT(dt,dx,dz,f0)
plot_seis(time,seis_FD,seis_analy)
Explanation: $N_\lambda = 7.5$ gridpoints per minimum wavelength. Let's increase it to $N_\lambda = 12$, which yields ...
End of explanation
# define dx/dz and calculate dt according to the CFL-criterion
dx = 10.0 # grid point distance in x-direction (m)
dz = dx # grid point distance in z-direction (m)
# define zeta for the CFL criterion
zeta = np.sqrt(2)
dt = dx / (zeta * vp0)
f0 = 100
time, seis_FD, seis_analy, p = FD_2D_acoustic_JIT(dt,dx,dz,f0)
# Plot last pressure wavefield snapshot at Tmax = 0.8 s
# -----------------------------------------------------
rcParams['figure.figsize'] = 8, 8 # define figure size
clip = 1e-7 # image clipping
extent = [0.0, xmax/1000, 0.0, zmax/1000]
# Plot wavefield snapshot at tmax = 0.8 s
plt.imshow(p.T,interpolation='none',cmap='seismic',vmin=-clip,vmax=clip,extent=extent)
plt.title('Numerical anisotropy')
plt.xlabel('x (km)')
plt.ylabel('z (km)')
plt.show()
Explanation: ... an improved fit of the 2D analytical by the FD solution.
Numerical Anisotropy
Compared to the 1D acoustic case, the numerical dispersion behaves a little bit differently in the 2D FD approximation. To illustrate this problem, we model the pressure wavefield for $t_{max} = 0.8\; s$ for a fixed grid point distance of $dx = 10\;m$ and a centre frequency of the source wavelet of $f0 = 100\; Hz$, which corresponds to $N_\lambda = 1.5$ grid points per minimum wavelength.
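A rough way to quantify this direction dependence (a sketch based on the dispersion relation derived above, not part of the original code) is to solve it for $\omega$ along the grid axis and along the diagonal, and compare the resulting numerical phase velocities at $N_\lambda = 1.5$:
import numpy as np
vp, dh = 3000.0, 10.0
dt = dh / (np.sqrt(2.0) * vp)                         # time step at the 2D stability limit
eps = vp * dt / dh                                    # Courant number
k = 2.0 * np.pi / (1.5 * dh)                          # wavenumber for 1.5 grid points per wavelength
def numerical_phase_velocity(kx, kz):
    # invert sin(omega*dt/2) = eps * sqrt(sin^2(kx*dh/2) + sin^2(kz*dh/2)) for omega
    s = eps * np.sqrt(np.sin(kx * dh / 2.0)**2 + np.sin(kz * dh / 2.0)**2)
    return (2.0 / dt) * np.arcsin(s) / np.sqrt(kx**2 + kz**2)
print(numerical_phase_velocity(k, 0.0))                          # propagation along the x-axis
print(numerical_phase_velocity(k / np.sqrt(2), k / np.sqrt(2)))  # propagation along the grid diagonal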
End of explanation |
11,881 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How to debug a model
There are various levels on which to debug a model. One of the simplest is to just print out the values that different variables are taking on.
Because PyMC3 uses Theano expressions to build the model, and not functions, there is no way to place a print statement into a likelihood function. Instead, you can use the Theano Print operatator. For more information, see
Step1: Hm, looks like something has gone wrong, but what? Let's look at the values getting proposed using the Print operator
Step2: Looks like sd is always 0 which will cause the logp to go to -inf. Of course, we should not have used a prior that has negative mass for sd but instead something like a HalfNormal.
We can also redirect the output to a string buffer and access the proposed values later on (thanks to Lindley Lentati for providing this example) | Python Code:
%matplotlib inline
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import theano.tensor as T
x = np.random.randn(100)
with pm.Model() as model:
mu = pm.Normal('mu', mu=0, sd=1)
sd = pm.Normal('sd', mu=0, sd=1)
obs = pm.Normal('obs', mu=mu, sd=sd, observed=x)
step = pm.Metropolis()
trace = pm.sample(5000, step)
pm.traceplot(trace);
Explanation: How to debug a model
There are various levels on which to debug a model. One of the simplest is to just print out the values that different variables are taking on.
Because PyMC3 uses Theano expressions to build the model, and not functions, there is no way to place a print statement into a likelihood function. Instead, you can use the Theano Print operatator. For more information, see: theano Print operator for this before: http://deeplearning.net/software/theano/tutorial/debug_faq.html#how-do-i-print-an-intermediate-value-in-a-function.
Let's build a simple model with just two parameters:
End of explanation
with pm.Model() as model:
mu = pm.Normal('mu', mu=0, sd=1)
sd = pm.Normal('sd', mu=0, sd=1)
mu_print = T.printing.Print('mu')(mu)
sd_print = T.printing.Print('sd')(sd)
obs = pm.Normal('obs', mu=mu_print, sd=sd_print, observed=x)
step = pm.Metropolis()
trace = pm.sample(3, step) # Make sure not to draw too many samples
Explanation: Hm, looks like something has gone wrong, but what? Let's look at the values getting proposed using the Print operator:
End of explanation
from io import StringIO
import sys
x = np.random.randn(100)
old_stdout = sys.stdout
sys.stdout = mystdout = StringIO()
with pm.Model() as model:
mu = pm.Normal('mu', mu=0, sd=1)
sd = pm.Normal('sd', mu=0, sd=1)
mu_print = T.printing.Print('mu')(mu)
sd_print = T.printing.Print('sd')(sd)
obs = pm.Normal('obs', mu=mu_print, sd=sd_print, observed=x)
step = pm.Metropolis()
trace = pm.sample(3, step) # Make sure not to draw too many samples
sys.stdout = old_stdout
output = mystdout.getvalue().split('\n')
mulines = [s for s in output if 'mu' in s]
muvals = [line.split()[-1] for line in mulines]
plt.plot(np.arange(0,len(muvals)), muvals);
plt.xlabel('proposal iteration')
plt.ylabel('mu value')
Explanation: Looks like sd is always 0 which will cause the logp to go to -inf. Of course, we should not have used a prior that has negative mass for sd but instead something like a HalfNormal.
We can also redirect the output to a string buffer and access the proposed values later on (thanks to Lindley Lentati for providing this example):
End of explanation |
11,882 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Understanding the use of the Merkle-Patricia tree in Ethereum
Ethereum innovated when designing their blockchain, in many ways when compared to Bitcoin. One of them was my enhancing the kind of Merkle tree used within the blocks and how they are used.
In Ethereum, in contrast to the Bitcoim blockchain, they use three Merkle trees inside each block
Step1: Let's start by creating an empty tree (or trie)
Step2: The Hexary trie stores both keys and values, like a dictionary, and can be accessed like a dictionary.
Step3: But it also has a simple API with methods you can call in the tree | Python Code:
from trie import HexaryTrie
Explanation: Understanding the use of the Merkle-Patricia tree in Ethereum
Ethereum innovated when designing their blockchain, in many ways when compared to Bitcoin. One of them was my enhancing the kind of Merkle tree used within the blocks and how they are used.
In Ethereum, in contrast to the Bitcoim blockchain, they use three Merkle trees inside each block:
* one for transactions
* another for receipts (data showing the effects of transactions)
* and the third for the State
<img src="ethblockchain_full.png" width=50% />
In this noteboook we will explore using python code how all this merkling works. For that you will need to install the trie package maintained byt the folks at Ethereum:
pip install --user trie
End of explanation
t = HexaryTrie(db={})
t.root_hash
Explanation: Let's start by creating an empty tree (or trie):
End of explanation
t[b'my-key'] = b'some-value'
t[b'my-key']
b'another-key' in t
t[b'another-key'] = b'another-value'
b'another-key' in t
t.root_hash
Explanation: The Hexary trie stores both keys and values, like a dictionary, and can be accessed like a dictionary.
End of explanation
[m for m in dir(t) if not m.startswith('_')]
t.set(b'my-key2',b'second value')
t.exists(b'my-key2')
t.get(b'my-key2')
t.
t.db
Explanation: But it also has a simple API with methods you can call in the tree
End of explanation |
11,883 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quiver plots based on topographic aspect
Disclaimer
Step2: Note that we are setting the origin of the arrow (x_pos and y_pos) and then the direction (x_direct and y_direct) as coordinate pairs. We'll look at how you get these from a bearing value shortly. We'll be talking about these then as vector magnitue and direction.
There's something important to be aware of here. If you're thinking like a geographer and not a mathemativian, you'd probably expect an aspect of 0°N to point to 12 o-clock. However, if you were to pass that 0° to quiver in matplotlib, it will point to 3 o'clock. This is because we're dealing with degrees and not compass points.
This is called standard position.
Bearings relative to compass north and standard posiiton are 2 conventions for considering bearings or angles.
* relative to compass north
Step3: So if we give it 90°N (so East on a compass), we should get 0 in standard position.
Step5: Earlier, the start and end locations of the arrow or quiver were mentioned. We now need to condiser vector magnitude and direction. | Python Code:
import numpy as np
import matplotlib.pyplot as plt
def single_quiver(x_pos,y_pos,x_direct,y_direct, title=''):
fig, ax = plt.subplots()
ax.quiver(x_pos,y_pos,x_direct,y_direct, scale=5)
ax.axis([-2,2,-2,2])
if title !='':
plt.title(title)
plt.show()
#Aspect example: If it was 90 deg, that would be
x_pos = [0]
y_pos = [0]
x_direct = [1]
y_direct = [0]
title="x_pos:%i x_direct:%i | y_pos:%i y_direct:%i" %(x_pos[0], x_direct[0], y_pos[0], y_direct[0])
single_quiver(x_pos,y_pos,x_direct,y_direct, title=title)
Explanation: Quiver plots based on topographic aspect
Disclaimer : this post is not intended to teach you the maths underlying vectors but it might help you get started and signpost the way to further learning.
The aspect of a topographic surface shows the downslope orietnation of a designated "portion" of land, for a raster, this "portion" will be each cell. The aspect value is normally provided in degrees relative to North. Plotting aspect rasters is often done by colour. For example, all cells of a raster with aspect values in say between 22.5°N – 67.5°N which can be classified as NE will be given a specific colour. This will then differ to the colour assigned to cells where values fall within the range of South West and so on.
<img src="images/quiver/ukso_aspect.png">
Plotting these surfaces in a GIS program such as ArcMap or QGIS is a case of playing with the raster symbology. You can also then add arrows to show the aspect direction where a north facing pixel will have a north facing arrow etc. Check out the various help pages for ArcMap and QGIS on how to do this.
Where you are writing your own program, this GIS solution is not always a viable method. Fortunately, Python's matplotlib can help you out here. It's called quiver plotting. Here's an example.
End of explanation
def compassBearing_to_standardPosition__degrees_counterClockwise(bearing_deg=''):
Vector magnitude and direction calculations assume angle is relative to the x axis (i.e. 0 degrees is at 3 o'clock)
Adjust compass bearings to be relative to standard poisiton
Help: https://math.stackexchange.com/questions/492167/calculate-the-angle-of-a-vector-in-compass-360-direction
if bearing_deg=='':
north_bearings=[0,90,180,270,360]
print("Converts angles from compass convention (clockwise from North) to standard position (anti-clockwise from East)")
print(" ")
print("Example:")
print("Compass bearing : Standard position angle")
for bearing in north_bearings:
print("%i : %i" %(bearing, ((450 - bearing) % 360)))
print("Provide a bearing to get the equivalent in standard poisiton....")
else:
std_pos=(450 - bearing_deg) % 360
return(std_pos)
compassBearing_to_standardPosition__degrees_counterClockwise(bearing_deg="")
Explanation: Note that we are setting the origin of the arrow (x_pos and y_pos) and then the direction (x_direct and y_direct) as coordinate pairs. We'll look at how you get these from a bearing value shortly. We'll be talking about these then as vector magnitude and direction.
There's something important to be aware of here. If you're thinking like a geographer and not a mathematician, you'd probably expect an aspect of 0°N to point to 12 o'clock. However, if you were to pass that 0° to quiver in matplotlib, it will point to 3 o'clock. This is because we're dealing with degrees and not compass points.
This is called standard position.
Bearings relative to compass north and standard position are two conventions for considering bearings or angles.
* relative to compass north: angles are clockwise from 12 o'clock
* standard position: angles are anti-clockwise from 3 o'clock
Python's quiver function expects angles to be in standard position, not relative to north, so if you are providing compass bearings, you'll need to convert the angles for the code to work correctly. The following function lets you do this.
End of explanation
compassBearing_to_standardPosition__degrees_counterClockwise(bearing_deg=90)
Explanation: So if we give it 90°N (so East on a compass), we should get 0 in standard position.
End of explanation
def calculate_U_and_V__vector_magnitude_and_direction(angle_degrees, magnitude=1, correct_to_standard_position=True):
    '''
    Calculates the components of a vector given in magnitude (U) and direction (V) form.
    angle_degrees: expected to be in standard position (i.e. relative to the x axis, where 3 o'clock is zero), not the compass bearing where 12 o'clock is 0.
    magnitude: defaults to 1.
    correct_to_standard_position: if True, converts angle_degrees to standard position using the formula (450 - bearing_deg) % 360; this should only be used if you
    provide angle_degrees relative to grid North, e.g. where 90 degrees is East.
    Help: https://www.khanacademy.org/math/precalculus/x9e81a4f98389efdf:vectors/x9e81a4f98389efdf:component-form/v/vector-components-from-magnitude-and-direction
    '''
if correct_to_standard_position:
angle_degrees = compassBearing_to_standardPosition__degrees_counterClockwise(angle_degrees)
angle_rad=np.deg2rad(angle_degrees)
x = magnitude * np.cos(angle_rad) # change in x == U
y = magnitude * np.sin(angle_rad) # change in y == V
return(x,y)
Explanation: Earlier, the start and end locations of the arrow or quiver were mentioned. We now need to consider vector magnitude and direction.
End of explanation |
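# A final illustrative sketch (not part of the original notebook): how the two helper
# functions above can feed matplotlib's quiver -- convert a few compass bearings to U/V
# components and draw one arrow per bearing from a common origin. The bearings and figure
# settings are assumptions for demonstration only.
import numpy as np
import matplotlib.pyplot as plt
bearings = [0, 45, 90, 180, 270]  # compass bearings, degrees clockwise from North
components = [calculate_U_and_V__vector_magnitude_and_direction(b, magnitude=1, correct_to_standard_position=True)
              for b in bearings]
U = [c[0] for c in components]
V = [c[1] for c in components]
fig, ax = plt.subplots(figsize=(5, 5))
# All arrows start at the origin; angles='xy' keeps directions true to the data coordinates.
ax.quiver(np.zeros(len(bearings)), np.zeros(len(bearings)), U, V,
          angles='xy', scale_units='xy', scale=1)
for b, u, v in zip(bearings, U, V):
    ax.annotate("%i deg N" % b, xy=(u, v))
ax.set_xlim(-1.5, 1.5)
ax.set_ylim(-1.5, 1.5)
ax.set_aspect('equal')
plt.show()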
11,884 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Reading the data
Step1: Build clustering model
Here we build a kmeans model and select the "optimal" number of clusters.
Here we see that the optimal number of clusters is 2.
Step2: Build the optimal model and apply it
Step3: Cluster Profiles
Here, the optimal model has two clusters: cluster 0 with 399 cases and cluster 1 with 537 cases.
This model is based on binary inputs, so the best description of the clusters is the distribution of zeros and ones for each input (question).
The figure below gives the cluster profiles of this model, cluster 0 on the left and cluster 1 on the right. The questions involved are different (highest bars) | Python Code:
def loadContributions(file, withsexe=False):
contributions = pd.read_json(path_or_buf=file, orient="columns")
rows = [];
rindex = [];
for i in range(0, contributions.shape[0]):
row = {};
row['id'] = contributions['id'][i]
rindex.append(contributions['id'][i])
if (withsexe):
if (contributions['sexe'][i] == 'Homme'):
row['sexe'] = 0
else:
row['sexe'] = 1
for question in contributions['questions'][i]:
if (question.get('Reponse')): # and (question['texte'][0:5] != 'Savez') :
row[question['titreQuestion']+' : '+question['texte']] = 1
for criteres in question.get('Reponse'):
# print(criteres['critere'].keys())
row[question['titreQuestion']+'. (Réponse) '+question['texte']+' -> '+str(criteres['critere'].get('texte'))] = 1
rows.append(row)
df = pd.DataFrame(data=rows)
df.fillna(0, inplace=True)
return df
df = loadContributions('../data/EGALITE1.brut.json', True)
df.fillna(0, inplace=True)
df.index = df['id']
df.head()
Explanation: Reading the data
End of explanation
from sklearn.cluster import KMeans
from sklearn import metrics
import numpy as np
X = df.drop('id', axis=1).values
def train_kmeans(nb_clusters, X):
kmeans = KMeans(n_clusters=nb_clusters, random_state=0).fit(X)
return kmeans
#print(kmeans.predict(X))
#kmeans.cluster_centers_
def select_nb_clusters():
perfs = {};
for nbclust in range(2,10):
kmeans_model = train_kmeans(nbclust, X);
labels = kmeans_model.labels_
# from http://scikit-learn.org/stable/modules/clustering.html#calinski-harabaz-index
# we are in an unsupervised model. cannot get better!
# perfs[nbclust] = metrics.calinski_harabaz_score(X, labels);
perfs[nbclust] = metrics.silhouette_score(X, labels);
print(perfs);
return perfs;
df['clusterindex'] = train_kmeans(4, X).predict(X)
#df
perfs = select_nb_clusters();
# result :
# {2: 341.07570462155348, 3: 227.39963334619881, 4: 186.90438345452918, 5: 151.03979976346525, 6: 129.11214073405731, 7: 112.37235520885432, 8: 102.35994869157568, 9: 93.848315820675438}
optimal_nb_clusters = max(perfs, key=perfs.get);
print("optimal_nb_clusters" , optimal_nb_clusters);
Explanation: Build clustering model
Here we build a kmeans model and select the "optimal" number of clusters.
Here we see that the optimal number of clusters is 2.
End of explanation
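# Optional visual check (added here, not part of the original analysis): plot the
# silhouette scores collected in the perfs dictionary against the number of clusters,
# which makes the peak at two clusters easy to see.
import matplotlib.pyplot as plt
ks = sorted(perfs.keys())
plt.plot(ks, [perfs[k] for k in ks], marker='o')
plt.xlabel('Number of clusters')
plt.ylabel('Silhouette score')
plt.title('Selecting the number of clusters')
plt.show()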
km_model = train_kmeans(optimal_nb_clusters, X);
df['clusterindex'] = km_model.predict(X)
lGroupBy = df.groupby(['clusterindex']).mean();
# km_model.__dict__
cluster_profile_counts = df.groupby(['clusterindex']).count();
cluster_profile_means = df.groupby(['clusterindex']).mean();
global_counts = df.count()
global_means = df.mean()
cluster_profile_counts.head()
#cluster_profile_means.head()
#df.info()
df_profiles = pd.DataFrame();
nbclusters = cluster_profile_means.shape[0]
df_profiles['clusterindex'] = range(nbclusters)
for col in cluster_profile_means.columns:
if(col != "clusterindex"):
df_profiles[col] = np.zeros(nbclusters)
for cluster in range(nbclusters):
df_profiles[col][cluster] = cluster_profile_means[col][cluster]
# row.append(df[col].mean());
df_profiles.head()
#print(df_profiles.columns)
intereseting_columns = {};
for col in df_profiles.columns:
if(col != "clusterindex"):
global_mean = df[col].mean()
diff_means_global = abs(df_profiles[col] - global_mean). max();
# print(col , diff_means_global)
if(diff_means_global > 0.1):
intereseting_columns[col] = True
#print(intereseting_columns)
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
Explanation: Build the optimal model and apply it
End of explanation
interesting = list(intereseting_columns.keys())
df_profiles_sorted = df_profiles[interesting].sort_index(axis=1)
df_profiles_sorted.plot.bar(figsize =(1, 1))
df_profiles_sorted.plot.bar(figsize =(16, 8), legend=False)
df_profiles_sorted.T
#df_profiles.sort_index(axis=1).T
Explanation: Cluster Profiles
Here, the optimal model has two clusters: cluster 0 with 399 cases and cluster 1 with 537 cases.
This model is based on binary inputs, so the best description of the clusters is the distribution of zeros and ones for each input (question).
The figure below gives the cluster profiles of this model, cluster 0 on the left and cluster 1 on the right. The questions involved are different (highest bars)
End of explanation |
11,885 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Advanced plotting
This tutorial will go over more advanced plotting functionality. Before reading this, you should take a look at the basic analysis and plotting tutorial. First, we'll load in some example data. This dataset is an egg comprised of 30 subjects, who each performed 8 study/test blocks of 16 words each.
Step1: Accuracy
Step2: By default, the analyze function will perform an analysis on each list separately, so when you plot the result, it will plot a separate bar for each list, averaged over all subjects
Step3: We can plot the accuracy for each subject by setting plot_type='subject', and we can change the name of the subject grouping variable by setting the subjname kwarg
Step4: Furthermore, we can add a title using the title kwarg, and change the y axis limits using ylim
Step5: In addition to bar plots, accuracy can be plotted as a violin or swarm plot by using the plot_style kwarg
Step6: We can also group the subjects. This is useful in cases where you might want to compare analysis results across multiple experiments. To do this we will reanalyze the data, averaging over lists within a subject, and then use the subjgroup kwarg to group the subjects into two sets
Step7: Oops, what happened there? By default, the plot function looks to the List column of the df to group the data. To group according to subject group, we must tell the plot function to plot by subjgroup. This can be achieved by setting plot_type='subject'
Step8: If you also have a list grouping (such as first 4 lists / second 4 lists), you can plot the interaction by setting plot_type='split'. This will create a plot with respect to both the subjgroup and listgroup
Step9: Like above, these plots can also be violin or swarm plots
Step10: Memory fingerprints
The Memory Fingerprint plotting works exactly the same as the accuracy plots, with the exception that plot_type='split' only works for the accuracy plots, and the default plot_style is a violin plot instead of a bar plot.
Step11: Other analyses
Like the plots above, spc, pfr and lagcrp plots can all be plotted according to listgroup or subjgroup by setting the plot_type kwarg.
Plot by list grouping
Step12: Plot by subject grouping | Python Code:
import quail
%matplotlib inline
egg = quail.load_example_data()
Explanation: Advanced plotting
This tutorial will go over more advanced plotting functionality. Before reading this, you should take a look at the basic analysis and plotting tutorial. First, we'll load in some example data. This dataset is an egg comprised of 30 subjects, who each performed 8 study/test blocks of 16 words each.
End of explanation
accuracy = egg.analyze('accuracy')
accuracy.get_data().head()
Explanation: Accuracy
End of explanation
ax = accuracy.plot()
Explanation: By default, the analyze function will perform an analysis on each list separately, so when you plot the result, it will plot a separate bar for each list, averaged over all subjects:
End of explanation
ax = accuracy.plot(plot_type='subject', subjname='Subject Number')
Explanation: We can plot the accuracy for each subject by setting plot_type='subject', and we can change the name of the subject grouping variable by setting the subjname kwarg:
End of explanation
ax = accuracy.plot(plot_type='subject', subjname='Subject Number',
title='Accuracy by Subject', ylim=[0,1])
Explanation: Furthermore, we can add a title using the title kwarg, and change the y axis limits using ylim:
End of explanation
ax = accuracy.plot(plot_type='subject', subjname='Subject Number',
title='Accuracy by Subject', ylim=[0,1], plot_style='violin')
ax = accuracy.plot(plot_type='subject', subjname='Subject Number',
title='Accuracy by Subject', ylim=[0,1], plot_style='swarm')
Explanation: In addition to bar plots, accuracy can be plotted as a violin or swarm plot by using the plot_style kwarg:
End of explanation
accuracy = egg.analyze('accuracy', listgroup=['average']*8)
accuracy.get_data().head()
ax = accuracy.plot(subjgroup=['Experiment 1']*15+['Experiment 2']*15)
Explanation: We can also group the subjects. This is useful in cases where you might want to compare analysis results across multiple experiments. To do this we will reanalyze the data, averaging over lists within a subject, and then use the subjgroup kwarg to group the subjects into two sets:
End of explanation
ax = accuracy.plot(subjgroup=['Experiment 1']*15+['Experiment 2']*15, plot_type='subject')
Explanation: Oops, what happened there? By default, the plot function looks to the List column of the df to group the data. To group according to subject group, we must tell the plot function to plot by subjgroup. This can be achieved by setting plot_type='subject':
End of explanation
accuracy = egg.analyze('accuracy', listgroup=['First 4 Lists']*4+['Second 4 Lists']*4)
ax = accuracy.plot(subjgroup=['Experiment 1']*15+['Experiment 2']*15, plot_type='split')
Explanation: If you also have a list grouping (such as first 4 lists / second 4 lists), you can plot the interaction by setting plot_type='split'. This will create a plot with respect to both the subjgroup and listgroup:
End of explanation
ax = accuracy.plot(subjgroup=['Experiment 1']*15+['Experiment 2']*15, plot_type='split', plot_style='violin')
ax = accuracy.plot(subjgroup=['Experiment 1']*15+['Experiment 2']*15, plot_type='split', plot_style='swarm')
Explanation: Like above, these plots can also be violin or swarm plots:
End of explanation
fingerprint = egg.analyze('fingerprint', listgroup=['First 4 Lists']*4+['Second 4 Lists']*4)
ax = fingerprint.plot(subjgroup=['Experiment 1']*15+['Experiment 2']*15, plot_type='subject')
ax = fingerprint.plot(subjgroup=['Experiment 1']*15+['Experiment 2']*15, plot_type='list')
Explanation: Memory fingerprints
The Memory Fingerprint plotting works exactly the same as the accuracy plots, with the exception that plot_type='split' only works for the accuracy plots, and the default plot_style is a violin plot instead of a bar plot.
End of explanation
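# Aside: the fingerprint result can presumably be inspected in the same way as the
# accuracy result earlier; this assumes it exposes the same get_data() method shown above.
fingerprint.get_data().head()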
listgroup = ['First 4 Lists']*4+['Second 4 Lists']*4
plot_type = 'list'
spc = egg.analyze('spc', listgroup=listgroup)
ax = spc.plot(plot_type=plot_type, ylim=[0, 1])
pfr = egg.analyze('pfr', listgroup=listgroup)
ax = pfr.plot(plot_type=plot_type)
lagcrp = egg.analyze('lagcrp', listgroup=listgroup)
ax = lagcrp.plot(plot_type=plot_type)
Explanation: Other analyses
Like the plots above, spc, pfr and lagcrp plots can all be plotted according to listgroup or subjgroup by setting the plot_type kwarg.
Plot by list grouping
End of explanation
listgroup=['average']*8
subjgroup = ['Experiment 1']*15+['Experiment 2']*15
plot_type = 'subject'
spc = egg.analyze('spc', listgroup=listgroup)
ax = spc.plot(subjgroup=subjgroup, plot_type=plot_type, ylim=[0,1])
pfr = egg.analyze('pfr', listgroup=listgroup)
ax = pfr.plot(subjgroup=subjgroup, plot_type=plot_type)
lagcrp = egg.analyze('lagcrp', listgroup=listgroup)
ax = lagcrp.plot(subjgroup=subjgroup, plot_type=plot_type)
Explanation: Plot by subject grouping
End of explanation |
11,886 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Map, Filter, and Reduce Functions
https://www.youtube.com/watch?v=hUes6y2b--0
Step2: filter function is used to select certain pieces of data from a list/tuple or other collection of data.
Step3: reduce | Python Code:
# Calculate the areas of circles for a list of radii
import math
# Calculate the area of a circle
def area(r):
    '''Area of a circle with radius r.'''
return math.pi * (r**2)
# Radii
radii = [2, 5, 7, 1, 0.3, 10]
# method 1
areas = []
for r in radii:
a = area(r)
areas.append(a)
areas
# method 2
[area(r) for r in radii]
# method 3, with the map function; map takes 2 arguments: the first is a function, the second is a list/tuple or other iterable object
map(area, radii)
list(map(area, radii))
# more examples
temps = [("Berlin", 29), ("Cairo", 36), ("Buenos Aires", 19), ("Los Angeles", 26), ("Tokyo", 27), ("New York", 28), ("London", 22), ("Bejing", 32)]
c_to_f = lambda data: (data[0], (9/5)*data[1] + 32)
list(map(c_to_f, temps))
Explanation: Map, Filter, and Reduce Functions
https://www.youtube.com/watch?v=hUes6y2b--0
End of explanation
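# Aside (not from the video): map also accepts more than one iterable, pairing elements
# positionally; a small illustrative example with made-up prices and taxes.
base_prices = [10.0, 8.5, 12.0]
taxes = [0.8, 0.7, 1.1]
list(map(lambda price, tax: price + tax, base_prices, taxes))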
# let's select all data that above the mean
import statistics
data = [1.3, 2.7, 0.8, 4.1, 4.3, -0.1]
avg = statistics.mean(data);avg
# like map. filter 1st take a function, 2nd take a list/tuple...
filter(lambda x: x>avg, data)
list(filter(lambda x: x>avg, data))
list(filter(lambda x: x<avg, data))
# remove empty data
countries = ["", "China", "USA", "Chile", "", "", "Brazil"]
list(filter(None, countries))
Explanation: filter function is used to select certain pieces of data from a list/tuple or other collection of data.
End of explanation
# data: [a1, a2, a3, ....., an]
# function: f(x, y)
# reduce(f, data):
# step 1: val1 = f(a1, a2)
# step 2: val2 = f(val1, a3)
# step 3: val3 = f(val2, a4)
# ......
# step n-1: valn-1= f(valn-2, an)
# return valn-1
from functools import reduce
# multiply all numbers in a list
data = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
multiplier = lambda x, y: x*y
reduce(multiplier, data)
# use for loop instead
product = 1
for i in data:
product = product * i
product
# sum
sum(data)
# use reduce to sum
reduce(lambda x, y:x+y, data)
Explanation: reduce
End of explanation |
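# Aside: reduce also takes an optional third argument, an initial value that seeds the
# accumulation, which is handy when the input list might be empty.
reduce(lambda x, y: x + y, data, 0)  # same sum as above, starting from 0
reduce(lambda x, y: x + y, [], 0)    # returns 0 instead of raising a TypeError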
11,887 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interact Exercise 3
Imports
Step2: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution
Step3: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays
Step4: Compute a 2d NumPy array called phi
Step6: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
Step7: Use interact to animate the plot_soliton_data function versus time. | Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 3
Imports
End of explanation
def soliton(x, t, c, a):
#Return phi(x, t) for a soliton wave with constants c and a.
u = 0.5*c*(np.cosh((0.5*np.sqrt(c)*(x-c*t-a))))**-2
return u
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))
Explanation: Using interact for animation with data
A soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or t are NumPy arrays, in which case it should return a NumPy array itself.
End of explanation
tmin = 0.0
tmax = 10.0
tpoints = 100
t = np.linspace(tmin, tmax, tpoints)
xmin = 0.0
xmax = 10.0
xpoints = 200
x = np.linspace(xmin, xmax, xpoints)
c = 1.0
a = 0.0
Explanation: To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:
End of explanation
phi = np.zeros((xpoints, tpoints))
for i in range(xpoints):
phi[i,:] = soliton(x[i], t, c, a)
#Another way:
#phi = np.empty((xpoints, tpoints))
assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a)
Explanation: Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$.
End of explanation
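# Aside (not required by the exercise): the same 2d array can be produced without the
# explicit loop by broadcasting x and t against each other, since soliton() already
# works on NumPy arrays.
X_grid, T_grid = np.meshgrid(x, t, indexing='ij')  # both have shape (xpoints, tpoints)
phi_vectorised = soliton(X_grid, T_grid, c, a)
assert np.allclose(phi_vectorised, phi)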
def plot_soliton_data(i=0):
    '''Plot the soliton data at t[i] versus x.'''
plt.plot(x, phi[:,i])
plt.xlabel('x')
plt.ylabel('Phi (x,t)')
plt.grid(True)
plt.title('The Awesome Soliton Wave')
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function
Explanation: Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful.
End of explanation
interact(plot_soliton_data, i = (0, tpoints-1)); # I got help during class yaaaay
assert True # leave this for grading the interact with plot_soliton_data cell
Explanation: Use interact to animate the plot_soliton_data function versus time.
End of explanation |
11,888 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Grade
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output
Step5: Problem set #2
Step6: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output
Step7: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step8: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step9: EXTREME BONUS ROUND
Step10: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint
Step11: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint
Step12: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step13: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step14: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step15: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output | Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
Explanation: Grade: 10 / 11
Homework #4
These problem sets focus on list comprehensions, string operations and regular expressions.
Problem set #1: List slices and list comprehensions
Let's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:
End of explanation
# TA-COMMENT: You commented out the answer!
raw_data = numbers_str.split(",")
numbers = []
for i in raw_data:
numbers.append(int(i))
numbers
#max(numbers)
Explanation: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
End of explanation
sorted(numbers)[11:]
Explanation: Great! We'll be using the numbers list you created above in the next few problems.
In the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:
[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]
(Hint: use a slice.)
End of explanation
# TA-COMMENT: (-1) This isn't sorted -- it doesn't match Allison's expected output.
[i for i in numbers if i % 3 == 0]
# TA-COMMENT: This would have been an acceptable answer.
[i for i in sorted(numbers) if i % 3 == 0]
Explanation: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:
[120, 171, 258, 279, 528, 699, 804, 855]
End of explanation
from math import sqrt
[sqrt(i) for i in numbers if i < 100]
Explanation: Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:
[2.6457513110645907, 8.06225774829855, 8.246211251235321]
(These outputs might vary slightly depending on your platform.)
End of explanation
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
Explanation: Problem set #2: Still more list comprehensions
Still looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.
End of explanation
earth_diameter = [i['diameter'] for i in planets if i['name'] == "Earth"]
earth = int(earth_diameter[0])
[i['name'] for i in planets if i['diameter'] > 4 * earth]
Explanation: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output:
['Jupiter', 'Saturn', 'Uranus']
End of explanation
#count = 0
#for i in planets:
#count = count + i['mass']
#print(count)
sum([i['mass'] for i in planets])
Explanation: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79
End of explanation
[i['name'] for i in planets if "giant" in i['type']]
Explanation: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:
['Jupiter', 'Saturn', 'Uranus', 'Neptune']
End of explanation
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
Explanation: EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:
['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']
Problem set #3: Regular expressions
In the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.
End of explanation
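# For reference (the bonus expression above was left unanswered in this submission):
# a sketch using the key parameter of sorted, applied to the planets list defined earlier,
# which sorts the dictionaries by their 'moons' value before pulling out the names.
[p['name'] for p in sorted(planets, key=lambda p: p['moons'])]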
# TA-COMMENT: A better way of writing this regular expression: r"\b\w{4}\b \b\w{4}\b"
[line for line in poem_lines if re.search(r"\b\w\w\w\w\b \b\w\w\w\w\b", line)]
Explanation: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
In the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \b anchor. Don't overthink the "two words in a row" requirement.)
Expected result:
['Then took the other, as just as fair,',
'Had worn them really about the same,',
'And both that morning equally lay',
'I doubted if I should ever come back.',
'I shall be telling this with a sigh']
End of explanation
[line for line in poem_lines if re.search(r"\b\w{5}[^0-9a-zA-Z]?$", line)]
Explanation: Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:
['And be one traveler, long I stood',
'And looked down one as far as I could',
'And having perhaps the better claim,',
'Though as for that the passing there',
'In leaves no step had trodden black.',
'Somewhere ages and ages hence:']
End of explanation
all_lines = " ".join(poem_lines)
Explanation: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
End of explanation
re.findall(r"I (\b\w+\b)", all_lines)
#re.findall(r"New York (\b\w+\b)", all_subjects)
Explanation: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:
['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']
End of explanation
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
Explanation: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
End of explanation
# TA-COMMENT: Note that 'price' should contain floats, not strings!
menu = []
for item in entrees:
menu_items = {}
match = re.search(r"^(.*) \$(\d{1,2}\.\d{2})", item)
#print("name",match.group(1))
#print("price", match.group(2))
#menu_items.update({'name': match.group(1), 'price': match.group(2)})
if re.search("v$", item):
menu_items.update({'name': match.group(1), 'price': match.group(2), 'vegetarian': True})
else:
menu_items.update({'name': match.group(1), 'price': match.group(2),'vegetarian': False})
menu_items
menu.append(menu_items)
menu
Explanation: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
Expected output:
[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',
'price': 10.95,
'vegetarian': False},
{'name': 'Lavender and Pepperoni Sandwich ',
'price': 8.49,
'vegetarian': False},
{'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',
'price': 12.95,
'vegetarian': True},
{'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',
'price': 9.95,
'vegetarian': True},
{'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',
'price': 19.95,
'vegetarian': False},
{'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]
End of explanation |
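# For reference (not part of the graded submission): following the TA's comment that
# 'price' should hold floats, the same parsing loop with the captured price cast to float
# and a simplified vegetarian flag.
menu_float = []
for item in entrees:
    match = re.search(r"^(.*) \$(\d{1,2}\.\d{2})", item)
    menu_float.append({'name': match.group(1),
                       'price': float(match.group(2)),
                       'vegetarian': bool(re.search(r"v$", item))})
menu_float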
11,889 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
PP2P protocol
By Roman Sasik ([email protected])
This Notebook describes the steps used in Gene Ontology analysis, which produces both conditional and unconditional posterior probabilities that a GO term is differentially regulated in a given experiment. It is assumed that posterior probabilities for all genes have been calculated, either directly using eBayes in the limma R package, or indirectly using the lfdr function of the qvalue R package. The conditional probabilities are defined in the context of the GO graph structure
Step1: Input file
There is one input file - a tab-delimited list of genes, which must meet these requirements
Step2: The output is a number of files
Step3: Demultiplexing phenotyping reads
The graph looks like this | Python Code:
!gfortran -ffree-line-length-none PP2p_branch_conditional_exact_pvalues.f90
!ls
Explanation: PP2P protocol
By Roman Sasik ([email protected])
This Notebook describes the steps used in Gene Ontology analysis, which produces both conditional and unconditional posterior probabilities that a GO term is differentially regulated in a given experiment. It is assumed that posterior probabilities for all genes have been calculated, either directly using eBayes in the limma R package, or indirectly using the lfdr function of the qvalue R package. The conditional probabilities are defined in the context of the GO graph structure:
<img src = "files/GOstructure.png">
The node in red can be pronounced conditionally significant if it is significant given the status of its descendant nodes. For instance, if the dark grey node had been found significant and the light grey nodes had been found not significant, the red node can be declared significant only if there are more significant genes in it than in the dark grey node.
The program PP2P works for both "continuous" PP's as well as for a simple cutoff, which is equivalent to the usual two-gene-list GO analysis (one a significant gene list, the other the expressed gene list).
The algorithm is described in this paper:
"Posterior conditional probabilities for gene ontologies", R Sasik and A Chang, (to be published)
GNU fortran compiler gfortran is assumed to be installed (is part of gcc).
Compilation of the code
Execute this command:
End of explanation
!./a.out BP C_T_PP.txt 0.01
Explanation: Input file
There is one input file - a tab-delimited list of genes, which must meet these requirements: 1) the first line is the header line; 2) the first column contains Entrez gene IDs of all expressed genes, in no particular order, and each gene ID must be present only once; 3) the second column contains posterior probabilities (PP) of the genes in the first column. An example of such a file is C_T_PP.txt. The genes in it are ordered by their PP but they don't have to be. This is the top of that file:
<img src = "files/input.png">
There are times when we do not have the PP's, but instead have a "list of DE genes." In that case, define PP's in the second column as 1 when the corresponding gene is among the significant genes and 0 otherwise. An example of such a file is C_T_1652.txt. (The 1652 in the file name indicates the number of significant genes, but it has no other significance).
Running PP2p
Enter this command if you want to find differentially regulated GO terms in the Biological Process ontology, in the experiment defined by the input file C_T_PP.txt, and if you want a term reported as significant with posterior error probability of 0.01:
End of explanation
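# A hedged sketch with hypothetical file names (not part of the original protocol): if you
# only have a flat list of DE genes, the 0/1 version of the input file described above can
# be assembled with pandas -- tab-delimited, header line, Entrez IDs first, PP second.
import pandas as pd
expressed = pd.read_csv('expressed_genes.txt', sep='\t')               # hypothetical: all expressed genes
de_genes = set(pd.read_csv('significant_genes.txt', header=None)[0])   # hypothetical: DE gene list
expressed['PP'] = expressed.iloc[:, 0].isin(de_genes).astype(int)
expressed.to_csv('C_T_binary_PP.txt', sep='\t', index=False)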
!dot -Tfig BP_C_T_PP_0.01_conditional.dot > BP_C_T_PP_0.01_conditional.fig
!fig2dev -L pdf BP_C_T_PP_0.01_conditional.fig BP_C_T_PP_0.01_conditional.pdf
!ls *pdf
Explanation: The output is a number of files:
Conditional reporting is done in these files:
BP_C_T_PP_0.01_conditional.dot
BP_C_T_PP_0.01_conditional_lfdr_expanded.txt
BP_C_T_PP_0.01_conditional_lfdr.txt
Unconditional reporting is done in these files (BH indicates Benjamini-Hochberg adjustment of raw p-values; lfdr indicates local false discovery rate (Storey) corresponding to the raw p-values):
BP_C_T_PP_0.01_unconditional_BH_expanded.txt
BP_C_T_PP_0.01_unconditional_BH.txt
BP_C_T_PP_0.01_unconditional.dot
BP_C_T_PP_0.01_unconditional_lfdr_expanded.txt
BP_C_T_PP_0.01_unconditional_lfdr.txt
For instance, the simple list of conditionally significant GO terms is in BP_C_T_PP_0.01_conditional_lfdr.txt and looks like this:
<img src = "files/xls_conditional.png">
This is the entire file. There are no more conditionally significant GO terms. The way to read this output is from top to bottom, as GO terms are reported in levels depending on the significance (or not) of their child terms. Therefore, the "level" column also corresponds to the level of the GO organization - the lower the level, the more specific (and smaller) the term is.
The expanded files contain all the genes from the reported GO terms. For instance, the top of BP_C_T_PP_0.01_conditional_lfdr_expanded.txt looks like this:
<img src = "files/xls_conditional_expanded.png">
The .dot files encode the ontology structure of the significant terms. Convert them into pdf files using the following commands:
End of explanation
!rm BP_C_T*
Explanation: Demultiplexing phenotyping reads
The graph looks like this:
<img src = "files/dot_conditional.png">
Cleanup after exercise:
End of explanation |
11,890 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div align="right">Python 3.6 Jupyter Notebook</div>
Introduction to Bandicoot
Your completion of the notebook exercises will be graded based on your ability to do the following
Step1: 1. Data set review
Some of the relevant information pertaining to the “Friends and Family” data set is repeated here, as you will be focusing on the content of the data in this module's notebooks.
An experiment was designed in 2011 to study how people make decisions (with emphasis on the social aspects involved) and how people can be empowered to make better decisions using personal and social tools. The data set was collected by Nadav Aharony, Wei Pan, Cory Ip, Inas Khayal, and Alex Pentland. More details about this data set are available through the MIT Reality Commons resource.
The subjects are members of a young-family, residential, living community adjacent to a major research university in North America. All members of the community are couples, and at least one of the members is affiliated with the university. The community comprises over 400 residents, approximately half of whom have children. A pilot phase of 55 participants was launched in March 2010. In September 2010, phase two of the study included 130 participants – approximately 64 families. Participants were selected out of approximately 200 applicants in a way that would achieve a representative sample of the community and sub-communities (Aharony et al. 2011).
In this module, you will prepare and analyze the data in a social context, using tools and methods introduced in previous modules.
2. Calculating summary statistics
To better understand the process of creating features (referred to as behavioral indicators in Bandicoot), you will start by manually creating a feature. Creating features is a tedious process. Using libraries that are tailored for specific domains (such as Bandicoot) can significantly reduce the time required to generate features that you would use as input in machine-learning algorithms. It is important for you to both understand the process and ensure that the results produced by the libraries are as expected. In other words, you need to make sure that you use the correct function from the library to produce the expected results.
2.1 Average weekly call duration
In the first demonstration of automated analysis using Bandicoot, you will evaluate the average weekly call duration for a specific user, based on the interaction log file.
2.1.1 Data preparation
First, review the content of the text file containing the records using the bash (command line) command, "head". This function has been demonstrated in earlier notebooks, and it is extremely useful when you need to get a quick view of a data set without loading it into your analysis environment. Should the contents prove useful, you can load it as a DataFrame.
Step2: The first three lines of the file are displayed. They contain a header row, as well as two data rows.
Next, load the data set using the Pandas "read_csv" function, and use the "datetime" field as the DataFrame index. This example only focuses on calls. You will create a new DataFrame containing only call data by filtering by the type of interaction.
Step3: The "correspondent_id" field contains the user ID for the other party involved in a specific call interaction. Each "correpondent_id" is encoded in one of two formats
Step4: Add a column that returns a Boolean indicating whether the value of "correspondent_id" is a hexadecimal or not, using the function defined above. This column can be used to filter interactions that only involve those users in the study population.
Step5: 2.1.2 Calculating the weekly average call duration
Performing the calculation is a two-step process
Step6: Now that you have the weekly averages and standard deviation of the call duration, you can compute the mean weekly call duration and the mean weekly call duration standard deviation.
Step7: It is possible to use generic data analysis libraries (such as Pandas) that were introduced to you in earlier modules. However, in the next section of this notebook, you will return to a library briefly introduced to you in Module 2 of this course, namely Bandicoot.
Step8: 2.2 Using Bandicoot
Bandicoot is an open-source Python toolbox used to analyze mobile phone metadata. You can perform actions – similar to your manual steps – with a single command, using this library.
The manual analysis of data sets can be an extremely tedious and resource-intensive process. Although it is outside of the scope of this course, it is important to start considering memory utilization, reproducibility of results, and the reuse of intermediate steps when working with large data sets. Toolboxes, such as Bandicoot, are optimized for improved performance, and specific to mobile phone metadata analysis, meaning that the functions available are specific to the type of data to be analyzed.
Please review the Bandicoot reference manual for details on functions, modules, and objects included in Bandicoot. Bandicoot has been preinstalled on your virtual analysis environment. Revisit the Bandicoot quick guide should you wish to set up this library in another environment.
In the following example, you will redo the analysis from the previous section, and work on additional examples of data manipulation using Bandicoot.
Load the data
This example starts with using the Bandicoot “import” function to load the input files. Note that the “import” function expects data in a specific format, and it provides additional information that allows you to better understand your data set.
Step9: Note
Step10: You can see that the results (above) are in line with the manual calculation (which was rounded to five decimals) that you performed earlier.
By default, Bandicoot computes indicators on a weekly basis, and returns the average (mean) over all of the weeks available, and the standard deviation (std), in a nested dictionary. You can read more about the creation of indicators in the Bandicoot documentation.
The screenshot below demonstrates the format of the output produced.
To change the default behavior, and review the daily resolution, you can use “groupby”, in Bandicoot, as a method call argument. Other grouping parameters include "month", "year", and “None”.
Now, change the grouping resolution to "day", in the following call, and display a summary of additional statistics by including a parameter for the summary argument.
Step11: Note
Step12: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete
Step13: The argument "split_day" is now demonstrated (below) to allow you to view all available strata.
Step14: Note
Step15: Note
Step16: Note
Step17: Note
Step18: <br>
<div class="alert alert-info">
<b>Exercise 2 End.</b>
</div>
Exercise complete
Step19: Image displaying sample output
Step20: The Bandicoot "read_csv()" function loads the data, provides summary information, and removes the records that are not of interest in the analysis. Typically, performing the data cleansing steps is time-consuming, and prone to error or inconsistencies.
The graph data is stored as an adjacency matrix.
Note
Step21: There are several types of adjacency matrices available in Bandicoot, including the following
Step22: 4.3 Gender assortativity
This indicator computes the assortativity of nominal attributes. More specifically, it measures the similarity of the current user to their correspondents for all Bandicoot indicators. For each one, it calculates the variance of the current user’s value with the values for all of their correspondents. This indicator measures the homophily of the current user with their correspondents, for each attribute. It returns a value between 0 (no assortativity) and 1 (all the contacts share the same value), which indicates the percentage of contacts sharing the same value.
Let's demonstrate this by reviewing the gender assortativity.
Step23: <br>
<div class="alert alert-info">
<b>Exercise 3 Start.</b>
</div>
Instructions
In the previous example, you obtained a value of 0.714 or 71.4% for gender assortativity. Random behavior would typically deliver a value centered around 50%, if you have enough data points.
Question
Step24: Create directed unweighted and undirected weighted graphs to visualize network, in order to better understand the user behavior (as per the examples in Section 1.2 of Module 4’s Notebook 2).
Step25: 4.4.1 Plot the directed unweighted graph
This can typically be utilized to better understand the flow or spread of information in a network.
Step26: 4.4.2 Plot the undirected weighted graph
This can typically be utilized to better understand the importance of the various individuals and their interactions in the network.
Step27: Note
Step28: 5.2 Error example
Step29: Review the errors below to quickly get a view of your data set. This example includes warnings that are in addition to the missing location and antenna warnings explained earlier. The new warnings include
Step30: 5.2.2 Duplicated records
These records are retained by default, but you can change this behavior by adding the parameter “drop_duplicates=True” when loading files.
Warning
Step31: <br>
<div class="alert alert-info">
<b>Exercise 4 Start.</b>
</div>
Instructions
When working with data of any size or volume, data error handling can be a complex task.
1. List three important topics to consider when performing data error handling.
2. Provide a short description of your view of the topics to consider. Your answer should be one or two sentences in length, and can be based on an insight that you reached while completing the course material or from previous experience.
Your markdown answer here.
<br>
<div class="alert alert-info">
<b>Exercise 4 End.</b>
</div>
Exercise complete
Step32: 6.1 Load the files and create a metric
Review the Bandicoot "utils.all" page for more detail.
Step33: 6.2 Save the interactions in a file for future use
Note
Step34: Before moving on, take a quick look at the results of the pipeline.
6.2.1 Review the data for the first user
Keep in mind that, in manual approaches, you would likely have to create each of these features by hand. The process entails thinking about features, and reviewing available literature to identify applicable features. These features are used in machine learning techniques (including feature selection) for various use cases.
Note | Python Code:
import os
import pandas as pd
import bandicoot as bc
import numpy as np
import matplotlib
Explanation: <div align="right">Python 3.6 Jupyter Notebook</div>
Introduction to Bandicoot
Your completion of the notebook exercises will be graded based on your ability to do the following:
Understand: Do your pseudo-code and comments show evidence that you recall and understand technical concepts?
Apply: Are you able to execute code (using the supplied examples) that performs the required functionality on supplied or generated data sets?
Evaluate: Are you able to interpret the results and justify your interpretation based on the observed data?
Notebook objectives
By the end of this notebook, you will be expected to:
Understand the use of Bandicoot in automating the analysis of mobile phone data records; and
Understand data error handling.
List of exercises
Exercise 1: Calculating the number of call contacts.
Exercise 2: Determining average day and night weekly call activity rates.
Exercise 3: Interpreting gender assortativity values.
Exercise 4: Handling data errors.
Notebook introduction
This course started by introducing you to tools and techniques that can be applied in analyzing data. This notebook briefly revisits the “Friends and Family” data set (for context purposes), before demonstrating how to generate summary statistics manually, and through Bandicoot. Subsequent sections briefly demonstrate Bandicoot's visualization capabilities, how to use Bandicoot in combination with network and graph content (introduced in Module 4), error handling, and loading files from a directory.
<div class="alert alert-warning">
<b>Note</b>:<br>
It is strongly recommended that you save and checkpoint after applying significant changes or completing exercises. This allows you to return the notebook to a previous state should you wish to do so. On the Jupyter menu, select "File", then "Save and Checkpoint" from the dropdown menu that appears.
</div>
Load libraries
End of explanation
# Retrieve the first three rows from the "clean_records" data set.
!head -n 3 ../data/bandicoot/clean_records/fa10-01-08.csv
Explanation: 1. Data set review
Some of the relevant information pertaining to the “Friends and Family” data set is repeated here, as you will be focusing on the content of the data in this module's notebooks.
An experiment was designed in 2011 to study how people make decisions (with emphasis on the social aspects involved) and how people can be empowered to make better decisions using personal and social tools. The data set was collected by Nadav Aharony, Wei Pan, Cory Ip, Inas Khayal, and Alex Pentland. More details about this data set are available through the MIT Reality Commons resource.
The subjects are members of a young-family, residential, living community adjacent to a major research university in North America. All members of the community are couples, and at least one of the members is affiliated with the university. The community comprises over 400 residents, approximately half of whom have children. A pilot phase of 55 participants was launched in March 2010. In September 2010, phase two of the study included 130 participants – approximately 64 families. Participants were selected out of approximately 200 applicants in a way that would achieve a representative sample of the community and sub-communities (Aharony et al. 2011).
In this module, you will prepare and analyze the data in a social context, using tools and methods introduced in previous modules.
2. Calculating summary statistics
To better understand the process of creating features (referred to as behavioral indicators in Bandicoot), you will start by manually creating a feature. Creating features is a tedious process. Using libraries that are tailored for specific domains (such as Bandicoot) can significantly reduce the time required to generate features that you would use as input in machine-learning algorithms. It is important for you to both understand the process and ensure that the results produced by the libraries are as expected. In other words, you need to make sure that you use the correct function from the library to produce the expected results.
2.1 Average weekly call duration
In the first demonstration of automated analysis using Bandicoot, you will evaluate the average weekly call duration for a specific user, based on the interaction log file.
2.1.1 Data preparation
First, review the content of the text file containing the records using the bash (command line) command, "head". This function has been demonstrated in earlier notebooks, and it is extremely useful when you need to get a quick view of a data set without loading it into your analysis environment. Should the contents prove useful, you can load it as a DataFrame.
End of explanation
# Specify the user for review.
user_id = 'sp10-01-08'
# Load the data set and set the index.
interactions = pd.read_csv('../data/bandicoot/clean_records/' + user_id + '.csv')
interactions.set_index(pd.DatetimeIndex(interactions['datetime']), inplace=True)
# Extract the calls.
calls = interactions[interactions.interaction == 'call'].copy()
# Display the head of the new calls dataframe.
calls.head(3)
Explanation: The first three lines of the file are displayed. They contain a header row, as well as two data rows.
Next, load the data set using the Pandas "read_csv" function, and use the "datetime" field as the DataFrame index. This example only focuses on calls. You will create a new DataFrame containing only call data by filtering by the type of interaction.
End of explanation
def is_hex(s):
'''
Check if a string is hexadecimal.
'''
try:
int(s, 16)
return True
except ValueError:
return False
Explanation: The "correspondent_id" field contains the user ID for the other party involved in a specific call interaction. Each "correpondent_id" is encoded in one of two formats:
1. A hexadecimal integer that indicates the corresponding party did not form part of the study.
2. A non-hexadecimal (string) data type for a party within the study group.
The provided function below, "is_hex", checks if a string is hexadecimal or not.
End of explanation
calls['is_hex_correspondent_id'] = calls.correspondent_id.apply(lambda x: is_hex(x)).values
calls.head()
Explanation: Add a column that returns a Boolean indicating whether the value of "correspondent_id" is a hexadecimal or not, using the function defined above. This column can be used to filter interactions that only involve those users in the study population.
End of explanation
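# For example, keeping only the calls whose correspondent is part of the study population
# (a non-hexadecimal ID) is a one-line filter on the new column.
in_study_calls = calls[~calls['is_hex_correspondent_id']]
in_study_calls.head(3)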
# Add a field that contains the week number corresponding to a call record.
calls['week'] = calls.index.week
# Get the mean and population(ddof=0) standard deviation of each grouping.
weekly_averages = calls.groupby('week')['call_duration'].agg([np.mean, lambda x: np.std(x, ddof=0)])
# Give the columns names that are intuitive.
weekly_averages.columns = ['mean_duration', 'std_duration']
# Review the data.
weekly_averages.head()
# Retrieve the bins (weeks).
list(weekly_averages.index)
Explanation: 2.1.2 Calculating the weekly average call duration
Performing the calculation is a two-step process:
1. Attribute to each call the value of the week in which the interaction occurred, stored in the variable "week".
Note: This is possible, in this case, because the data range is within a specific year. Otherwise, you would have attributed the call to both the year and the week the interaction occurred.
2. Use the Pandas "pd.group_by()" method (demonstrated in Module 2) to bin the data on the basis of the week of interaction.
End of explanation
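# If the records spanned more than one calendar year, the note above suggests grouping on
# year and week together; a sketch of that variant, mirroring the aggregation used earlier.
calls['year'] = calls.index.year
yearly_weekly_averages = calls.groupby(['year', 'week'])['call_duration'].agg(
    [np.mean, lambda x: np.std(x, ddof=0)])
yearly_weekly_averages.columns = ['mean_duration', 'std_duration']
yearly_weekly_averages.head()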
print ("The average weekly call duration for the user is {:.3f}, while the average weekly standard deviation is {:.3f}."
.format(weekly_averages.mean_duration.mean(), weekly_averages.std_duration.mean()))
Explanation: Now that you have the weekly averages and standard deviation of the call duration, you can compute the mean weekly call duration and the mean weekly call duration standard deviation.
End of explanation
weekly_averages.describe()
Explanation: It is possible to use generic data analysis libraries (such as Pandas) that were introduced to you in earlier modules. However, in the next section of this notebook, you will return to a library briefly introduced to you in Module 2 of this course, namely Bandicoot.
End of explanation
B = bc.read_csv(user_id, '../data/bandicoot/clean_records/', '../data/bandicoot/antennas.csv')
Explanation: 2.2 Using Bandicoot
Bandicoot is an open-source Python toolbox used to analyze mobile phone metadata. You can perform actions – similar to your manual steps – with a single command, using this library.
The manual analysis of data sets can be an extremely tedious and resource-intensive process. Although it is outside of the scope of this course, it is important to start considering memory utilization, reproducibility of results, and the reuse of intermediate steps when working with large data sets. Toolboxes, such as Bandicoot, are optimized for improved performance, and specific to mobile phone metadata analysis, meaning that the functions available are specific to the type of data to be analyzed.
Please review the Bandicoot reference manual for details on functions, modules, and objects included in Bandicoot. Bandicoot has been preinstalled on your virtual analysis environment. Revisit the Bandicoot quick guide should you wish to set up this library in another environment.
In the following example, you will redo the analysis from the previous section, and work on additional examples of data manipulation using Bandicoot.
Load the data
This example starts with using the Bandicoot “import” function to load the input files. Note that the “import” function expects data in a specific format, and it provides additional information that allows you to better understand your data set.
End of explanation
# Calculate the call_duration summary statistics using Bandicoot.
bc.individual.call_duration(B)
Explanation: Note:
WARNING:root:100.00% of the records are missing a location.
This message indicates that our data set does not include any antenna IDs. This column was removed from the DataFrame in order to preserve user privacy. A research study on the privacy bounds of human mobility indicated that knowing four spatio-temporal points (approximate places and times of an individual) is enough to re-identify an individual in an anonymized data set in 95% of the cases.
2.2.1 Compute the weekly average call duration
In Bandicoot, you can achieve the same result demonstrated earlier with a single method call named "call_duration".
End of explanation
bc.individual.call_duration(B, groupby='day', interaction='call', summary='extended')
Explanation: You can see that the results (above) are in line with the manual calculation (which was rounded to five decimals) that you performed earlier.
By default, Bandicoot computes indicators on a weekly basis, and returns the average (mean) over all of the weeks available, and the standard deviation (std), in a nested dictionary. You can read more about the creation of indicators in the Bandicoot documentation.
The screenshot below demonstrates the format of the output produced.
To change the default behavior, and review the daily resolution, you can use “groupby”, in Bandicoot, as a method call argument. Other grouping parameters include "month", "year", and “None”.
Now, change the grouping resolution to "day", in the following call, and display a summary of additional statistics by including a parameter for the summary argument.
End of explanation
# Your code here.
Explanation: Note:
You will notice that you can switch between groupings by day, week, or month with ease. This is one of the advantages referred to earlier. In cases where you manually analyze the data, you would have had to manually create these features, or utilize much more resource-intensive parsing functions in order to achieve similar results. You can choose to include all options or change to a new grouping with minimal changes required from your side, and no additional functions needing to be created.
<br>
<div class="alert alert-info">
<b>Exercise 1 Start.</b>
</div>
Instructions
Compute the average number of call contacts for data set, B, grouped by:
Month; and
Week.
Hint: You can review the help file for the "number_of_contacts" function to get started.
End of explanation
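A hedged sketch of one possible approach to Exercise 1 is given below; it simply reuses the groupby and interaction keyword arguments demonstrated for the other indicators, and is not the official solution:
```
# Possible sketch for Exercise 1: average number of call contacts per month and per week.
contacts_by_month = bc.individual.number_of_contacts(B, groupby='month', interaction='call')
contacts_by_week = bc.individual.number_of_contacts(B, groupby='week', interaction='call')
print(contacts_by_month)
print(contacts_by_week)
```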
# Use bandicoot to split the records by day.
bc.individual.number_of_interactions(B, groupby='day', split_day=True, interaction='call')
# Plot the results. The mean is plotted as a barplot, with the std deviation as an error bar.
%matplotlib inline
interactions_split_by_day = bc.individual.number_of_interactions(B, groupby='day', split_day=True, interaction='call')
interactions_split = []
for period, values in interactions_split_by_day['allweek'].items():
interactions_split.append([period, values['call']['mean'], values['call']['std']])
interactions_split = pd.DataFrame(interactions_split,columns=['period', 'mean','std'])
interactions_split[['period', 'mean']].plot(kind='bar' , x='period', title='Daily vs nightly interactions',
yerr=interactions_split['std'].values, )
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 1 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
2.2.2 Splitting records
Regardless of the grouping time resolution, it is often useful to stratify the data between weekday and weekend, or day and night. Bandicoot allows you to achieve this with its Boolean split arguments, "split_week" and "split_day". You can read more about Bandicoot’s "number_of_interactions", and then execute the code below to view the data on the daily number of interactions stratified with "split_week".
Note:
This strategy is employed to generate features to be processed by machine learning algorithms, where the algorithms can identify behavior which is not visible at small scale. In 2015, a study, titled "Predicting Gender from Mobile Phone Metadata" (presented at the Netmob Conference, Cambridge), showed that the most predictive feature for men in a South Asian country is the "percent of calls initiated by the person during weekend nights", while the most predictive feature for men in the European Union is "the maximum text response delay during the week" (Jahani et al. 2015).
<img src="Gender_features.png" alt="Drawing" style="width: 500px;"/>
End of explanation
bc.individual.number_of_interactions(B, groupby='day', split_week=True, split_day=True, interaction='call')
Explanation: The argument "split_day" is now demonstrated (below) to allow you to view all available strata.
End of explanation
# Active days.
bc.individual.active_days(B)
Explanation: Note:
The number of interactions is higher for “day” compared to “night”, as well as for “weekday” compared to “weekend”.
2.2.3 Other indicators
Machine learning algorithms use features for prediction and clustering tasks. Difficulty arises when manually generating these features. However, using custom libraries (such as Bandicoot) to generate them on your behalf can significantly speed up and standardize the process. In earlier modules, you performed manual checks on data quality. Experience will teach you that this step always takes longer than anticipated, and requires significant effort to determine the relevant questions, and then to execute them. Using a standardized library such as Bandicoot saves time in analyzing data sets, and spotting data quality issues, and makes the actions repeatable or comparable with other data sets or analyses.
Two additional features are demonstrated here. You can refer to the Bandicoot reference material for additional available features.
Active days (days with at least one interaction)
End of explanation
# Number of contacts.
bc.individual.number_of_contacts(B, split_week=True)
Explanation: Note:
Remember that Bandicoot defaults to grouping by week, if the grouping is not explicitly specified.
Number of contacts
This number can be interesting, as some research suggests that it is predictable for humans, and that, in the long run, it is near constant for any individual. Review the following articles for additional information:
- Your Brain Limits You to Just Five BFFs
- Limited communication capacity unveils strategies for human interaction
End of explanation
bc.utils.all(B)
Explanation: Note:
It appears as though there might be a difference between the number of people contacted by phone between the weekend and weekdays.
All available features
Bandicoot currently contains 1442 features. You can obtain a quick overview of the features for this data set using the Bandicoot "utils.all" function. The three categories of indicators are individual, spatial, and network-related features.
End of explanation
# Your code here.
Explanation: Note:
The “reporting” variables allow you to better understand the nature and origin of the data, as well as which computations have been performed (which version of the code, etc.).
<br>
<div class="alert alert-info">
<b>Exercise 2 Start.</b>
</div>
Instructions
Using Bandicoot, find the user activity rate during the week and on weekends. Show your calculations and express your answer as a percentage using the print statement.
Note: Five days constitute the maximum number of weekdays, and two days are the maximum possible number of weekend days.
End of explanation
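One possible hedged sketch for Exercise 2 follows; it assumes active_days accepts the same split_week flag as the other indicators and that the nested-dictionary keys mirror the layout shown earlier, so both should be checked before relying on them:
```
# Possible sketch for Exercise 2: activity rate on weekdays vs. weekends.
active = bc.individual.active_days(B, split_week=True)
print(active)  # inspect the nested dictionary to confirm the exact key names
# Assuming mean active days are exposed under 'weekday' and 'weekend':
weekday_mean = active['weekday']['allday']['callandtext']['mean']
weekend_mean = active['weekend']['allday']['callandtext']['mean']
print("Weekday activity rate: {:.1f}%".format(100 * weekday_mean / 5))
print("Weekend activity rate: {:.1f}%".format(100 * weekend_mean / 2))
```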
# Import the relevant libraries.
import os
from IPython.display import IFrame
# Set the path to store the visualization.
viz_path = os.path.dirname(os.path.realpath(__name__)) + '/viz'
# Create the visualization.
bc.visualization.export(B, viz_path)
# Display the visualization in a frame within this notebook.
IFrame("./viz/index.html", "100%", 700)
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 2 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
3. Visualization with Bandicoot
Now that you have more background information on the toolbox and its capabilities, the visualization demonstrated in Module 2 will be repeated. As Yves-Alexandre de Montjoye mentioned in the video content, visualization is a powerful tool. This is not only with regard to communicating your final results, but also in terms of checking the validity of your data, in order to identify errors and outliers. Bandicoot is also a powerful tool when used in visually identifying useful patterns that are hidden by the aggregation processes applied to the raw data.
<div class="alert alert-warning">
<b>Note</b>:<br>
There is a problem with the current version of Jupyter notebooks which does not render HTML portions correctly, so the frame below is not functional at this stage. It is included for students who utilize the code elsewhere, and a static image of the output is included below.
</div>
End of explanation
# Specify the network folder containing all the data.
network_folder = '../data/bandicoot/network_records/'
# Create Bandicoot object.
BN = bc.read_csv(user_id, network_folder, attributes_path='../data/bandicoot/attributes',network=True)
Explanation: Image displaying sample output:
<img src="M5NB1_Img.png" alt="Bandicoot output screenshot" style="width: 500px;"/>
Note:
To serve the results in the notebook, "IFrame" is used. You can also serve the results as a web page using tools provided in Bandicoot. This function will not be demonstrated in this course, as the required ports on the AWS virtual analysis environment have not been opened.
You can review the Bandicoot quickstart guide for more details on the "bc.visualization.run(U)" command. You can use this function to serve the visualization as a web page if you choose to install bandicoot on infrastructure where you do have access to the default port (4242). (This port is not open on your AWS virtual analysis environment.)
4. Graphs and matrices
This section contains network indicators, a gender assortativity example, and a brief demonstration of how to use Bandicoot to generate input for visualizations, using NetworkX. At the start of the course, Professor Pentland described general patterns in behavior that are observed between individuals. Understanding an individual as a part of a network is an extremely useful way to evaluate how they resemble or do not resemble their friends, as well as the role they play in their network or community.
In the current “Friends and Family” data set, the majority of interactions take place outside of the population in the study. Therefore, performing the calculations on this data set does not make sense. This is because the data is not representative of the full network of contacts. In a commercial application, you would most likely encounter a similar situation as there are multiple carriers, each with only a portion of the total market share. The figures differ per country, but typically fall in the range of 10-30% market share for the main (dominant) carriers.
You need to prepare a separate, trimmed data set to demonstrate this example.
A useful feature of Bandicoot is that it analyzes a user's ego network, or individual focus node quickly, if the input data is properly formatted. Start by loading the "ego" in question to a Bandicoot object. You need to set the network parameter to "True". Bandicoot will attempt to extract all "ego" interaction data, and do the network analysis for the data contained in the specified network folder.
4.1 Load the data
End of explanation
# Index of the adjacency matrix - user_ids participating in the network.
node_labels = bc.network.matrix_index(BN)
node_labels
Explanation: The Bandicoot "read_csv()" function loads the data, provides summary information, and removes the records that are not of interest in the analysis. Typically, performing the data cleansing steps is time-consuming, and prone to error or inconsistencies.
The graph data is stored as an adjacency matrix.
Note:
You will recall adjacency matrices (from Module 4) as a useful mechanism to represent finite graphs. Bandicoot stores the graph information in an adjacency matrix, and the corresponding matrix index in a separate object. Once the data has been loaded, you can start exploring the graph.
4.2 Network indicators
End of explanation
# Directed unweighted matrix.
directed_unweighted = bc.network.matrix_directed_unweighted(BN)
directed_unweighted
# Undirected weighted matrix.
undirected_weighted = bc.network.matrix_undirected_weighted(BN)
undirected_weighted
Explanation: There are several types of adjacency matrices available in Bandicoot, including the following:
bc.network.matrix_directed_weighted(network_user)
bc.network.matrix_directed_unweighted(network_user)
bc.network.matrix_undirected_weighted(network_user)
bc.network.matrix_undirected_unweighted(network_user)
You can review the Bandicoot network documentation for additional information.
End of explanation
bc.network.assortativity_attributes(BN)['gender']
Explanation: 4.3 Gender assortativity
This indicator computes the assortativity of nominal attributes. More specifically, it measures the similarity of the current user to their correspondents for all Bandicoot indicators. For each one, it calculates the variance of the current user’s value with the values for all of their correspondents. This indicator measures the homophily of the current user with their correspondents, for each attribute. It returns a value between 0 (no assortativity) and 1 (all the contacts share the same value), which indicates the percentage of contacts sharing the same value.
Let's demonstrate this by reviewing the gender assortativity.
End of explanation
# Load the relevant libraries and set plotting options.
import networkx as nx
import matplotlib
%matplotlib inline
matplotlib.rcParams['figure.figsize'] = (18,11)
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 3 Start.</b>
</div>
Instructions
In the previous example, you obtained a value of 0.714 or 71.4% for gender assortativity. Random behavior would typically deliver a value centered around 50%, if you have enough data points.
Question: Do you think the value of 71.4% is meaningful or relevant?
Your answer should consist of “Yes” or “No”, and a short description of what you think the value obtained means, in terms of the data set.
Your markdown answer here.
<br>
<div class="alert alert-info">
<b>Exercise 3 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
4.4 Ego network visualization
You can use the ego network adjacency matrices for further analyses in NetworkX.
End of explanation
# Create the graph objects.
G_directed_unweighted = nx.DiGraph(nx.from_numpy_matrix(np.array(directed_unweighted)))
G_undirected_weighted = nx.from_numpy_matrix(np.array(undirected_weighted))
node_labels = dict(enumerate(node_labels))
Explanation: Create directed unweighted and undirected weighted graphs to visualize network, in order to better understand the user behavior (as per the examples in Section 1.2 of Module 4’s Notebook 2).
End of explanation
# Plot the graph.
layout = nx.spring_layout(G_directed_unweighted)
nx.draw_networkx(G_directed_unweighted, layout, node_color='blue', alpha=0.4, node_size=2000)
_ = nx.draw_networkx_labels(G_directed_unweighted, layout, node_labels)
_ = nx.draw_networkx_edges(G_directed_unweighted, layout,arrows=True)
Explanation: 4.4.1 Plot the directed unweighted graph
This can typically be utilized to better understand the flow or spread of information in a network.
End of explanation
# Plot the graph.
layout = nx.spring_layout(G_undirected_weighted)
nx.draw_networkx(G_undirected_weighted, layout, node_color='blue', alpha=0.4, node_size=2000)
_ = nx.draw_networkx_labels(G_undirected_weighted, layout, node_labels)
Explanation: 4.4.2 Plot the undirected weighted graph
This can typically be utilized to better understand the importance of the various individuals and their interactions in the network.
End of explanation
# Set the path and user for demonstration purposes.
antenna_file = '../data/bandicoot/antennas.csv'
attributes_path = '../data/bandicoot/attributes/'
records_with_errors_path = '../data/bandicoot/records/'
error_user_id = 'fa10-01-04'
Explanation: Note:
Can you think of use cases for the various networks introduced in Module 4?
Feel free to discuss these with your fellow students on the forums.
5. Data error handling
This section demonstrates some of Bandicoot’s error handling and reporting strategies for some of the "faulty" users. Some circumstances may require working with CDR records (and collected mobile phone metadata) that have been corrupted. The reasons for this can be numerous, but typically include wrong formats, faulty files, empty periods of time, and missing users. Bandicoot will not attempt to correct errors, as this might lead to incorrect analyses. Correctness is key in data science, and Bandicoot will:
Warn you when you attempt to import corrupted data;
Remove faulty records; and
Report on more than 30 variables (such as the number of contacts, types of records, records containing location), warning you of potential issues when exporting indicators.
5.1 Bandicoot CSV import
Importing CSV files with Bandicoot will produce warnings about:
No files containing data being found in the specified path;
The percentage of records missing location information;
The number of antennas missing geotags (provided the antenna file has been loaded);
The fraction of duplicated records; and
The fraction of calls with an overlap bigger than 5 minutes.
End of explanation
errors = bc.read_csv(error_user_id, records_with_errors_path )
Explanation: 5.2 Error example
End of explanation
errors.ignored_records
Explanation: Review the errors below to quickly get a view of your data set. This example includes warnings that are in addition to the missing location and antenna warnings explained earlier. The new warnings include:
Missing values of call duration;
Duplicate records; and
Overlapping records.
5.2.1 Rows with missing values
These rows are prudently excluded, and their details can be examined using “errors.ignored_records”.
End of explanation
errors = bc.read_csv(error_user_id, records_with_errors_path, drop_duplicates=True)
Explanation: 5.2.2 Duplicated records
These records are retained by default, but you can change this behavior by adding the parameter “drop_duplicates=True” when loading files.
Warning:
Exercise caution when using this option. The maximum timestamp resolution is one minute, and some of the records that appear to be duplicates may in fact be distinct text messages, or even, although very unlikely, very short calls. As such, it is generally advised that you examine the records before removing them.
End of explanation
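Before opting for drop_duplicates=True, the raw record file can be inspected directly with pandas. A hedged sketch (the file layout is assumed to match the records loaded earlier):
```
# Inspect potential duplicates in the raw records before deciding to drop them.
raw = pd.read_csv(records_with_errors_path + error_user_id + '.csv')
dupes = raw[raw.duplicated(keep=False)]
print('{} potentially duplicated rows'.format(len(dupes)))
dupes.head()
```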
# View the files in the directory using the operating system list function.
!ls ../data/bandicoot/clean_records/
Explanation: <br>
<div class="alert alert-info">
<b>Exercise 4 Start.</b>
</div>
Instructions
When working with data of any size or volume, data error handling can be a complex task.
1. List three important topics to consider when performing data error handling.
2. Provide a short description of your view of the topics to consider. Your answer should be one or two sentences in length, and can be based on an insight that you reached while completing the course material or from previous experience.
Your markdown answer here.
<br>
<div class="alert alert-info">
<b>Exercise 4 End.</b>
</div>
Exercise complete:
This is a good time to "Save and Checkpoint".
6. Loading the full data set
In this section, you will load the full “Friends and Family” reality commons data set, and compute all of the metrics (briefly introduced in Section 2.2.3) for all of the users. You need to specify a "flat" directory, containing files where each file corresponds to a single user, as input. It is crucial that the record-file naming convention is being observed (i.e., the names of the files are the user IDs), and that each user's data resides in a separate file.
End of explanation
# Load libraries and set path options.
import glob, os
records_path = '../data/bandicoot/clean_records/'
# Create an empty list and then cycle through each of the available files in the directory to add features.
features = []
for f in glob.glob(records_path + '*.csv'):
user_id = os.path.basename(f)[:-4]
try:
B = bc.read_csv(user_id, records_path, attributes_path=attributes_path, describe=False, warnings=False)
metrics_dict = bc.utils.all(B, summary='extended', split_day=True, split_week=True)
except Exception as e:
metrics_dict = {'name': user_id, 'error': str(e)}
features.append(metrics_dict)
Explanation: 6.1 Load the files and create a metric
Review the Bandicoot "utils.all" page for more detail.
End of explanation
bc.io.to_csv(features, 'all_features.csv')
Explanation: 6.2 Save the interactions in a file for future use
Note: The application of machine learning techniques, using a similar data set, will be explored in the next notebook.
End of explanation
# Display the length or number of users with in the features list.
len(features)
# Print the list of users' names contained in features list
for u in features:
print(u['name'])
# Print the various groups of behavioral indicators (and attributes) that are available for each user.
# You will use the first user's data in the feature list for this.
[key for key,value in features[0].items()]
Explanation: Before moving on, take a quick look at the results of the pipeline.
6.2.1 Review the data for the first user
Keep in mind that, in manual approaches, you would likely have to create each of these features by hand. The process entails thinking about features, and reviewing available literature to identify applicable features. These features are used in machine learning techniques (including feature selection) for various use cases.
Note:
The section below will display a large number of features for the first user. You do not need to review them in detail. Here, the intention is to emphasize the ease of creating features, and the advantages of computationally-optimized functions. These are extremely useful when scaling your analyses to large record sets (such as those typically found in the telecommunications industry).
6.2.2 Review the features list
End of explanation |
11,891 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ARTICLE CLUSTERS
Cluster Inertia
Cluster article data and compute inertia as a function of cluster number
Step1: Silhouette Score
Cluster article data and compute the silhouette score as a function of cluster number
Step2: Cross-Validation
Split data set for cross-validation
Step3: Clustering consistency between the full and partial data sets
Step4: Number of Clusters = 4 | Python Code:
from sklearn import cluster
import pandas as pd
import numpy as np
import pickle
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
num_topics = 20
doc_data = pickle.load(open('pub_probabs_topic'+str(num_topics)+'.pkl','rb'))
lda_topics = ['topic'+str(i) for i in range(0,num_topics)]
cluster_dims = ['source','trust'] + lda_topics
cluster_data = doc_data[cluster_dims].values
# Inertia (within-cluster sum of squares criterion) is a measure of how internally coherent clusters are
MAX_K = 10
ks = range(1,MAX_K+1)
inertias = np.zeros(MAX_K)
for k in ks:
kmeans = cluster.KMeans(k).fit(cluster_data)
inertias[k-1] = kmeans.inertia_
with sns.axes_style("whitegrid"):
plt.plot(ks, inertias)
plt.ylabel("Average inertia")
plt.xlabel("Number of clusters")
plt.show()
Explanation: ARTICLE CLUSTERS
Cluster Inertia
Cluster article data and compute inertia as a function of cluster number
End of explanation
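To make the elbow choice a little less visual, one simple hedged heuristic is to look at the second differences of the inertia curve; the sketch below is illustrative only and does not replace judgment:
```
# Rough elbow heuristic: the k where the inertia curve bends the most
# (largest second difference of the inertia values computed above).
second_diff = np.diff(inertias, n=2)
elbow_k = ks[int(np.argmax(second_diff)) + 1]
print("Suggested elbow at k =", elbow_k)
```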
from sklearn.metrics import silhouette_score
import random
num_topics = 20
doc_data = pickle.load(open('pub_probabs_topic'+str(num_topics)+'.pkl','rb'))
lda_topics = ['topic'+str(i) for i in range(0,num_topics)]
cluster_dims = ['source','trust'] + lda_topics
cluster_data = doc_data[cluster_dims].values
# The silhouette score is a measure of the density and separation of the formed clusters
seed = 42
MAX_K = 10
ks = range(1,MAX_K+1)
silhouette_avg = []
for i,k in enumerate(ks[1:]):
kmeans = cluster.KMeans(n_clusters=k,random_state=seed).fit(cluster_data)
kmeans_clusters = kmeans.predict(cluster_data)
silhouette_avg.append(silhouette_score(cluster_data,kmeans_clusters))
with sns.axes_style("whitegrid"):
plt.plot(ks[1:], silhouette_avg)
plt.ylabel("Average silhouette score")
plt.xlabel("Number of clusters")
plt.ylim([0.0,1.0])
plt.show()
Explanation: Silhouette Score
Cluster article data and compute the silhouette score as a function of cluster number
End of explanation
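For a finer-grained view than the averaged score, scikit-learn also exposes per-sample silhouette values. A hedged sketch for a single candidate number of clusters:
```
from sklearn.metrics import silhouette_samples
# Per-sample silhouette values for one candidate k, useful for spotting clusters
# whose members sit close to a neighbouring cluster.
k = 4
labels = cluster.KMeans(n_clusters=k, random_state=seed).fit_predict(cluster_data)
sample_scores = silhouette_samples(cluster_data, labels)
for c in range(k):
    print("cluster {}: mean silhouette {:.3f}".format(c, sample_scores[labels == c].mean()))
```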
num_topics = 20
doc_data = pickle.load(open('pub_probabs_topic'+str(num_topics)+'.pkl','rb'))
lda_topics = ['topic'+str(i) for i in range(0,num_topics)]
cluster_dims = ['source','trust'] + lda_topics
cluster_data = doc_data[cluster_dims].values
num_folds = 5
seed = 42
np.random.seed(seed)
np.random.shuffle(cluster_data) # Shuffles in-place
cluster_data = np.split(cluster_data[0:-1,:],num_folds) # Make divisible by 10
train_data,test_data= [],[]
for hold in range(num_folds):
keep = [i for i in list(range(num_folds)) if i != hold]
train = [cluster_data[i] for i in keep]
test = cluster_data[hold]
train_data.append(np.vstack(train))
test_data.append(test)
full = [cluster_data[i] for i in list(range(num_folds))]
full_data = np.vstack(full)
Explanation: Cross-Validation
Split data set for cross-validation
End of explanation
MAX_K = 10
ks = range(1,MAX_K+1)
kmeans_accuracy = []
for k in ks:
full_kmeans = cluster.KMeans(n_clusters=k,random_state=seed).fit(full_data)
accuracy = []
for fold in range(num_folds):
train_kmeans = cluster.KMeans(n_clusters=k,random_state=seed).fit(train_data[fold])
test_labels = train_kmeans.predict(test_data[fold])
full_labels = np.split(full_kmeans.labels_,num_folds)[fold]
accuracy.append(1.0 * np.sum(np.equal(full_labels,test_labels)) / len(test_labels))
kmeans_accuracy.append(np.mean(accuracy))
with sns.axes_style("whitegrid"):
plt.plot(ks, kmeans_accuracy)
plt.ylabel("Average accuracy")
plt.xlabel("Number of clusters")
plt.ylim([0.0,1.0])
plt.show()
Explanation: Clustering consistency between the full and partial data sets
End of explanation
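Because k-means can return the same partition with permuted label numbers, the direct label-equality accuracy above can understate the agreement. A permutation-invariant measure such as the adjusted Rand index is a useful complement; a hedged sketch using k = 4, the value settled on just below:
```
from sklearn.metrics import adjusted_rand_score
# Compare full-data and held-out-fold cluster assignments independently of label numbering.
k = 4
full_kmeans = cluster.KMeans(n_clusters=k, random_state=seed).fit(full_data)
ari_scores = []
for fold in range(num_folds):
    train_kmeans = cluster.KMeans(n_clusters=k, random_state=seed).fit(train_data[fold])
    test_labels = train_kmeans.predict(test_data[fold])
    full_labels = np.split(full_kmeans.labels_, num_folds)[fold]
    ari_scores.append(adjusted_rand_score(full_labels, test_labels))
print("Mean adjusted Rand index:", np.mean(ari_scores))
```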
num_clusters = 4
kmeans = cluster.KMeans(n_clusters=num_clusters,random_state=seed).fit(full_data)
kmeans_labels = kmeans.labels_
kmeans_centroids = kmeans.cluster_centers_
# 0 = mostly liberal, 1 = mostly conservative, 2 = mixed liberal, 3 = mixed conservative
kmeans_distances = kmeans.transform(full_data)
pickle.dump([kmeans,kmeans_labels,kmeans_centroids,kmeans_distances],
open('pub_kmeans_clean_cluster'+str(num_clusters)+'.pkl','wb'))
Explanation: Number of Clusters = 4
End of explanation |
11,892 | Given the following text description, write Python code to implement the functionality described.
Description:
Tribonacci Numbers
Program to print first n tribonacci numbers Matrix Multiplication function for 3 * 3 matrix ; Recursive function to raise the matrix T to the power n ; base condition . ; recursively call to square the matrix ; calculating square of the matrix T ; if n is odd multiply it one time with M ; base condition ; T [ 0 ] [ 0 ] contains the tribonacci number so return it ; Driver Code
| Python Code:
def multiply(T , M ) :
a =(T[0 ][0 ] * M[0 ][0 ] + T[0 ][1 ] * M[1 ][0 ] + T[0 ][2 ] * M[2 ][0 ] )
b =(T[0 ][0 ] * M[0 ][1 ] + T[0 ][1 ] * M[1 ][1 ] + T[0 ][2 ] * M[2 ][1 ] )
c =(T[0 ][0 ] * M[0 ][2 ] + T[0 ][1 ] * M[1 ][2 ] + T[0 ][2 ] * M[2 ][2 ] )
d =(T[1 ][0 ] * M[0 ][0 ] + T[1 ][1 ] * M[1 ][0 ] + T[1 ][2 ] * M[2 ][0 ] )
e =(T[1 ][0 ] * M[0 ][1 ] + T[1 ][1 ] * M[1 ][1 ] + T[1 ][2 ] * M[2 ][1 ] )
f =(T[1 ][0 ] * M[0 ][2 ] + T[1 ][1 ] * M[1 ][2 ] + T[1 ][2 ] * M[2 ][2 ] )
g =(T[2 ][0 ] * M[0 ][0 ] + T[2 ][1 ] * M[1 ][0 ] + T[2 ][2 ] * M[2 ][0 ] )
h =(T[2 ][0 ] * M[0 ][1 ] + T[2 ][1 ] * M[1 ][1 ] + T[2 ][2 ] * M[2 ][1 ] )
i =(T[2 ][0 ] * M[0 ][2 ] + T[2 ][1 ] * M[1 ][2 ] + T[2 ][2 ] * M[2 ][2 ] )
T[0 ][0 ] = a
T[0 ][1 ] = b
T[0 ][2 ] = c
T[1 ][0 ] = d
T[1 ][1 ] = e
T[1 ][2 ] = f
T[2 ][0 ] = g
T[2 ][1 ] = h
T[2 ][2 ] = i
def power(T , n ) :
if(n == 0 or n == 1 ) :
return ;
M =[[ 1 , 1 , 1 ] ,[1 , 0 , 0 ] ,[0 , 1 , 0 ] ]
power(T , n // 2 )
multiply(T , T )
if(n % 2 ) :
multiply(T , M )
def tribonacci(n ) :
T =[[ 1 , 1 , 1 ] ,[1 , 0 , 0 ] ,[0 , 1 , 0 ] ]
if(n == 0 or n == 1 ) :
return 0
else :
power(T , n - 2 )
return T[0 ][0 ]
if __name__ == "__main__":
    n = 10
    for i in range(n):
        print(tribonacci(i), end=" ")
    print()
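As a quick sanity check on the matrix-power implementation above, a plain iterative tribonacci can be compared against it (hedged sketch, added here for illustration):
```
# Simple O(n) iterative tribonacci used to cross-check the matrix-power version.
def tribonacci_iterative(n):
    a, b, c = 0, 0, 1   # T(0), T(1), T(2)
    if n < 2:
        return 0
    for _ in range(n - 2):
        a, b, c = b, c, a + b + c
    return c

assert all(tribonacci(i) == tribonacci_iterative(i) for i in range(10))
```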
|
11,893 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter
Step1: Lesson
Step2: Project 1
Step5: Transforming Text into Numbers | Python Code:
def pretty_print_review_and_label(i):
print(labels[i] + "\t:\t" + reviews[i][:80] + "...")
g = open('reviews.txt','r') # What we know!
reviews = list(map(lambda x:x[:-1],g.readlines()))
g.close()
g = open('labels.txt','r') # What we WANT to know!
labels = list(map(lambda x:x[:-1].upper(),g.readlines()))
g.close()
len(reviews)
reviews[0]
labels[0]
Explanation: Sentiment Classification & How To "Frame Problems" for a Neural Network
by Andrew Trask
Twitter: @iamtrask
Blog: http://iamtrask.github.io
What You Should Already Know
neural networks, forward and back-propagation
stochastic gradient descent
mean squared error
and train/test splits
Where to Get Help if You Need it
Re-watch previous Udacity Lectures
Leverage the recommended Course Reading Material - Grokking Deep Learning (40% Off: traskud17)
Shoot me a tweet @iamtrask
Tutorial Outline:
Intro: The Importance of "Framing a Problem"
Curate a Dataset
Developing a "Predictive Theory"
PROJECT 1: Quick Theory Validation
Transforming Text to Numbers
PROJECT 2: Creating the Input/Output Data
Putting it all together in a Neural Network
PROJECT 3: Building our Neural Network
Understanding Neural Noise
PROJECT 4: Making Learning Faster by Reducing Noise
Analyzing Inefficiencies in our Network
PROJECT 5: Making our Network Train and Run Faster
Further Noise Reduction
PROJECT 6: Reducing Noise by Strategically Reducing the Vocabulary
Analysis: What's going on in the weights?
Lesson: Curate a Dataset
End of explanation
print("labels.txt \t : \t reviews.txt\n")
pretty_print_review_and_label(2137)
pretty_print_review_and_label(12816)
pretty_print_review_and_label(6267)
pretty_print_review_and_label(21934)
pretty_print_review_and_label(5297)
pretty_print_review_and_label(4998)
Explanation: Lesson: Develop a Predictive Theory
End of explanation
from collections import Counter
import numpy as np
positive_counts = Counter()
negative_counts = Counter()
total_counts = Counter()
for i in range(len(reviews)):
if(labels[i] == 'POSITIVE'):
for word in reviews[i].split(" "):
positive_counts[word] += 1
total_counts[word] += 1
else:
for word in reviews[i].split(" "):
negative_counts[word] += 1
total_counts[word] += 1
positive_counts.most_common()
pos_neg_ratios = Counter()
for term,cnt in list(total_counts.most_common()):
if(cnt > 100):
pos_neg_ratio = positive_counts[term] / float(negative_counts[term]+1)
pos_neg_ratios[term] = pos_neg_ratio
for word,ratio in pos_neg_ratios.most_common():
if(ratio > 1):
pos_neg_ratios[word] = np.log(ratio)
else:
pos_neg_ratios[word] = -np.log((1 / (ratio+0.01)))
# words most frequently seen in a review with a "POSITIVE" label
pos_neg_ratios.most_common()
# words most frequently seen in a review with a "NEGATIVE" label
list(reversed(pos_neg_ratios.most_common()))[0:30]
Explanation: Project 1: Quick Theory Validation
End of explanation
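A quick hedged way to check that the log ratios separate the two sentiment classes is to histogram them, reusing the Counter built above:
```
import matplotlib.pyplot as plt
%matplotlib inline
# Histogram of the log positive/negative ratios for the frequent words.
plt.hist(list(pos_neg_ratios.values()), bins=50)
plt.xlabel('log(pos/neg ratio)')
plt.ylabel('word count')
plt.show()
```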
from IPython.display import Image
review = "This was a horrible, terrible movie."
Image(filename='sentiment_network.png')
review = "The movie was excellent"
Image(filename='sentiment_network_pos.png')
vocab = set(total_counts.keys())
vocab_size = len(vocab)
print(vocab_size)
layer_0 = np.zeros((1, vocab_size))
layer_0
word2Index = {}
for i, word in enumerate(vocab):
word2Index[word] = i
word2Index
def update_input_layer(review):
    """
    Modify the global layer_0 to represent the vector form of review.
    The element at a given index of layer_0 should represent
    how many times the given word occurs in the review.
    Args:
        review(string) - the string of the review
    Returns:
        None
    """
    global layer_0
    # clear out previous state, reset the layer to be all 0s
    layer_0 *= 0
    # count how many times each word appears in the review
    for word in review.split(" "):
        layer_0[0][word2Index[word]] += 1
update_input_layer(reviews[0])
layer_0
def get_target_for_label(label):
    """
    Convert a label to `0` or `1`.
    Args:
        label(string) - Either "POSITIVE" or "NEGATIVE".
    Returns:
        `0` or `1`.
    """
    if label == 'POSITIVE':
        return 1
    else:
        return 0
get_target_for_label(labels[0])
labels[1]
get_target_for_label(labels[1])
Explanation: Transforming Text into Numbers
End of explanation |
11,894 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature Engineering for Time Series Problems
Time series forecasting consists of predicting future values of a target using earlier observations. In datasets that are used in time series problems, there is an inherent temporal ordering to the data (determined by a time index), and the sequential target values we're predicting are highly dependent on one another. Feature engineering for time series problems exploits the fact that more recent observations are more predictive than more distant ones.
This guide will explore how to use Featuretools for automating feature engineering for univariate time series problems, or problems in which only the time index and target column are included.
We'll be working with a temperature demo EntitySet that contains one DataFrame, temperatures. The temperatures dataframe contains the minimum daily temperatures that we will be predicting. In total, it has three columns
Step1: Understanding The Feature Engineering Window
In multi-table datasets, a feature engineering window for a single row in the target DataFrame extends forward in time over observations in child DataFrames starting at the time index and ending when either the cutoff time or last time index is reached.
In single-table time series datasets, the feature engineering window for a single value extends backwards in time within the same column. Because of this, the concepts of cutoff time and last time index are not relevant in the same way.
For example
Step2: With these two parameters (gap and window_length) set, we have defined our feature engineering window. Now, we can move onto defining our feature primitives.
Time Series Primitives
There are three types of primitives we'll focus on for time series problems. One of them will extract features from the time index, and the other two types will extract features from our target column.
Datetime Transform Primitives
We need a way of incorporating time into our time series features. Yes, using recent temperatures is incredibly predictive in determining future temperatures, but there is also a whole host of historical data suggesting that the month of the year is a pretty good indicator for the temperature outside. However, if we look at the data, we'll see that, though the day changes, the observations are always taken at the same hour, so the Hour primitive will not likely be useful. Of course, in a dataset that is measured at an hourly frequency or one more granular, Hour may be incredibly predictive.
Step3: The full list of datetime transform primitives can be seen here.
Delaying Primitives
The simplest thing we can do with our target column is to build features that are delayed (or lagging) versions of the target column. We'll make one feature per observation in our feature engineering windows, so we'll range over time from t - gap - window_length to t - gap.
For this purpose, we can use our NumericLag primitive and create one primitive for each instance in our window.
Step4: Rolling Transform Primitives
Since we have access to the entire feature engineering window, we can aggregate over that window. Featuretools has several rolling primitives with which we can achieve this. Here, we'll use the RollingMean and RollingMin primitives, setting the gap and window_length accordingly. The gap is incredibly important here, because when the gap is zero, the current observation's target value is present in the window, which exposes our target.
This concern also exists for other primitives that reference earlier values in the dataframe. Because of this, when using primitives for time series feature engineering, one must be incredibly careful to not use primitives on the target column that incorporate the current observation when calculating a feature value.
Step5: The full list of rolling transform primitives can be seen here.
Run DFS
Now that we've defined our time series primitives, we can pass them into DFS and get our feature matrix!
Let's take a look at an actual feature engineering window as we defined with gap and window_length above. Below is an example of how we can extract many features using the same feature engineering window without exposing our target value.
With the image above, we see how all of our defined primitives get used to create many features from just the two columns we have access to. | Python Code:
es = load_weather()
es['temperatures'].head(10)
es['temperatures']['Temp'].plot(ylabel='Temp (C)')
Explanation: Feature Engineering for Time Series Problems
Time series forecasting consists of predicting future values of a target using earlier observations. In datasets that are used in time series problems, there is an inherent temporal ordering to the data (determined by a time index), and the sequential target values we're predicting are highly dependent on one another. Feature engineering for time series problems exploits the fact that more recent observations are more predictive than more distant ones.
This guide will explore how to use Featuretools for automating feature engineering for univariate time series problems, or problems in which only the time index and target column are included.
We'll be working with a temperature demo EntitySet that contains one DataFrame, temperatures. The temperatures dataframe contains the minimum daily temperatures that we will be predicting. In total, it has three columns: id, Temp, and Date. The id column is the index that is necessary for Featuretools' purposes. The other two are important for univariate time series problems: Date is our time index, and Temp is our target column. The engineered features will be built from these two columns.
End of explanation
gap = 7
window_length = 5
Explanation: Understanding The Feature Engineering Window
In multi-table datasets, a feature engineering window for a single row in the target DataFrame extends forward in time over observations in child DataFrames starting at the time index and ending when either the cutoff time or last time index is reached.
In single-table time series datasets, the feature engineering window for a single value extends backwards in time within the same column. Because of this, the concepts of cutoff time and last time index are not relevant in the same way.
For example: The cutoff time for a single-table time series dataset would create the training and test data split. During DFS, features would not be calculated after the cutoff time. This same behavior can often be achieved more simply by splitting the data prior to creating the EntitySet, since filtering the data at feature matrix calculation is more computationally intensive than splitting the data ahead of time.
```
split_point = int(df.shape[0]*.7)
training_data = df[:split_point]
test_data = df[split_point:]
```
So, since we can't use the existing parameters for defining each observation's feature engineering window, we'll need to define the new concepts of gap and window_length. These will allow us to set a feature engineering window that exists prior to each observation.
Gap and Window Length
Note that we will be using integers when defining the gap and window length. This implies that our data occurs at evenly spaced intervals--in this case daily--so a number n corresponds to n days. Support for unevenly spaced intervals is ongoing and can be explored with the Woodwork method df.ww.infer_temporal_frequencies.
If we are at a point in time t, we have access to information from times less than t (past values), and we do not have information from times greater than t (future values). Our limitations in feature engineering, then, will come from when exactly before t we have access to the data.
Consider an example where we're recording data that takes a week to ingest; the earliest data we have access to is from seven days ago, or t - 7. We'll call this our gap. A gap of 0 would include the instance itself, which we must be careful to avoid in time series problems, as this exposes our target.
We also need to determine how far back in time before t - 7 we can go. Too far back, and we may lose the potency of our recent observations, but too recent, and we may not capture the full spectrum of behaviors displayed by the data. In this example, let's say that we only want to look at 5 days worth of data at a time. We'll call this our window_length.
End of explanation
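To make the window concrete, the small hedged sketch below lists, for an observation at position t in a daily series, which earlier positions fall inside the feature engineering window defined by gap = 7 and window_length = 5:
```
# Illustrative only: indices of the observations available to build features
# for position t, given the gap and window_length defined above.
def window_indices(t, gap=7, window_length=5):
    stop = t - gap                      # most recent usable observation
    start = stop - window_length + 1    # oldest observation in the window
    return list(range(start, stop + 1))

print(window_indices(20))  # -> [9, 10, 11, 12, 13]
```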
datetime_primitives = ["Day", "Year", "Weekday", "Month"]
Explanation: With these two parameters (gap and window_length) set, we have defined our feature engineering window. Now, we can move onto defining our feature primitives.
Time Series Primitives
There are three types of primitives we'll focus on for time series problems. One of them will extract features from the time index, and the other two types will extract features from our target column.
Datetime Transform Primitives
We need a way of incorporating time into our time series features. Yes, using recent temperatures is incredibly predictive in determining future temperatures, but there is also a whole host of historical data suggesting that the month of the year is a pretty good indicator for the temperature outside. However, if we look at the data, we'll see that, though the day changes, the observations are always taken at the same hour, so the Hour primitive will not likely be useful. Of course, in a dataset that is measured at an hourly frequency or one more granular, Hour may be incredibly predictive.
End of explanation
delaying_primitives = [NumericLag(periods=i + gap) for i in range(window_length)]
Explanation: The full list of datetime transform primitives can be seen here.
Delaying Primitives
The simplest thing we can do with our target column is to build features that are delayed (or lagging) versions of the target column. We'll make one feature per observation in our feature engineering windows, so we'll range over time from t - gap - window_length to t - gap.
For this purpose, we can use our NumericLag primitive and create one primitive for each instance in our window.
End of explanation
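For intuition, roughly equivalent lagged columns could also be produced by hand with pandas' shift. This is a hedged sketch only (it assumes the rows are ordered by the time index), not part of the Featuretools workflow:
```
# Hand-rolled lag features, similar in spirit to the NumericLag primitives above.
temps = es['temperatures'].copy()
for periods in range(gap, gap + window_length):
    temps['temp_lag_{}'.format(periods)] = temps['Temp'].shift(periods)
temps.head(15)
```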
rolling_mean_primitive = RollingMean(window_length=window_length,
gap=gap,
min_periods=window_length)
rolling_min_primitive = RollingMin(window_length=window_length,
gap=gap,
min_periods=window_length)
Explanation: Rolling Transform Primitives
Since we have access to the entire feature engineering window, we can aggregate over that window. Featuretools has several rolling primitives with which we can achieve this. Here, we'll use the RollingMean and RollingMin primitives, setting the gap and window_length accordingly. The gap is incredibly important here, because when the gap is zero, the current observation's target value is present in the window, which exposes our target.
This concern also exists for other primitives that reference earlier values in the dataframe. Because of this, when using primitives for time series feature engineering, one must be incredibly careful to not use primitives on the target column that incorporate the current observation when calculating a feature value.
End of explanation
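The same leakage-safe behaviour can be mimicked in plain pandas by combining rolling with a shift of gap rows, which makes the role of the gap explicit. Again a hedged sketch, assuming the rows are ordered by the time index:
```
# Rolling mean over the window, then shifted by `gap` rows so the current
# (target) observation never enters its own feature value.
temp_series = es['temperatures']['Temp']
safe_rolling_mean = (temp_series
                     .rolling(window=window_length, min_periods=window_length)
                     .mean()
                     .shift(gap))
safe_rolling_mean.head(15)
```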
fm, f = ft.dfs(entityset=es,
target_dataframe_name='temperatures',
trans_primitives = (datetime_primitives +
delaying_primitives +
[rolling_mean_primitive, rolling_min_primitive]),
cutoff_time=pd.Timestamp('1987-1-30')
)
f
fm.iloc[:,[0,2, 6, 7, 8, 9]].head(15)
Explanation: The full list of rolling transform primitives can be seen here.
Run DFS
Now that we've defined our time series primitives, we can pass them into DFS and get our feature matrix!
Let's take a look at an actual feature engineering window as we defined with gap and window_length above. Below is an example of how we can extract many features using the same feature engineering window without exposing our target value.
With the image above, we see how all of our defined primitives get used to create many features from just the two columns we have access to.
End of explanation |
11,895 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Micromagnetic standard problem 4
Author
Step1: We set all the necessary simulation parameters.
Step2: First stage
In the first stage, we relax the system at zero external magnetic field.
Required modules are imported
Step3: Now, atlas, mesh, and simulation objects are created.
Step4: Energy terms (exchange and demagnetisation) are added to the system's Hamiltonian.
Step5: System is initialised so that the magnetisation at all mesh cells is $(1, 0.25, 0.1)$.
Step6: At this point, system can be relaxed. We relax the system by simulating the magnetisation time evolution for $5 \,\text{ns}$ with default value of Gilbert damping $\alpha = 1$.
Step7: Using the magnetisation field object, we can compute the magnetisation average, sample the magnetisation at the point, or plot the magnetisation slice of the sample.
Step8: A more extensive list of the system's parameters can be obtained using
Step9: Second stage
In the second stage, the relaxed state from the first stage is used as an initial state. Now, the external magnetic field is applied $\mathbf{H}_{1} = (-24.6, 4.3, 0.0) \,\text{mT}$.
Step10: In this stage, we use a smaller value of Gilbert damping in comparison to the first stage, where default value (alpha=1) was used.
Step11: Now, we can run the simulation for $1 \,\text{ns}$ and save the magnetisation field at 200 time steps.
Step12: Postprocessing
A detailed table of all computed parameters from the multiple stage simulation can be shown from the pandas dataframe.
Step13: A single computed parameter (average my and simulation time in this case) can be extracted as an array using
Step14: After obtaining the average magnetisation, we can plot it and compare it to the results already reported in Ref. 1.
!rm -rf standard_problem4/
Explanation: Micromagnetic standard problem 4
Author: Marijan Beg
Date: 11 May 2016
Problem specification
The simulated sample is a thin film cuboid with dimensions:
- length $L_{x} = 500 \,\text{nm}$,
- width $L_{y} = 125 \,\text{nm}$, and
- thickness $t = 3 \,\text{nm}$.
The material parameters (similar to permalloy) are:
exchange energy constant $A = 1.3 \times 10^{-11} \,\text{J/m}$,
magnetisation saturation $M_\text{s} = 8 \times 10^{5} \,\text{A/m}$.
Magnetisation dynamics is governed by the Landau-Lifshitz-Gilbert equation
$$\frac{d\mathbf{m}}{dt} = -\gamma_{0}(\mathbf{m} \times \mathbf{H}_\text{eff}) + \alpha\left(\mathbf{m} \times \frac{d\mathbf{m}}{dt}\right)$$
where $\gamma_{0} = 2.211 \times 10^{5} \,\text{m}\,\text{A}^{-1}\,\text{s}^{-1}$ is the gyromagnetic ratio and $\alpha=0.02$ is the Gilbert damping.
In the standard problem 4, the system is firstly relaxed at zero external magnetic field and then, stating from the obtained equlibrium configuration, the magnetisation dynamics is simulated for the external magnetic field $\mathbf{H} = (-24.6, 4.3, 0.0) \,\text{mT}$.
The micromagnetic standard problem 4 specification can be also found in Ref. 1.
Simulation
End of explanation
import numpy as np
mu0 = 4*np.pi*1e-7 # magnetic constant (H/m)
# Sample parameters.
Lx = 500e-9 # x dimension of the sample(m)
Ly = 125e-9 # y dimension of the sample (m)
thickness = 3e-9 # sample thickness (m)
dx = dy = 5e-9 # discretisation in x and y directions (m)
dz = 3e-9 # discretisation in the z direction (m)
cmin = (0, 0, 0) # Minimum sample coordinate.
cmax = (Lx, Ly, thickness) # Maximum sample coordinate.
d = (dx, dy, dz) # Discretisation.
# Material (permalloy) parameters.
Ms = 8e5 # saturation magnetisation (A/m)
A = 1.3e-11 # exchange energy constant (J/m)
Explanation: We set all the necessary simulation parameters.
End of explanation
import sys
sys.path.append('../')
from sim import Sim
from atlases import BoxAtlas
from meshes import RectangularMesh
from energies.exchange import UniformExchange
from energies.demag import Demag
from energies.zeeman import FixedZeeman
Explanation: First stage
In the first stage, we relax the system at zero external magnetic field.
Required modules are imported:
End of explanation
atlas = BoxAtlas(cmin, cmax) # Create an atlas object.
mesh = RectangularMesh(atlas, d) # Create a mesh object.
sim = Sim(mesh, Ms, name='standard_problem4') # Create a simulation object.
Explanation: Now, atlas, mesh, and simulation objects are created.
End of explanation
sim.add(UniformExchange(A)) # Add exchange energy.
sim.add(Demag()) # Add demagnetisation energy.
Explanation: Energy terms (exchange and demagnetisation) are added to the system's Hamiltonian.
End of explanation
sim.set_m((1, 0.25, 0.1)) # initialise the magnetisation
Explanation: System is initialised so that the magnetisation at all mesh cells is $(1, 0.25, 0.1)$.
End of explanation
sim.run_until(5e-9) # run time evolution for 5 ns with default value of Gilbert damping (alpha=1)
Explanation: At this point, system can be relaxed. We relax the system by simulating the magnetisation time evolution for $5 \,\text{ns}$ with default value of Gilbert damping $\alpha = 1$.
End of explanation
# Compute the average magnetisation.
print 'The average magnetisation is', sim.m_average()
# Sample the magnetisation at the point.
c = (50e-9, 75e-9, 1e-9)
print 'The magnetisation at point {} is {}'.format(c, sim.m(c))
# Plot the slice.
%matplotlib inline
sim.m.plot_slice('z', 1e-9)
Explanation: Using the magnetisation field object, we can compute the magnetisation average, sample the magnetisation at the point, or plot the magnetisation slice of the sample.
End of explanation
print sim.data
Explanation: A more extensive list of the system's parameters can be obtained using:
End of explanation
# Add Zeeman energy.
H = np.array([-24.6, 4.3, 0.0])*1e-3 / mu0 # external magnetic field in the first stage
sim.add(FixedZeeman(H))
Explanation: Second stage
In the second stage, the relaxed state from the first stage is used as an initial state. Now, the external magnetic field is applied $\mathbf{H}_{1} = (-24.6, 4.3, 0.0) \,\text{mT}$.
End of explanation
sim.alpha = 0.02
Explanation: In this stage, we use a smaller value of Gilbert damping in comparison to the first stage, where default value (alpha=1) was used.
End of explanation
T = 1e-9
stages = 200
sim.run_until(T, stages)
Explanation: Now, we can run the simulation for $1 \,\text{ns}$ and save the magnetisation field at 200 time steps.
End of explanation
sim.odt_file.df.head(10)
Explanation: Postprocessing
A detailed table of all computed parameters from the multiple stage simulation can be shown from the pandas dataframe.
End of explanation
my_average = sim.odt_file.df['my'].as_matrix()
t_array = sim.odt_file.df['Simulationtime'].as_matrix()
Explanation: A single computed parameter (average my and simulation time in this case) can be extracted as an array using:
End of explanation
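Before plotting, a quick hedged check that is often made for this problem is to estimate when the plotted average component first changes sign, using the arrays just extracted:
```
# Estimate when <my> first changes sign, by linear interpolation between the
# two samples that bracket the crossing (illustrative check only).
sign_change = np.where(np.diff(np.sign(my_average)) != 0)[0]
if sign_change.size:
    i = sign_change[0]
    y0, y1 = my_average[i], my_average[i+1]
    t0, t1 = t_array[i], t_array[i+1]
    t_cross = t0 - y0 * (t1 - t0) / (y1 - y0)
    print('First <my> sign change at t = {:.4g} ns'.format(t_cross/1e-9))
else:
    print('<my> does not change sign in the simulated window.')
```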
import matplotlib.pyplot as plt
# Plot the <my> time evolution.
plt.plot(t_array/1e-9, my_average)
plt.xlabel('t (ns)')
plt.ylabel('<my>')
plt.grid()
Explanation: After obtaining the average magnetisation, we can plot it and compare it to the results already reported in Ref. 1.
End of explanation |
11,896 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
2 - Getting started with SampleData
Step1: Before starting to create our datasets, we will take a look at the SampleData class documentation, to discover the arguments of the class constructor. You can read it on the pymicro.core package API doc page, or print it interactively by executing
Step2: That is it. The class has created a new HDF5/XDMF pair of files, and associated the interface with this dataset to the variable data. No message has been returned by the code, how can we know that the dataset has been created ?
When the name of the file is not an absolute path, the default behavior of the class is to create the dataset in the current working directory. Let us print the content of this directory, then!
Step3: The two files my_first_dataset.h5 and my_first_dataset.xdmf have indeed been created.
If you want interactive prints about the dataset creation, you can set the verbose argument to True. This will activate the verbose mode of the class. When it is active, the class instance prints a lot of information about what it is doing. This flag can be set by using the set_verbosity method
Step4: Let us now close our dataset, and see if the class instance prints information about it
Step5: <div class="alert alert-info">
**Note**
It is a good practice to always delete your `SampleData` instances once you are done working with a dataset, or if you want to re-open it. As the class instance handles opened files as long as it exists, deleting it ensures that the files are properly closed. Otherwise, file may close at some random times or stay opened, and you may encounter undesired behavior of your datasets.
</div>
The class indeed returns some prints during the instance destruction. As you can see, the class instance writes data into the pair of files, and then closes the dataset instance and the files.
Dataset opening and verbose mode
Let us now try to create a new SD instance for the same dataset file "my_first_dataset", with the verbose mode on. As the dataset files (HDF5, XDMF) already exist, this new SampleData instance will open the dataset files and synchronize with them. When activated, SampleData class instances will display messages about the actions performed by the class (creating or deleting data items, for instance)
Step6: You can see that the printed information states that the dataset file my_first_dataset.h5 has been opened, and not created. This second instantiation of the class has not created a new dataset, but instead, has opened the one that we have just closed. Indeed, in that case, we provided a filename that already existed.
Some information about the dataset content is also printed by the class. This information can be retrieved with specific methods that will be detailed in the next section of this Notebook. Let us focus for now on one part of it.
The printed info reveals that our dataset content is composed only of one object, a Group data object named /. This group is the Root Group of the dataset. Each dataset necessarily has a Root Group, automatically created along with the dataset. You can see that this Group already has a Child, named Index. This particular data object will be presented in the third section of this Notebook. You can also observe that the Root Group already has attributes (recall from the introduction Notebook that they are Name/Value pairs used to store metadata in datasets). Two of those attributes match arguments of the SampleData class constructor
Step7: Overwriting datasets
The overwrite_hdf5 argument of the class constructor, if it is set to True, will remove the filename dataset and create a new empty one, if this dataset already exists
Step8: As you can see, the dataset files have been overwritten, as requested. We will now close our dataset again and continue to see the possibilities offered by the class constructor.
Step9: Copying dataset
One last thing that may be interesting to do with already existing dataset files is to create a new dataset that is a copy of them, associated with a new class instance. This is useful, for instance, when you have to try new processing on a set of valuable data, without risking damage to the data.
To do this, you may use the copy_sample method of the SampleData class. Its main arguments are
Step10: The copy_dataset HDF5 and XDMF files have indeed been created, and are a copy of the my_first_dataset HDF5 and XDMF files.
Note that copy_sample is a static method that can be called even without a SampleData instance. Note also that it has an overwrite argument that allows overwriting an already existing dst_sample_file. It also has, like the class constructor, an autodelete argument, which we will discover in the next subsection.
Automatically removing dataset files
On some occasions, we may want to remove our dataset files after using our SampleData class instance. This can be the case, for instance, if you are trying some new data processing, or using the class for visualization purposes, and are not interested in keeping your test data.
The class has an autodelete attribute for this purpose. If it is set to True, the class destructor will remove the dataset file pair in addition to deleting the class instance. The class constructor and the copy_sample method also have an autodelete argument, which, if True, will automatically set the class instance autodelete attribute to True.
To illustrate this feature, we will try to change the autodelete attribute of our copied dataset to True, and remove it.
Step11: The class destructor ends by printing a confirmation message of the dataset files removal in verbose mode, as you can see in the cell above.
Let us verify that it has been effectively deleted
Step12: As you can see, the dataset files have been removed. Now we can also open and remove our first created dataset using the class constructor autodelete option
Step13: Now, you know how to create or open SampleData datasets. Before starting to explore their content in detail, a last feature of the SampleData class must be introduced
Step14: 1- The Dataset Index
As explained in the previous section, all data items have a Path and an Indexname. The collection of Indexname/Path pairs forms the Index of the dataset. For each SampleData dataset, an Index Group is stored in the root Group, and the collection of those pairs is stored as attributes of this Index Group. Additionally, a class attribute content_index stores them as a dictionary in the class instance, and allows easy access to them. The dictionary and the Index Group attributes are automatically synchronized by the class.
Let us take a look at the dictionary content
Step15: You should see the dictionary keys, which are names of data items, and the associated values, which are HDF5 paths. You can also see the data item Names at the end of their Paths. The data item aliases are also stored in a dictionary, that is an attribute of the class, named aliases
Step16: You can see that this dictionary contains keys only for data items that have additional names, and also that those keys are the data item indexnames.
The dataset index can be printed together with the aliases, with a prettier aspect, by calling the method print_index
Step17: This method prints the content of the dataset Index, with a given depth and from a specific root. The depth is the number of parents that a data item has. The root Group has thus a depth of 0, its children a depth of 1, the children of its children a depth of 2, and so on... The local root argument can be changed, to print only the Index for data items that are children of a specific group. When used without arguments, print_index uses a depth of 3 and the dataset root as default settings.
As you can see, our dataset already contains some data items. We can already identify at least 3 HDF5 Groups (test_group, test_image, test_mesh), as they have children, and a lot of other data items.
Let us now try to print Indexes with different parameters. To start, let us try to print the Index from a different local root, for instance the group with the path /test_image. The way to do it is to use the local_root argument. We will hence give it the value of the /test_image path.
Step18: The print_index method's local_root argument needs the name of the Group whose children Index must be printed. As explained in section II, you may use other identifiers than its Path for this. Let us try its Name (the last part of its path), which is test_image, or its Indexname, which is image
Step19: As you can see, the result is the same in the 3 cases.
Let us now try to print the dataset Index with a maximal data item depth of 2, using the max_depth argument
Step20: Of course, you can combine those two arguments
Step21: The print_index method is useful to get a glimpse of the content and organization of the whole dataset, or some part of it, and to quickly see the short indexnames or aliases that you can use to refer to data items.
To add aliases to data items or Groups, you can use the add_alias method.
The Index allows you to quickly see the internal structure of your dataset; however, it does not provide detailed information on the data items. We will now see how to retrieve that information with the SampleData class.
2- The Dataset content
The SampleData class provides a method to print an organized and detailed overview of the data items in the dataset, the print_dataset_content method. Let us see what the method prints when called with no arguments
Step22: As you can see, this method prints by increasing depth, detailed information on each Group and each data item of the dataset, with a maximum depth that can be specified with a max_depth argument (like the method print_index, that has a default value of 3). The printed output is structured by groups
Step23: This shorter print can be read easily, provides a complete and visual overview of the dataset organization, and indicates the memory size and type of each data item or Group in the dataset. The printed output distinguishes Group data items from Node data items. The latter regroup all types of arrays that may be stored in the HDF5 file.
Both short and long versions of the print_dataset_content output can be written into a text file, if a filename is provided as value for the to_file method argument
Step24: <div class="alert alert-info">
**Note**
The string representation of the *SampleData* class is composed of a first part, which is the output of the `print_index` method, and a second part, which is the output of the `print_dataset_content` method (short output).
</div>
Step25: Now you know how to get a detailed overview of the dataset content. However, with large datasets that may have a complex internal organization (many Groups, lots of data items and metadata...), the print_dataset_content return string can become very large. In this case, it becomes cumbersome to look for specific information on a Group or on a particular data item. For this reason, the SampleData class provides methods to only print information on one or several data items of the dataset. They are presented in the next subsections.
3- Get information on data items
To get information on a specific data item (including Groups), you may use the print_node_info method. This method has 2 arguments
Step26: You can observe that this method prints the same block of information as the one that appeared in the print_dataset_content method output, for the description of the test_image group. With this block, we can learn that this Group is a child of the root Group ('/'), and that it has two children, the data items named Field_index and test_image_field. We can also see its attribute names and values. Here they provide information on the nature of the Group, which is a 3D image group, and on the topology of this image (for instance, that it is a 9x9x9 voxel image, of size 0.2).
Let us now apply this method on a data item that is not a group, the test_array data item. The print_index function instructed us that this node has an alias name, that is test_alias. We will use it here to get information on this node, to illustrate the use of the only type of node indicator that has not been used throughout this notebook
Step27: Here, we can learn what the Node's parent is, what the Node Name is, see that it has no attributes, see that it is an array of shape (51,), that it is not stored with data compression (compression level of 0), and that it occupies a disk space of 64 Kb.
The print_node_info method is useful to get information on a specific target, and avoid dealing with the sometimes too large output returned by the print_dataset_content method.
4- Get information on Groups content
The previous subsection showed that the print_node_info method applied to Groups returns only information about the group name, metadata and children names. The SampleData class offers a method that allows printing this information with, in addition, the detailed content of each child of the target group
Step28: Obviously, this method is identical to the print_dataset_content method, but restricted to one Group. Like the first one, it has to_file, short and max_depth arguments. These arguments work just as for the print_dataset_content method, hence their use is not detailed here. However, you may see one difference here. In the output printed above, we see that the test_mesh group has a Geometry child which is a group, but whose content is not printed. The print_group_content method has indeed, by default, a non-recursive behavior. To get a recursive print of the group content, you must set the recursive argument to True
Step29: As you can see, the information on the children of the Geometry group has been printed. Note that the max_depth argument is considered by this method as an absolute depth, meaning that you have to specify a depth that is at least the depth of the target group to see some output printed for the group content. The default maximum depth for this method is set to a very high value of 1000. Hence, print_group_content prints group contents by default with a recursive behavior. Note also that print_group_content with the recursive option is equivalent to print_dataset_content, but prints the dataset content as if the target group was the root.
5- Get information on grids
One of the SampleData class main functionalities is the manipulation and storage of spatially organized data, which is handled by Grid groups in the data model. Because they are usually key data for mechanical sample datasets, the SampleData class provides a method to print Group information only for Grid groups, the print_grids_info method
Step30: This method also has the to_file and short arguments of the print_dataset_content method
Step31: 6- Get xdmf tree content
As explained in the first Notebook of this User Guide, these grid Groups and associated data are stored in a dual format by the SampleData class. This dual format is composed of the dataset HDF5 file, and an associated XDMF file containing metadata, describing Grid groups topology, data types and fields.
The XDMF file is handled in the SampleData class by the xdmf_tree attribute, which is an instance of the lxml.etree class of the lxml package
Step32: The XDMF file is synchronized with the in-memory xdmf_tree attribute when calling the sync method, or when deleting the SampleData instance. However, you may want to look at the content of the XDMF tree while you are interactively using your SampleData instance. In this case, you can use the print_xdmf method
Step33: As you can observe, you will get a print of the content of the XDMF file that would be written if you closed the file right now. You can observe that the XDMF file provides information on the grids that matches that given by the Groups and Nodes attributes printed above with the previously studied methods
Step34: As you can see, the default behavior of this method is to print a message indicating the Node disk size, but also to return a tuple containing the value of the disk size and its unit. If you want to print data in bytes, you may call this method with the convert argument set to False
Step35: If you want to use this method to get a numerical value within a script, but do not want the class to print anything, you can use the print_flag argument
Step36: The disk size of the whole HDF5 file can also be printed/returned, using the get_file_disk_size method, that has the same print_flag and convert arguments
Step37: 8- Get nodes/groups attributes (metadata)
Another central aspect of the SampleData class is the management of metadata, that can be attached to all Groups or Nodes of the dataset. Metadata comes in the form of HDF5 attributes, that are Name/Value pairs, and that we already encountered when exploring the outputs of methods like print_dataset_content, print_node_info...
Those methods print the Group/Node attributes together with other information. To only print the attributes of a given data item, you can use the print_node_attributes method
Step38: As you can see, this method prints a list of all data item attributes, with the format * Name
Step39: You can also get all attributes of a data item as a dictionary. In this case, you just need to specify the name of the data item from which you want attributes, and use the get_dic_from_attributes method
Step40: We have now seen how to explore all types of information that a SampleData dataset may contain, individually or all together, interactively, from a Python console. Let us now review how to explore the content of SampleData datasets with external software.
IV - Visualize dataset contents with Vitables
All the information that you can get with the methods presented in the previous section can also be accessed externally by opening the HDF5 dataset file with the Vitables software. This software is usually part of the Pytables package, which is a dependency of pymicro. You should be able to use it in a Python environment compatible with pymicro. If needed, you may refer to the Vitables website to find download and installation instructions for PyPi or conda
Step41: Please refer to the Vitables documentation, that can be downloaded here https
Step42: <div class="alert alert-info">
**Note**
**It is recommended to use a recent version of the Paraview software to visualize SampleData datasets (>= 5.0).**
When opening the XDMF file, Paraview may ask you to choose a specific file reader. It is recommended to choose the
**XDMF_reader**, and not the **Xdmf3ReaderT**, or **Xdmf3ReaderS**.
</div>
VI - Using command line tools
You can also examine the content of your HDF5 datasets with generic HDF5 command line tools, such as h5ls or h5dump
Step43: As you can see if you uncommented and executed this cell, h5dump prints a fully detailed description of your dataset
Step44: You can also use the command line tool of the Pytables software ptdump, that also takes as argument the HDF5 file, and has two command options, the verbose mode -v, and the detailed mode -d | Python Code:
from pymicro.core.samples import SampleData as SD
Explanation: 2 - Getting started with SampleData : Exploring dataset contents
This second User Guide tutorial will introduce you to:
create and open datasets with the SampleData class
the SampleData Naming System
how to get information on a dataset's content interactively
how to use the external software Vitables to visualize the content and organization of a dataset
how to use the Paraview software to visualize the spatially organized data stored in datasets
how to use generic HDF5 command line tools to print the content of your dataset
You will find a short summary of all methods reviewed in this tutorial at the end of this page.
<div class="alert alert-info">
**Note**
Throughout this notebook, it will be assumed that the reader is familiar with the overview of the SampleData file format and data model presented in the [previous notebook](./SampleData_Introduction.ipynb) of this User Guide.
</div>
<div class="alert alert-warning">
**Warning**
This Notebook reviews the methods to get information on SampleData HDF5 dataset content. Some of the methods detailed here produce very long outputs, which have been kept in the documentation version. Reading the content of these outputs completely is absolutely not necessary to learn what is detailed on this page; they are just provided here as examples. So do not be afraid, and feel free to scroll down quickly when you see large prints!
</div>
I - Create and Open datasets with the SampleData class
In this first section, we will see how to create SampleData datasets, or open pre-existing ones. These two operations are performed by instantiating a SampleData class object.
Before that, you will need to import the SampleData class. We will import it with the alias name SD, by executing:
Import SampleData and get help
End of explanation
data = SD(filename='my_first_dataset')
Explanation: Before starting to create our datasets, we will take a look at the SampleData class documentation, to discover the arguments of the class constructor. You can read it on the pymicro.core package API doc page, or print it interactively by executing:
```python
help(SD)
or, if you are working with a Jupyter notebook, by executing the magic command:
?SD
```
Do not hesitate to systematically use the help function or the "?" magic command to get information on methods when you encounter a new one. All SampleData methods are documented with explicative docstrings, that detail the method arguments and returns.
Dataset creation
The class docstring is divided in multiple rubrics, one of them giving the list of the class constructor arguments.
Let us review them one by one.
filename: basename of the HDF5/XDMF pair of file of the dataset
This is the first and only mandatory argument of the class constructor. If this string corresponds to an existing file, the SampleData class will open this file, and create a file instance to interact with this already existing dataset. If the filename does not correspond to an existing file, the class will create a new dataset, which is what we want to do here.
Let us create a SampleData dataset:
End of explanation
import os # load python module to interact with operating system
cwd = os.getcwd() # get current directory
file_list = os.listdir(cwd) # get content of current work directory
print(file_list,'\n')
# now print only files that start with our dataset basename
print('Our dataset files:')
for file in file_list:
if file.startswith('my_first_dataset'):
print(file)
Explanation: That is it. The class has created a new HDF5/XDMF pair of files, and associated the interface with this dataset to the variable data. No message has been returned by the code, so how can we know that the dataset has been created?
When the name of the file is not an absolute path, the default behavior of the class is to create the dataset in the current working directory. Let us print the content of this directory, then!
End of explanation
data.set_verbosity(True)
Explanation: The two files my_first_dataset.h5 and my_first_dataset.xdmf have indeed been created.
If you want interactive prints about the dataset creation, you can set the verbose argument to True. This will activate the verbose mode of the class. When it is active, the class instance prints a lot of information about what it is doing. This flag can be set by using the set_verbosity method:
End of explanation
del data
Explanation: Let us now close our dataset, and see if the class instance prints information about it:
End of explanation
data = SD(filename='my_first_dataset', verbose=True)
Explanation: <div class="alert alert-info">
**Note**
It is a good practice to always delete your `SampleData` instances once you are done working with a dataset, or if you want to re-open it. As the class instance keeps the files open as long as it exists, deleting it ensures that the files are properly closed. Otherwise, files may close at random times or stay open, and you may encounter undesired behavior of your datasets.
</div>
The class indeed prints some messages during the instance destruction. As you can see, the class instance writes data into the pair of files, and then closes the dataset instance and the files.
Dataset opening and verbose mode
Let us now try to create a new SD instance for the same dataset file "my_first_dataset", with the verbose mode on. As the dataset files (HDF5, XDMF) already exist, this new SampleData instance will open the dataset files and synchronize with them. When activated, the verbose mode makes SampleData class instances display messages about the actions performed by the class (creating or deleting data items, for instance)
End of explanation
del data
Explanation: You can see that the printed information states that the dataset file my_first_dataset.h5 has been opened, and not created. This second instantiation of the class has not created a new dataset, but instead, has opened the one that we have just closed. Indeed, in that case, we provided a filename that already existed.
Some information about the dataset content is also printed by the class. This information can be retrieved with specific methods that will be detailed in the next section of this Notebook. Let us focus for now on one part of it.
The printed info reveals that our dataset content is composed of only one object, a Group data object named /. This group is the Root Group of the dataset. Each dataset necessarily has a Root Group, automatically created along with the dataset. You can see that this Group already has a Child, named Index. This particular data object will be presented in the third section of this Notebook. You can also observe that the Root Group already has attributes (recall from the introduction Notebook that they are Name/Value pairs used to store metadata in datasets). Two of those attributes match arguments of the SampleData class constructor:
the description attribute
the sample_name attribute
The description and sample_name are not modified in the dataset when reading a dataset. These SD constructor arguments are only used when creating a dataset. They are string metadata whose role is to give a general name/title to the dataset, and a general description.
However, they can be set or changed after the dataset creation with the methods set_sample_name and set_description, used a little further in this Notebook.
Now we know how to open a dataset previously created with SampleData. We might want to open a new dataset with the name of an already existing one, but overwrite it. The SampleData constructor allows doing that, and we will see it in the next subsection. But first, we will close our dataset again:
End of explanation
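# Aside (a sketch, not executed in the tutorial flow): on an open SampleData instance,
# the sample name and description mentioned above can be set or changed at any time with
# the set_sample_name and set_description methods -- the string values below are just
# placeholders. Uncomment to test on an open dataset.
# data.set_sample_name('my_first_sample')
# data.set_description('Dataset created for the SampleData getting-started tutorial')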
data = SD(filename='my_first_dataset', verbose=True, overwrite_hdf5=True)
Explanation: Overwriting datasets
The overwrite_hdf5 argument of the class constructor, if it is set to True, will remove the filename dataset and create a new empty one, if this dataset already exists:
End of explanation
del data
Explanation: As you can see, the dataset files have been overwritten, as requested. We will now close our dataset again and continue to see the possibilities offered by the class constructor.
End of explanation
data2 = SD.copy_sample(src_sample_file='my_first_dataset', dst_sample_file='dataset_copy', get_object=True)
cwd = os.getcwd() # get current directory
file_list = os.listdir(cwd) # get content of current work directory
print(file_list,'\n')
# now print only files that start with our dataset basename
print('Our dataset files:')
for file in file_list:
if file.startswith('dataset_copy'):
print(file)
Explanation: Copying dataset
One last thing that may be interesting to do with already existing dataset files is to create a new dataset that is a copy of them, associated with a new class instance. This is useful, for instance, when you have to try new processing on a set of valuable data, without risking damage to the data.
To do this, you may use the copy_sample method of the SampleData class. Its main arguments are:
src_sample_file: basename of the dataset files to copy (source file)
dst_sample_file: basename of the dataset to create as a copy of the source (desctination file)
get_object: if False, the method will just create the new dataset files and close them. If True, the method will leave the files open and return a SampleData instance that you may use to interact with your new dataset.
Let us try to create a copy of our first dataset:
End of explanation
# set the autodelete argument to True
data2.autodelete = True
# Set the verbose mode on for copied dataset
data2.set_verbosity(True)
# Close copied dataset
del data2
Explanation: The copy_dataset HDF5 and XDMF files have indeed been created, and are a copy of the my_first_dataset HDF5 and XDMF files.
Note that copy_sample is a static method that can be called even without a SampleData instance. Note also that it has an overwrite argument that allows overwriting an already existing dst_sample_file. It also has, like the class constructor, an autodelete argument, which we will discover in the next subsection.
Automatically removing dataset files
On some occasions, we may want to remove our dataset files after using our SampleData class instance. This can be the case, for instance, if you are trying some new data processing, or using the class for visualization purposes, and are not interested in keeping your test data.
The class has an autodelete attribute for this purpose. If it is set to True, the class destructor will remove the dataset file pair in addition to deleting the class instance. The class constructor and the copy_sample method also have an autodelete argument, which, if True, will automatically set the class instance autodelete attribute to True.
To illustrate this feature, we will try to change the autodelete attribute of our copied dataset to True, and remove it.
End of explanation
file_list = os.listdir(cwd) # get content of current work directory
print(file_list,'\n')
# now print only files that start with our dataset basename
print('Our copied dataset files:')
for file in file_list:
if file.startswith('dataset_copy'):
print(file)
Explanation: The class destructor ends by printing a confirmation message of the dataset files removal in verbose mode, as you can see in the cell above.
Let us verify that it has been effectively deleted:
End of explanation
data = SD(filename='my_first_dataset', verbose=True, autodelete=True)
print(f'Is autodelete mode on ? {data.autodelete}')
del data
file_list = os.listdir(cwd) # get content of current work directory
print(file_list,'\n')
# now print only files that start with our dataset basename
print('Our dataset files:')
for file in file_list:
if file.startswith('my_first_dataset'):
print(file)
Explanation: As you can see, the dataset files have been removed. Now we can also open and remove our first created dataset using the class constructor autodelete option:
End of explanation
from config import PYMICRO_EXAMPLES_DATA_DIR # import file directory path
import os
dataset_file = os.path.join(PYMICRO_EXAMPLES_DATA_DIR, 'test_sampledata_ref') # test dataset file path
data = SD(filename=dataset_file)
Explanation: Now, you know how to create or open SampleData datasets. Before starting to explore their content in detail, a last feature of the SampleData class must be introduced: the naming system and conventions used to create or access data items in datasets.
<div class="alert alert-info">
**Note**
Using the **autodelete** option is useful when you are using the class for tries or tests, and do not want to keep the dataset files on your computer. It is also a proper way to remove a SampleData dataset, as it allows removing both files at once.
</div>
II - The SampleData Naming system
SampleData datasets are composed of a set of organized data items. When handling datasets, you will need to specify which item you want to interact with or create. The SampleData class provides 4 different ways to refer to data items. The first type of data item identifier is:
the Path of the data item in the HDF5 file.
Like a file within a filesystem has a path, HDF5 data items have a Path within the dataset. Each data item is the child of an HDF5 Group (analogous to a file contained in a directory), and each Group may also be the child of another Group (analogous to a directory contained in a directory). The origin directory is called the root group, and has the path '/'. The Path offers a completely non-ambiguous way to designate a data item within the dataset, as it is unique. A typical data item path looks like this: /Parent_Group1/Parent_Group2/ItemName. However, paths can become very long strings, and are usually not a convenient way to name data items. For that reason, you can also refer to them in SampleData methods using:
the Name of the data item.
It is the last element of its Path, the part that comes after the last / character. For a data item that has the path /Parent_Group1/Parent_Group2/ItemName, the Name is ItemName. It allows referring quickly to the data item without writing its whole Path.
However, note that two different data items may have the same Name (but different paths), and thus it may be necessary to use additional names to refer to them without ambiguity and without having to write their full path. In addition, it may be convenient to be able to use, in addition to its storage name, one or more additional and meaningful names to designate a data item. For these reasons, two additional identifiers can be used:
the Indexname of the data item
the Alias or aliases of the data item
Those two types of identifiers are strings that can be used as additional data item Names. They play completely similar roles. The Indexname is also used in the dataset Index (see below), which gathers the data item indexnames together with their paths within the dataset. All data items must have an Indexname, which can be identical to their Name. If additional names are given to a data item, they are stored as an Alias.
Many SampleData methods have a nodename or name argument. Every time you encounter it, you may use one of the 4 identifiers presented in this section to provide the name of the data item you want to create or interact with. Many examples will follow in the rest of this Notebook, and of this User Guide.
Let us now move on to discover the methods that allow exploring the dataset content.
III- Interactively get information on datasets content
The goal of this section is to review the various ways to get interactive information on your SampleData dataset (interactive in the sense that you can get it by executing SampleData class method calls in a Python interpreter console).
For this purpose, we will use a pre-existing dataset that already has some data stored, and look into its content. This dataset is a reference SampleData dataset used for the core package unit tests.
End of explanation
data.content_index
Explanation: 1- The Dataset Index
As explained in the previous section, all data items have a Path and an Indexname. The collection of Indexname/Path pairs forms the Index of the dataset. For each SampleData dataset, an Index Group is stored in the root Group, and the collection of those pairs is stored as attributes of this Index Group. Additionally, a class attribute content_index stores them as a dictionary in the class instance, and allows easy access to them. The dictionary and the Index Group attributes are automatically synchronized by the class.
Let us take a look at the dictionary content:
End of explanation
data.aliases
Explanation: You should see the dictionary keys, which are names of data items, and the associated values, which are HDF5 paths. You can also see the data item Names at the end of their Paths. The data item aliases are also stored in a dictionary, that is an attribute of the class, named aliases:
End of explanation
data.print_index()
Explanation: You can see that this dictionary contains keys only for data items that have additional names, and also that those keys are the data item indexnames.
The dataset index can be printed together with the aliases, with a prettier aspect, by calling the method print_index:
End of explanation
data.print_index(local_root="/test_image")
Explanation: This method prints the content of the dataset Index, with a given depth and from a specific root. The depth is the number of parents that a data item has. The root Group has thus a depth of 0, its children a depth of 1, the children of its children a depth of 2, and so on... The local root argument can be changed, to print only the Index for data items that are children of a specific group. When used without arguments, print_index uses a depth of 3 and the dataset root as default settings.
As you can see, our dataset already contains some data items. We can already identify at least 3 HDF5 Groups (test_group, test_image, test_mesh), as they have children, and a lot of other data items.
Let us now try to print Indexes with different parameters. To start, let us try to print the Index from a different local root, for instance the group with the path /test_image. The way to do it is to use the local_root argument. We will hence give it the value of the /test_image path.
End of explanation
data.print_index(local_root="test_image")
data.print_index(local_root="image")
Explanation: The print_index method's local_root argument needs the name of the Group whose children Index must be printed. As explained in section II, you may use other identifiers than its Path for this. Let us try its Name (the last part of its path), which is test_image, or its Indexname, which is image:
End of explanation
data.print_index(max_depth=2)
Explanation: As you can see, the result is the same in the 3 cases.
Let us now try to print the dataset Index with a maximal data item depth of 2, using the max_depth argument:
End of explanation
data.print_index(max_depth=2, local_root='mesh')
Explanation: Of course, you can combine those two arguments:
End of explanation
data.print_dataset_content()
Explanation: The print_index method is useful to get a glimpse of the content and organization of the whole dataset, or some part of it, and to quickly see the short indexnames or aliases that you can use to refer to data items.
To add aliases to data items or Groups, you can use the add_alias method.
The Index allows you to quickly see the internal structure of your dataset; however, it does not provide detailed information on the data items. We will now see how to retrieve that information with the SampleData class.
2- The Dataset content
The SampleData class provides a method to print an organized and detailed overview of the data items in the dataset, the print_dataset_content method. Let us see what the method prints when called with no arguments:
End of explanation
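# Aside: a sketch of the add_alias method mentioned in the explanation above. The
# argument names below are assumptions -- check help(data.add_alias) for the exact
# signature before using it. Uncomment to test.
# data.add_alias(aliasname='my_new_alias', nodename='test_array')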
data.print_dataset_content(short=True)
Explanation: As you can see, this method prints, by increasing depth, detailed information on each Group and each data item of the dataset, with a maximum depth that can be specified with a max_depth argument (like the print_index method, with a default value of 3). The printed output is structured by groups: each Group that has children is described by a first set of information, followed by a Group CONTENT string that describes all of its children.
For each data item or Group, the method prints its name, path, type, attributes, content, compression settings and memory size if it is an array, and children names if it is a Group. Hence, when calling this method, you can see the content and organization of the dataset, all the metadata attached to all data items, and the disk size occupied by each data item. As you progress through this tutorial, you will learn the meaning of this information for all types of SampleData data items.
The print_dataset_content method has a short boolean argument, which allows printing a condensed string representation of the dataset:
End of explanation
data.print_dataset_content(short=True, to_file='dataset_information.txt')
# Let us open the content of the created file, to see if the dataset information has been written in it:
%cat dataset_information.txt
Explanation: This shorter print can be read easily, provides a complete and visual overview of the dataset organization, and indicates the memory size and type of each data item or Group in the dataset. The printed output distinguishes Group data items from Node data items. The latter regroup all types of arrays that may be stored in the HDF5 file.
Both short and long versions of the print_dataset_content output can be written into a text file, if a filename is provided as value for the to_file method argument:
End of explanation
# SampleData string representation :
print(data)
Explanation: <div class="alert alert-info">
**Note**
The string representation of the *SampleData* class is composed of a first part, which is the output of the `print_index` method, and a second part, which is the output of the `print_dataset_content` method (short output).
</div>
End of explanation
# Method called with data item indexname, and short output
data.print_node_info(nodename='image', short=True)
# Method called with data item Path and long output
data.print_node_info(nodename='/test_image', short=False)
Explanation: Now you know how to get a detailed overview of the dataset content. However, with large datasets that may have a complex internal organization (many Groups, lots of data items and metadata...), the print_dataset_content return string can become very large. In this case, it becomes cumbersome to look for specific information on a Group or on a particular data item. For this reason, the SampleData class provides methods to only print information on one or several data items of the dataset. They are presented in the next subsections.
3- Get information on data items
To get information on a specific data item (including Groups), you may use the print_node_info method. This method has 2 arguments: the name argument and the short argument. As explained in section II, the name argument can be one of the 4 possible identifiers that the target node can have (name, path, indexname or alias). The short argument has the same effect on the printed output as for the print_dataset_content method; its default value is False, i.e. the detailed output. Let us look at some examples.
First, we will for instance want to have information on the Image Group that is stored in the dataset. The print_index and short print_dataset_content allowed us to see that this group has the name test_image, the indexname image, and the path /test_image. We will call the method with two of those identificators, and with the two possible values of the short argument.
End of explanation
data.print_node_info('test_alias')
Explanation: You can observe that this method prints the same block of information as the one that appeared in the print_dataset_content method output, for the description of the test_image group. With this block, we can learn that this Group is a child of the root Group ('/'), and that it has two children, the data items named Field_index and test_image_field. We can also see its attribute names and values. Here they provide information on the nature of the Group, which is a 3D image group, and on the topology of this image (for instance, that it is a 9x9x9 voxel image, of size 0.2).
Let us now apply this method on a data item that is not a group, the test_array data item. The print_index function instructed us that this node has an alias name, that is test_alias. We will use it here to get information on this node, to illustrate the use of the only type of node indicator that has not been used throughout this notebook:
End of explanation
data.print_group_content(groupname='test_mesh')
Explanation: Here, we can learn what the Node's parent is, what the Node Name is, see that it has no attributes, see that it is an array of shape (51,), that it is not stored with data compression (compression level of 0), and that it occupies a disk space of 64 Kb.
The print_node_info method is useful to get information on a specific target, and avoid dealing with the sometimes too large output returned by the print_dataset_content method.
4- Get information on Groups content
The previous subsection showed that the print_node_info method applied to Groups returns only information about the group name, metadata and children names. The SampleData class offers a method that allows printing this information with, in addition, the detailed content of each child of the target group: the print_group_content method.
Let us try it on the Mesh group of our test dataset:
End of explanation
data.print_group_content('test_mesh', recursive=True)
Explanation: Obviously, this method is identical to the print_dataset_content method, but restricted to one Group. Like the first one, it has to_file, short and max_depth arguments. These arguments work just as for the print_dataset_content method, hence their use is not detailed here. However, you may see one difference here. In the output printed above, we see that the test_mesh group has a Geometry child which is a group, but whose content is not printed. The print_group_content method has indeed, by default, a non-recursive behavior. To get a recursive print of the group content, you must set the recursive argument to True:
End of explanation
data.print_grids_info()
Explanation: As you can see, the information on the children of the Geometry group has been printed. Note that the max_depth argument is considered by this method as an absolute depth, meaning that you have to specify a depth that is at least the depth of the target group to see some output printed for the group content. The default maximum depth for this method is set to a very high value of 1000. Hence, print_group_content prints group contents by default with a recursive behavior. Note also that print_group_content with the recursive option is equivalent to print_dataset_content, but prints the dataset content as if the target group was the root.
5- Get information on grids
One of the SampleData class main functionalities is the manipulation and storage of spatially organized data, which is handled by Grid groups in the data model. Because they are usually key data for mechanical sample datasets, the SampleData class provides a method to print Group information only for Grid groups, the print_grids_info method:
End of explanation
data.print_grids_info(short=True, to_file='dataset_information.txt')
%cat dataset_information.txt
Explanation: This method also has the to_file and short arguments of the print_dataset_content method:
End of explanation
data.xdmf_tree
Explanation: 6- Get xdmf tree content
As explained in the first Notebook of this User Guide, these grid Groups and associated data are stored in a dual format by the SampleData class. This dual format is composed of the dataset HDF5 file, and an associated XDMF file containing metadata, describing Grid groups topology, data types and fields.
The XDMF file is handled in the SampleData class by the xdmf_tree attribute, which is an instance of the lxml.etree class of the lxml package:
End of explanation
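# Aside: since xdmf_tree is an lxml object, it can also be serialized directly with
# lxml -- a sketch; the built-in print_xdmf method used just below is the recommended way
from lxml import etree
print(etree.tostring(data.xdmf_tree, pretty_print=True).decode())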
data.print_xdmf()
Explanation: The XDMF file is synchronized with the in-memory xdmf_tree attribute when calling the sync method, or when deleting the SampleData instance. However, you may want to look at the content of the XDMF tree while you are interactively using your SampleData instance. In this case, you can use the print_xdmf method:
End of explanation
data.get_node_disk_size(nodename='test_array')
Explanation: As you can observe, you will get a print of the content of the XDMF file that would be written if you closed the file right now. You can observe that the XDMF file provides information on the grids that matches that given by the Groups and Nodes attributes printed above with the previously studied methods: the test image is a regular grid of 10x10x10 nodes, i.e. a 9x9x9 voxel grid. Only one field is defined on test_image, test_image_field, whereas two are defined on test_mesh.
This XDMF file can be opened directly in Paraview, if both files are closed. If any syntax or formatting issue is encountered when Paraview reads the XDMF file, it will return an error message and the data visualization will not be rendered. The print_xdmf method allows you to verify your XDMF data and syntax, to make sure that the data formatting is correct.
7- Get memory size of file and data items
SampleData is designed to create large datasets, with data items that can represent tens of Gb of data or more. Being able to easily see and identify which data items use the most disk space is a crucial aspect of data management. Until now, with the methods we have reviewed, we have only been able to print the Node disk sizes together with a lot of other information. In order to speed up this process, the SampleData class has a method to directly query and print only the memory size of a Node, the get_node_disk_size method:
End of explanation
data.get_node_disk_size(nodename='test_array', convert=False)
Explanation: As you can see, the default behavior of this method is to print a message indicating the Node disk size, but also to return a tuple containing the value of the disk size and its unit. If you want to print data in bytes, you may call this method with the convert argument set to False:
End of explanation
size, unit = data.get_node_disk_size(nodename='test_array', print_flag=False)
print(f'Printed by script: node size is {size} {unit}')
size, unit = data.get_node_disk_size(nodename='test_array', print_flag=False, convert=False)
print(f'Printed by script: node size is {size} {unit}')
Explanation: If you want to use this method to get a numerical value within a script, but do not want the class to print anything, you can use the print_flag argument:
End of explanation
data.get_file_disk_size()
size, unit = data.get_file_disk_size(convert=False, print_flag=False)
print(f'\nPrinted by script: file size is {size} {unit}')
Explanation: The disk size of the whole HDF5 file can also be printed/returned, using the get_file_disk_size method, that has the same print_flag and convert arguments:
End of explanation
data.print_node_attributes(nodename='test_mesh')
Explanation: 8- Get nodes/groups attributes (metadata)
Another central aspect of the SampleData class is the management of metadata, that can be attached to all Groups or Nodes of the dataset. Metadata comes in the form of HDF5 attributes, that are Name/Value pairs, and that we already encountered when exploring the outputs of methods like print_dataset_content, print_node_info...
Those methods print the Group/Node attributes together with other information. To only print the attributes of a given data item, you can use the print_node_attributes method:
End of explanation
Nnodes = data.get_attribute(attrname='number_of_nodes', nodename='test_mesh')
print(f'The mesh test_mesh has {Nnodes} nodes')
Explanation: As you can see, this method prints a list of all data item attributes, with the format * Name : Value \n.
It allows you to quickly see what attributes are stored together with a given data item, and their values.
If you want to get the value of a specific attribute, you can use the get_attribute method. It takes two arguments, the name of the attribute you want to retrieve, and the name of the data item where it is stored:
End of explanation
mesh_attrs = data.get_dic_from_attributes(nodename='test_mesh')
for name, value in mesh_attrs.items():
print(f' Attribute {name} is {value}')
Explanation: You can also get all attributes of a data item as a dictionary. In this case, you just need to specify the name of the data item from which you want attributes, and use the get_dic_from_attributes method:
End of explanation
# uncomment to test
# data.pause_for_visualization(Vitables=True, Vitables_path='Path_to_Vitables_executable')
Explanation: We have now seen how to explore all types of information that a SampleData dataset may contain, individually or all together, interactively, from a Python console. Let us now review how to explore the content of SampleData datasets with external software.
IV - Visualize dataset contents with Vitables
All the information that you can get with the methods presented in the previous section can also be accessed externally by opening the HDF5 dataset file with the Vitables software. This software is usually part of the Pytables package, which is a dependency of pymicro. You should be able to use it in a Python environment compatible with pymicro. If needed, you may refer to the Vitables website to find download and installation instructions for PyPi or conda: https://vitables.org/.
Vitables provides a graphical interface that allows you to browse through all your dataset data items, and access or modify their stored data and metadata values. You may either open Vitables and then open your HDF5 dataset file from the Vitables interface, or you can directly open Vitables to read a specific file from the command line, by running:
vitables my_dataset_path.h5.
This command will work only if your dataset file is closed (if the SampleData instance still exists in your Python console, this will not work, you first need to delete your instance to close the files).
However, the SampleData class has a specific method allowing you to open your dataset with Vitables interactively, directly from your Python console: the pause_for_visualization method. As explained just above, this method closes the XDMF and HDF5 files, and runs in your shell the command vitables my_dataset_path.h5. Then, it freezes the interactive Python console and keeps the dataset files closed for as long as the Vitables software is running. When Vitables is shut down, the SampleData class will reopen the HDF5 and XDMF files, synchronize with them and resume the interactive Python console.
<div class="alert alert-warning">
**Warning**
When calling the `pause_for_visualization` method from a python console (ipython, Jupyter...), you may face environment issue leading to your shell not finding the proper *Vitables* software executable. To ensure that the right *Vitables* is found, the method can take an optional argument `Vitables_path`, which must be the path of the *Vitables* executable. If this argument is passed, the method will run, after closing the HDF5 and XDMF files, the command
`Vitables_path my_dataset_path.hdf5`
</div>
<div class="alert alert-info">
**Note**
The method is not called here to allow automatic execution of the Notebook when building the documentation on a platform that does not have Vitables available.
</div>
End of explanation
# Like for Vitables --> uncomment to test
# data.pause_for_visualization(Paraview=True, Paraview_path='Path_to_Paraview_executable')
Explanation: Please refer to the Vitables documentation, which can be downloaded here https://sourceforge.net/projects/vitables/files/ViTables-3.0.0/, to learn how to browse through your HDF5 file. The Vitables software is very intuitive; you will see that it provides a useful and convenient tool to explore your SampleData datasets outside of your interactive Python consoles.
V - Visualize datasets grids and fields with Paraview
As for Vitables, the pause_for_visualization method allows you to open your dataset with Paraview, interactively from a Python console.
Paraview will provide you with a very powerful visualization tool to render the spatially organized data (grids) stored in your datasets. Unlike Vitables, Paraview can read the XDMF format. Hence, if you want to open your dataset with Paraview outside of a Python console, make sure that the HDF5 and XDMF files are not opened by another program, and run in your shell the command:
paraview my_dataset_path.xdmf.
As you may have guessed, the pause_for_visualization method, when called interactively with the Paraview argument set to True, will close both files and run this command, just like for the Vitables option. The dataset files will remain closed and the Python console frozen for as long as you keep the Paraview software running. When you shut down Paraview, the SampleData class will reopen the HDF5 and XDMF files, synchronize with them and resume the interactive Python console.
<div class="alert alert-warning">
**Warning**
When calling the `pause_for_visualization` method from a python console (ipython, Jupyter...), you may face environment issues leading to your shell not finding the proper *Paraview* software executable. To ensure that the right *Paraview* is found, the method can take an optional argument `Paraview_path`, which must be the path of the *Paraview* executable. If this argument is passed, the method will run, after closing the HDF5 and XDMF files, the command
`Paraview_path my_dataset_path.xdmf`
</div>
End of explanation
del data
# raw output of H5ls --> prints the childrens of the file root group
!h5ls ../data/test_sampledata_ref.h5
# recursive output of h5ls (-r option) --> prints all data items
!h5ls -r ../data/test_sampledata_ref.h5
# recursive (-r) and detailed (-d) output of h5ls --> also print the content of the data arrays
# !h5ls -rd ../data/test_sampledata_ref.h5
# output of h5dump:
# !h5dump ../data/test_sampledata_ref.h5
Explanation: <div class="alert alert-info">
**Note**
**It is recommended to use a recent version of the Paraview software to visualize SampleData datasets (>= 5.0).**
When opening the XDMF file, Paraview may ask you to choose a specific file reader. It is recommended to choose the
**XDMF_reader**, and not the **Xdmf3ReaderT**, or **Xdmf3ReaderS**.
</div>
VI - Using command line tools
You can also examine the content of your HDF5 datasets with generic HDF5 command line tools, such as h5ls or h5dump:
<div class="alert alert-warning">
**Warning**
In the following, executable programs that come with the HDF5 library and the Pytables package are used. If you are executing this notebook with Jupyter, you may not have those executables in your path if your environment is not suitably set. A workaround consists in finding the absolute path of the executable, and replacing the executable name in the following cells by its full path. For instance, replace
`ptdump file.h5`
with
`/full/path/to/ptdump file.h5`
To find this full path, you can run in your shell the command `which ptdump`. Of course, the same applies for `h5ls` and `h5dump`.
</div>
<div class="alert alert-info">
**Note**
Most code lines below are commented as they produce very large outputs that would otherwise pollute the documentation if they were included in the automatic build process. Uncomment them to test them if you are using these notebooks interactively!
</div>
For that, you must first close your dataset. If you don't, these tools will not be able to open the HDF5 file, as it is opened by the SampleData class in the Python interpreter.
End of explanation
# !h5dump ../data/test_sampledata_ref.h5 > test_dump.txt
# !cat test_dump.txt
Explanation: As you can see if you uncommented and executed this cell, h5dump prints a fully detailed description of your dataset: organization, data types, item names and paths, and item content (values stored in arrays). As it produces a very large output, it may be convenient to write its output to a file:
End of explanation
# uncomment to test !
# !ptdump ../data/test_sampledata_ref.h5
# uncomment to test!
# !ptdump -v ../data/test_sampledata_ref.h5
# uncomment to test !
# !ptdump -d ../data/test_sampledata_ref.h5
Explanation: You can also use the command line tool of the Pytables software ptdump, that also takes as argument the HDF5 file, and has two command options, the verbose mode -v, and the detailed mode -d:
End of explanation |
11,897 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction to optimization
The basic components
The objective function (also called the 'cost' function)
Step1: The "optimizer"
Step2: Additional components
"Box" constraints
Step3: The gradient and/or hessian
Step4: The penalty functions
$\psi(x) = f(x) + k*p(x)$
Step5: Optimizer classifications
Constrained versus unconstrained (and importantly LP and QP)
Step6: Notice how much nicer it is to see the optimizer "trajectory". Now, instead of a single number, we have the path the optimizer took. scipy.optimize has a version of this, with options={'retall'
Step7: Gradient descent and steepest descent
Genetic and stochastic
Not covered
Step8: Not Covered
Step9: Parameter estimation
Step10: Standard diagnostic tools
Eyeball the plotted solution against the objective
Run several times and take the best result
Log of intermediate results, per iteration
Rare | Python Code:
import numpy as np
objective = np.poly1d([1.3, 4.0, 0.6])
print objective
Explanation: Introduction to optimization
The basic components
The objective function (also called the 'cost' function)
End of explanation
import scipy.optimize as opt
x_ = opt.fmin(objective, [3])
print "solved: x={}".format(x_)
%matplotlib inline
x = np.linspace(-4, 1, 101)
import matplotlib.pylab as mpl
mpl.plot(x, objective(x))
mpl.plot(x_, objective(x_), 'ro')
Explanation: The "optimizer"
End of explanation
import scipy.special as ss
import scipy.optimize as opt
import numpy as np
import matplotlib.pylab as mpl
x = np.linspace(2, 7, 200)
# 1st order Bessel
j1x = ss.j1(x)
mpl.plot(x, j1x)
# use scipy.optimize's more modern "results object" interface
result = opt.minimize_scalar(ss.j1, method="bounded", bounds=[2, 4])
j1_min = ss.j1(result.x)
mpl.plot(result.x, j1_min,'ro')
Explanation: Additional components
"Box" constraints
End of explanation
import mystic.models as models
print(models.rosen.__doc__)
!mystic_model_plotter.py mystic.models.rosen -f -d -x 1 -b "-3:3:.1, -1:5:.1, 1"
import mystic
mystic.model_plotter(mystic.models.rosen, fill=True, depth=True, scale=1, bounds="-3:3:.1, -1:5:.1, 1")
import scipy.optimize as opt
import numpy as np
# initial guess
x0 = [1.3, 1.6, -0.5, -1.8, 0.8]
result = opt.minimize(opt.rosen, x0)
print result.x
# number of function evaluations
print result.nfev
# again, but this time provide the derivative
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print result.x
# number of function evaluations and derivative evaluations
print result.nfev, result.njev
print ''
# however, note for a different x0...
for i in range(5):
x0 = np.random.randint(-20,20,5)
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der)
print "{} @ {} evals".format(result.x, result.nfev)
Explanation: The gradient and/or hessian
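As a brief sketch (not in the original notebook), the Hessian can be supplied as well when using a method that accepts it, such as Newton-CG:
import scipy.optimize as opt
x0 = [1.3, 1.6, -0.5, -1.8, 0.8]
result = opt.minimize(opt.rosen, x0, jac=opt.rosen_der, hess=opt.rosen_hess,
                      method='Newton-CG')
# number of function, gradient, and hessian evaluations
print(result.x)
print(result.nfev, result.njev, result.nhev)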
End of explanation
# http://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#tutorial-sqlsp
'''
Maximize: f(x) = 2*x0*x1 + 2*x0 - x0**2 - 2*x1**2
Subject to: x0**3 - x1 == 0
x1 >= 1
'''
import numpy as np
def objective(x, sign=1.0):
return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)
def derivative(x, sign=1.0):
dfdx0 = sign*(-2*x[0] + 2*x[1] + 2)
dfdx1 = sign*(2*x[0] - 4*x[1])
return np.array([ dfdx0, dfdx1 ])
# unconstrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,),
jac=derivative, method='SLSQP', options={'disp': True})
print("unconstrained: {}".format(result.x))
cons = ({'type': 'eq',
'fun' : lambda x: np.array([x[0]**3 - x[1]]),
'jac' : lambda x: np.array([3.0*(x[0]**2.0), -1.0])},
{'type': 'ineq',
'fun' : lambda x: np.array([x[1] - 1]),
'jac' : lambda x: np.array([0.0, 1.0])})
# constrained
result = opt.minimize(objective, [-1.0,1.0], args=(-1.0,), jac=derivative,
constraints=cons, method='SLSQP', options={'disp': True})
print("constrained: {}".format(result.x))
Explanation: The penalty functions
$\psi(x) = f(x) + k*p(x)$
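As a rough illustration (not in the original notebook), the constrained problem above can be approximated by penalizing constraint violations and handing $\psi$ to an unconstrained solver; the weight k below is an arbitrary choice:
import scipy.optimize as opt
def f(x, sign=1.0):
    return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)
def p(x):
    # grows when x0**3 - x1 == 0 or x1 >= 1 is violated
    return (x[0]**3 - x[1])**2 + min(0.0, x[1] - 1.0)**2
k = 100.0
psi = lambda x: f(x, sign=-1.0) + k*p(x)
result = opt.minimize(psi, [-1.0, 1.0], method='Nelder-Mead')
print("penalized solution: {}".format(result.x))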
End of explanation
# from scipy.optimize.minimize documentation
'''
**Unconstrained minimization**
Method *Nelder-Mead* uses the Simplex algorithm [1]_, [2]_. This
algorithm has been successful in many applications but other algorithms
using the first and/or second derivatives information might be preferred
for their better performances and robustness in general.
Method *Powell* is a modification of Powell's method [3]_, [4]_ which
is a conjugate direction method. It performs sequential one-dimensional
minimizations along each vector of the directions set (`direc` field in
`options` and `info`), which is updated at each iteration of the main
minimization loop. The function need not be differentiable, and no
derivatives are taken.
Method *CG* uses a nonlinear conjugate gradient algorithm by Polak and
Ribiere, a variant of the Fletcher-Reeves method described in [5]_ pp.
120-122. Only the first derivatives are used.
Method *BFGS* uses the quasi-Newton method of Broyden, Fletcher,
Goldfarb, and Shanno (BFGS) [5]_ pp. 136. It uses the first derivatives
only. BFGS has proven good performance even for non-smooth
optimizations. This method also returns an approximation of the Hessian
inverse, stored as `hess_inv` in the OptimizeResult object.
Method *Newton-CG* uses a Newton-CG algorithm [5]_ pp. 168 (also known
as the truncated Newton method). It uses a CG method to the compute the
search direction. See also *TNC* method for a box-constrained
minimization with a similar algorithm.
Method *Anneal* uses simulated annealing, which is a probabilistic
metaheuristic algorithm for global optimization. It uses no derivative
information from the function being optimized.
Method *dogleg* uses the dog-leg trust-region algorithm [5]_
for unconstrained minimization. This algorithm requires the gradient
and Hessian; furthermore the Hessian is required to be positive definite.
Method *trust-ncg* uses the Newton conjugate gradient trust-region
algorithm [5]_ for unconstrained minimization. This algorithm requires
the gradient and either the Hessian or a function that computes the
product of the Hessian with a given vector.
**Constrained minimization**
Method *L-BFGS-B* uses the L-BFGS-B algorithm [6]_, [7]_ for bound
constrained minimization.
Method *TNC* uses a truncated Newton algorithm [5]_, [8]_ to minimize a
function with variables subject to bounds. This algorithm uses
gradient information; it is also called Newton Conjugate-Gradient. It
differs from the *Newton-CG* method described above as it wraps a C
implementation and allows each variable to be given upper and lower
bounds.
Method *COBYLA* uses the Constrained Optimization BY Linear
Approximation (COBYLA) method [9]_, [10]_, [11]_. The algorithm is
based on linear approximations to the objective function and each
constraint. The method wraps a FORTRAN implementation of the algorithm.
Method *SLSQP* uses Sequential Least SQuares Programming to minimize a
function of several variables with any combination of bounds, equality
and inequality constraints. The method wraps the SLSQP Optimization
subroutine originally implemented by Dieter Kraft [12]_. Note that the
wrapper handles infinite values in bounds by converting them into large
floating values.
'''
import scipy.optimize as opt
# constrained: linear (i.e. A*x + b)
print opt.cobyla.fmin_cobyla
print opt.linprog
# constrained: quadratic programming (i.e. up to x**2)
print opt.fmin_slsqp
# http://cvxopt.org/examples/tutorial/lp.html
'''
minimize: f = 2*x0 + x1
subject to:
-x0 + x1 <= 1
x0 + x1 >= 2
x1 >= 0
x0 - 2*x1 <= 4
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
A = cvx.matrix([ [-1.0, -1.0, 0.0, 1.0], [1.0, -1.0, -1.0, -2.0] ])
b = cvx.matrix([ 1.0, -2.0, 0.0, 4.0 ])
cost = cvx.matrix([ 2.0, 1.0 ])
sol = cvx_solvers.lp(cost, A, b)
print(sol['x'])
# http://cvxopt.org/examples/tutorial/qp.html
'''
minimize: f = 2*x1**2 + x2**2 + x1*x2 + x1 + x2
subject to:
x1 >= 0
x2 >= 0
x1 + x2 == 1
'''
import cvxopt as cvx
from cvxopt import solvers as cvx_solvers
Q = 2*cvx.matrix([ [2, .5], [.5, 1] ])
p = cvx.matrix([1.0, 1.0])
G = cvx.matrix([[-1.0,0.0],[0.0,-1.0]])
h = cvx.matrix([0.0,0.0])
A = cvx.matrix([1.0, 1.0], (1,2))
b = cvx.matrix(1.0)
sol = cvx_solvers.qp(Q, p, G, h, A, b)
print(sol['x'])
Explanation: Optimizer classifications
Constrained versus unconstrained (and importantly LP and QP)
End of explanation
import scipy.optimize as opt
# probabilistic solvers that use random hopping/mutations
print opt.differential_evolution
print opt.basinhopping
print opt.anneal
import scipy.optimize as opt
# bounds instead of an initial guess
bounds = [(-10., 10)]*5
for i in range(10):
result = opt.differential_evolution(opt.rosen, bounds)
print result.x,
# number of function evaluations
print '@ {} evals'.format(result.nfev)
Explanation: Notice how much nicer it is to see the optimizer "trajectory". Now, instead of a single number, we have the path the optimizer took. scipy.optimize has a version of this, with options={'retall':True}, which returns the solver trajectory.
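For instance, a small sketch (not in the original notebook) with the classic fmin interface, which exposes the trajectory through retall:
import scipy.optimize as opt
xopt, allvecs = opt.fmin(opt.rosen, [1.3, 1.6, -0.5, -1.8, 0.8], retall=True)
# one entry per iteration; the last one is the solution
print(len(allvecs))
print(allvecs[-1])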
EXERCISE: Solve the constrained programming problem by any of the means above.
Minimize: f = -1x[0] + 4x[1]
Subject to: -3x[0] + 1x[1] <= 6
1x[0] + 2x[1] <= 4
x[1] >= -3
where: -inf <= x[0] <= inf
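One possible route (a sketch, not the notebook's reference solution) uses scipy.optimize.linprog, pushing the x[1] >= -3 constraint into the bounds:
import scipy.optimize as opt
c = [-1, 4]                   # minimize -1*x[0] + 4*x[1]
A_ub = [[-3, 1], [1, 2]]      # -3*x[0] + x[1] <= 6 ; x[0] + 2*x[1] <= 4
b_ub = [6, 4]
res = opt.linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None), (-3, None)])
print(res.x, res.fun)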
Local versus global
End of explanation
import scipy.optimize as opt
import scipy.stats as stats
import numpy as np
# Define the function to fit.
def function(x, a, b, f, phi):
result = a * np.exp(-b * np.sin(f * x + phi))
return result
# Create a noisy data set around the actual parameters
true_params = [3, 2, 1, np.pi/4]
print "target parameters: {}".format(true_params)
x = np.linspace(0, 2*np.pi, 25)
exact = function(x, *true_params)
noisy = exact + 0.3*stats.norm.rvs(size=len(x))
# Use curve_fit to estimate the function parameters from the noisy data.
initial_guess = [1,1,1,1]
estimated_params, err_est = opt.curve_fit(function, x, noisy, p0=initial_guess)
print "solved parameters: {}".format(estimated_params)
# err_est is an estimate of the covariance matrix of the estimates
print "covariance: {}".format(err_est.diagonal())
import matplotlib.pylab as mpl
mpl.plot(x, noisy, 'ro')
mpl.plot(x, function(x, *estimated_params))
Explanation: Gradient descent and steepest descent
Genetic and stochastic
Not covered: other exotic types
Other important special cases:
Least-squares fitting
End of explanation
import numpy as np
import scipy.optimize as opt
def system(x,a,b,c):
x0, x1, x2 = x
eqs= [
3 * x0 - np.cos(x1*x2) + a, # == 0
x0**2 - 81*(x1+0.1)**2 + np.sin(x2) + b, # == 0
np.exp(-x0*x1) + 20*x2 + c # == 0
]
return eqs
# coefficients
a = -0.5
b = 1.06
c = (10 * np.pi - 3.0) / 3
# initial guess
x0 = [0.1, 0.1, -0.1]
# Solve the system of non-linear equations.
result = opt.root(system, x0, args=(a, b, c))
print "root:", result.x
print "solution:", result.fun
Explanation: Not Covered: integer programming
Typical uses
Function minimization
Data fitting
Root finding
End of explanation
import numpy as np
import scipy.stats as stats
# Create clean data.
x = np.linspace(0, 4.0, 100)
y = 1.5 * np.exp(-0.2 * x) + 0.3
# Add a bit of noise.
noise = 0.1 * stats.norm.rvs(size=100)
noisy_y = y + noise
# Fit noisy data with a linear model.
linear_coef = np.polyfit(x, noisy_y, 1)
linear_poly = np.poly1d(linear_coef)
linear_y = linear_poly(x)
# Fit noisy data with a quadratic model.
quad_coef = np.polyfit(x, noisy_y, 2)
quad_poly = np.poly1d(quad_coef)
quad_y = quad_poly(x)
import matplotlib.pylab as mpl
mpl.plot(x, noisy_y, 'ro')
mpl.plot(x, linear_y)
mpl.plot(x, quad_y)
#mpl.plot(x, y)
Explanation: Parameter estimation
End of explanation
import mystic.models as models
print models.zimmermann.__doc__
Explanation: Standard diagnostic tools
Eyeball the plotted solution against the objective
Run several times and take the best result
Log of intermediate results, per iteration
Rare: look at the covariance matrix
Issue: how can you really be sure you have the results you were looking for?
EXERCISE: Use any of the solvers we've seen thus far to find the minimum of the zimmermann function (i.e. use mystic.models.zimmermann as the objective). Use the bounds suggested below, if your choice of solver allows it.
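One possible approach (a sketch, not the notebook's reference solution) is differential_evolution; the bounds below are placeholders, so substitute the ones suggested by the docstring printed above:
import scipy.optimize as opt
import mystic.models as models
bounds = [(0., 10.), (0., 10.)]  # example bounds only
result = opt.differential_evolution(models.zimmermann, bounds)
print(result.x, result.fun)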
End of explanation |
11,898 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Two coupled equilibria
Step1: First, let's define our substances. ChemPy can parse chemical formulae into chempy.Substance instances with information on charge, composition, molar mass (and pretty-printing)
Step2: Let's define some initial concentrations and governing equilibria
Step3: We can solve the non-linear system of equations by calling the root method (the underlying representation uses pyneqsys
Step4: This system is quite easy to solve for, if we have convergence problems we can try to solve a transformed system. As an example we will use NumSysLog
Step5: In this case they give essentially the same answer
Step6: We can create symbolic representations of these systems of equations | Python Code:
from operator import mul
from functools import reduce
from itertools import product
import chempy
from chempy.chemistry import Species, Equilibrium
from chempy.equilibria import EqSystem, NumSysLog, NumSysLin
import numpy as np
import sympy as sp
sp.init_printing()
import matplotlib.pyplot as plt
%matplotlib inline
sp.__version__, chempy.__version__
Explanation: Two coupled equilibria: protolysis of ammonia in water
In this notebook we will look at how ChemPy can be used to formulate a system of (non-linear) equations from conservation laws and equilibrium equations. We will look at ammonia since it is a fairly well-known substance.
End of explanation
substance_names = ['H+', 'OH-', 'NH4+', 'NH3', 'H2O']
subst = {n: Species.from_formula(n) for n in substance_names}
assert [subst[n].charge for n in substance_names] == [1, -1, 1, 0, 0], "Charges of substances"
print(u'Composition of %s: %s' % (subst['NH3'].unicode_name, subst['NH3'].composition))
Explanation: First, let's define our substances. ChemPy can parse chemical formulae into chempy.Substance instances with information on charge, composition, molar mass (and pretty-printing):
End of explanation
init_conc = {'H+': 1e-7, 'OH-': 1e-7, 'NH4+': 1e-7, 'NH3': 1.0, 'H2O': 55.5}
x0 = [init_conc[k] for k in substance_names]
H2O_c = init_conc['H2O']
w_autop = Equilibrium({'H2O': 1}, {'H+': 1, 'OH-': 1}, 10**-14/H2O_c)
NH4p_pr = Equilibrium({'NH4+': 1}, {'H+': 1, 'NH3': 1}, 10**-9.26)
equilibria = w_autop, NH4p_pr
[(k, init_conc[k]) for k in substance_names]
eqsys = EqSystem(equilibria, subst)
eqsys
Explanation: Let's define some initial concentrations and governing equilibria:
End of explanation
x, sol, sane = eqsys.root(init_conc)
x, sol['success'], sane
Explanation: We can solve the non-linear system of equations by calling the root method (the underlying representation uses pyneqsys):
End of explanation
logx, logsol, sane = eqsys.root(init_conc, NumSys=(NumSysLog,))
logx, logsol['success'], sane
Explanation: This system is quite easy to solve for; if we have convergence problems, we can try to solve a transformed system. As an example we will use NumSysLog:
End of explanation
x - logx
Explanation: In this case they give essentially the same answer:
End of explanation
ny = len(substance_names)
y = sp.symarray('y', ny)
i = sp.symarray('i', ny)
K = Kw, Ka = sp.symbols('K_w K_a')
w_autop.param = Kw
NH4p_pr.param = Ka
ss = sp.symarray('s', ny)
ms = sp.symarray('m', ny)
numsys_log = NumSysLog(eqsys, backend=sp)
f = numsys_log.f(y, list(i)+list(K))
f
numsys_lin = NumSysLin(eqsys, backend=sp)
numsys_lin.f(y, i)
A, ks = eqsys.stoichs_constants(False, backend=sp)
[reduce(mul, [b**e for b, e in zip(y, row)]) for row in A]
from pyneqsys.symbolic import SymbolicSys
subs = list(zip(i, x0)) + [(Kw, 10**-14), (Ka, 10**-9.26)]
numf = [_.subs(subs) for _ in f]
neqs = SymbolicSys(list(y), numf)
neqs.solve([0, 0, 0, 0, 0], solver='scipy')
j = sp.Matrix(1, len(f), lambda _, q: f[q]).jacobian(y)
init_conc_j = {'H+': 1e-10, 'OH-': 1e-7, 'NH4+': 1e-7, 'NH3': 1.0, 'H2O': 55.5}
xj = eqsys.as_per_substance_array(init_conc_j)
jarr = np.array(j.subs(dict(zip(y, xj))).subs({Kw: 1e-14, Ka: 10**-9.26}).subs(
dict(zip(i, xj))))
jarr = np.asarray(jarr, dtype=np.float64)
np.log10(np.linalg.cond(jarr))
j.simplify()
j
eqsys.composition_balance_vectors()
numsys_rref_log = NumSysLog(eqsys, True, True, backend=sp)
numsys_rref_log.f(y, list(i)+list(K))
np.set_printoptions(4, linewidth=120)
scaling = 1e8
for rxn in eqsys.rxns:
rxn.param = rxn.param.subs({Kw: 1e-14, Ka: 10**-9.26})
x, res, sane = eqsys.root(init_conc, rref_equil=True, rref_preserv=True)
x, res['success'], sane
x, res, sane = eqsys.root(init_conc, x0=eqsys.as_per_substance_array(
{'H+': 1e-11, 'OH-': 1e-3, 'NH4+': 1e-3, 'NH3': 1.0, 'H2O': 55.5}))
res['success'], sane
x, res, sane = eqsys.root(init_conc, x0=eqsys.as_per_substance_array(
{'H+': 1.7e-11, 'OH-': 3e-2, 'NH4+': 3e-2, 'NH3': 0.97, 'H2O': 55.5}))
res['success'], sane
init_conc
nc=60
Hp_0 = np.logspace(-3, 0, nc)
def plot_rref(**kwargs):
fig, axes = plt.subplots(2, 2, figsize=(16, 6), subplot_kw=dict(xscale='log', yscale='log'))
return [eqsys.roots(init_conc, Hp_0, 'H+', plot_kwargs={'ax': axes.flat[i]}, rref_equil=e,
rref_preserv=p, **kwargs) for i, (e, p) in enumerate(product(*[[False, True]]*2))]
res_lin = plot_rref(method='lm')
[all(_[2]) for _ in res_lin]
for col_id in range(len(substance_names)):
for i in range(1, 4):
plt.subplot(1, 3, i, xscale='log')
plt.gca().set_yscale('symlog', linthreshy=1e-14)
plt.plot(Hp_0, res_lin[0][0][:, col_id] - res_lin[i][0][:, col_id])
plt.tight_layout()
eqsys.plot_errors(res_lin[0][0], init_conc, Hp_0, 'H+')
init_conc, eqsys.ns
res_log = plot_rref(NumSys=NumSysLog)
eqsys.plot_errors(res_log[0][0], init_conc, Hp_0, 'H+')
res_log_lin = plot_rref(NumSys=(NumSysLog, NumSysLin))
eqsys.plot_errors(res_log_lin[0][0], init_conc, Hp_0, 'H+')
from chempy.equilibria import NumSysSquare
res_log_sq = plot_rref(NumSys=(NumSysLog, NumSysSquare))
eqsys.plot_errors(res_log_sq[0][0], init_conc, Hp_0, 'H+')
res_sq = plot_rref(NumSys=(NumSysSquare,), method='lm')
eqsys.plot_errors(res_sq[0][0], init_conc, Hp_0, 'H+')
x, res, sane = eqsys.root(x0, NumSys=NumSysLog, rref_equil=True, rref_preserv=True)
x, res['success'], sane
x, res, sane = eqsys.root(x, NumSys=NumSysLin, rref_equil=True, rref_preserv=True)
x, res['success'], sane
Explanation: We can create symbolic representations of these systems of equations:
End of explanation |
11,899 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Automatically 'discovering' the heat equation with reverse finite differencing
<a href="https
Step3: Okay, so now that we have our data to work on, we need to form a system of equations $KM=0$ to solve for the coefficients $M$
Step6: This method tells us that our data fits the equation
$$
u_t = 0u_x + 0.1u_{xx} + 0uu_x + 0,
$$
or
$$
u_t = \nu u_{xx},
$$
which is what we expected!
Would this method work with experimental data?
So far, we've been using analytical partial derivatives from the exact solution. However, imagine we had various temperature probes on a physical model of a heat conducting system, which were sampled in time. We can sample the analytical solution at specified points, but add some Gaussian noise to approximate a sensor in the real world. Can we still unveil the heat equation with this added noise? To compute the derivatives, we'll use finite differences, which would imply we may need to put multiple probes close to each other at a given location to resolve the spatial derivatives, but for now we will assume we can specify our spatial and temporal resolution. | Python Code:
import numpy as np
from numpy.testing import assert_almost_equal
# Specify diffusion coefficient
nu = 0.1
def analytical_soln(xmax=1.0, tmax=0.2, nx=1000, nt=1000):
"""Compute analytical solution."""
x = np.linspace(0, xmax, num=nx)
t = np.linspace(0, tmax, num=nt)
u = np.zeros((len(t), len(x))) # rows are timesteps
for n, t_ind in enumerate(t):
u[n, :] = np.sin(4*np.pi*x)*np.exp(-16*np.pi**2*nu*t_ind)
return u, x, t
u, x, t = analytical_soln()
# Create vectors for analytical partial derivatives
u_t = np.zeros(u.shape)
u_x = np.zeros(u.shape)
u_xx = np.zeros(u.shape)
for n in range(len(t)):
u_t[n, :] = -16*np.pi**2*nu*np.sin(4*np.pi*x)*np.exp(-16*np.pi**2*nu*t[n])
u_x[n, :] = 4*np.pi*np.cos(4*np.pi*x)*np.exp(-16*np.pi**2*nu*t[n])
u_xx[n, :] = -16*np.pi**2*np.sin(4*np.pi*x)*np.exp(-16*np.pi**2*nu*t[n])
# Compute the nonlinear convective term (that we know should have no effect)
uu_x = u*u_x
# Check to make sure some random point satisfies the PDE
i, j = 15, 21
assert_almost_equal(u_t[i, j] - nu*u_xx[i, j], 0.0)
Explanation: Automatically 'discovering' the heat equation with reverse finite differencing
<a href="https://zenodo.org/badge/latestdoi/72929661"><img src="https://zenodo.org/badge/72929661.svg" alt="DOI"></a>
<center>
P. Bachant | <a href="https://github.com/petebachant/reverse-fd-heat-eq">View on GitHub</a>
</center>
To date, most higher level physical knowledge has been derived from simpler first principles. However, these techniques have still not given a fully predictive understanding of turbulent flow. We have the exact deterministic governing equations (Navier--Stokes), but can't solve them for most useful cases. As such, it could be of interest to find a different quantity, other than momentum, for which we can find some conservation or transport equation, that will be feasible to solve, i.e., does not have the scale resolution requirements of a direct numerical simulation. Or, perhaps, we may want to follow one of the current turbulence modeling paradigms but uncover an equation to relate the unresolved to the resolved information, e.g., Reynolds stresses to mean velocity.
As a first step towards this goal, we want to see if the process of deriving physical laws or theories can be automated. For example, can we posit a generic homogeneous PDE, such as
$$
A \frac{\partial u}{\partial t} = B \frac{\partial u}{\partial x} + C \frac{\partial^2 u}{\partial x^2} ...,
$$
solve for the coefficients $[A, B, C,...]$, eliminate the terms for which coefficients are small, then be left with an analytically or computationally feasible equation?
For a first example, here we look at the 1-D heat or diffusion equation, for which an analytical solution can be obtained.
We will use two terms we know apply (the time derivative $u_t$ and second spatial derivative $u_{xx}$), and three that don't (first order linear transport $u_x$, nonlinear advection $uu_x$, and a constant offset $E$):
$$
A u_t = B u_x + C u_{xx} + D uu_x + E,
$$
with the initial condition
$$
u(x, 0) = \sin(4\pi x).
$$
We will use the analytical solution as our dataset on which we want to uncover the governing PDE:
$$
u(x,t) = \sin(4\pi x) e^{-16\pi^2 \nu t},
$$
from which our method should be able to determine that $A=1$, $C=\nu = 0.1$, and $B=D=E=0$.
First, we will use the analytical solution and analytical partial derivatives.
End of explanation
# Create K matrix from the input data using random indices
nterms = 5 # total number of terms in the equation
ni, nj = u.shape
K = np.zeros((5, 5))
# Pick data from different times and locations for each row
for n in range(nterms):
i = int(np.random.rand()*(ni - 1)) # time index
j = int(np.random.rand()*(nj - 1)) # space index
K[n, 0] = u_t[i, j]
K[n, 1] = -u_x[i, j]
K[n, 2] = -u_xx[i, j]
K[n, 3] = -uu_x[i, j]
K[n, 4] = -1.0
# We can't solve this matrix because it's singular, but we can try singular value decomposition
# I found this solution somewhere on Stack Overflow but can't find the URL now; sorry!
def null(A, eps=1e-15):
"""Find the null space of a matrix using singular value decomposition."""
u, s, vh = np.linalg.svd(A)
null_space = np.compress(s <= eps, vh, axis=0)
return null_space.T
M = null(K, eps=1e-5)
coeffs = (M.T/M[0])[0]
for letter, coeff in zip("ABCDE", coeffs):
print(letter, "=", np.round(coeff, decimals=5))
Explanation: Okay, so now that we have our data to work on, we need to form a system of equations $KM=0$ to solve for the coefficients $M$:
$$
\left[
\begin{array}{ccccc}
{u_t}_0 & -{u_x}_0 & -{u_{xx}}_0 & -{uu_x}_0 & -1 \\
{u_t}_1 & -{u_x}_1 & -{u_{xx}}_1 & -{uu_x}_1 & -1 \\
{u_t}_2 & -{u_x}_2 & -{u_{xx}}_2 & -{uu_x}_2 & -1 \\
{u_t}_3 & -{u_x}_3 & -{u_{xx}}_3 & -{uu_x}_3 & -1 \\
{u_t}_4 & -{u_x}_4 & -{u_{xx}}_4 & -{uu_x}_4 & -1
\end{array}
\right]
\left[
\begin{array}{c}
A \\
B \\
C \\
D \\
E
\end{array}
\right] =
\left[
\begin{array}{c}
0 \\
0 \\
0 \\
0 \\
0
\end{array}
\right],
$$
for which each of the subscript indices $[0...4]$ corresponds to random points in space and time.
End of explanation
# Create a helper function to compute derivatives with the finite difference method
def diff(dept_var, indept_var, index=None, n_deriv=1):
"""Compute the derivative of the dependent variable w.r.t. the independent at the
specified array index. Uses NumPy's `gradient` function, which uses second order
central differences if possible, and can use second order forward or backward
differences. Input values must be evenly spaced.
Parameters
----------
dept_var : array of floats
indept_var : array of floats
index : int
    Index at which to return the numerical derivative
n_deriv : int
    Order of the derivative (not the numerical scheme)
"""
# Rename input variables
u = dept_var.copy()
x = indept_var.copy()
dx = x[1] - x[0]
for n in range(n_deriv):
dudx = np.gradient(u, dx, edge_order=2)
u = dudx.copy()
if index is not None:
return dudx[index]
else:
return dudx
# Test this with a sine
x = np.linspace(0, 6.28, num=1000)
u = np.sin(x)
dudx = diff(u, x)
d2udx2 = diff(u, x, n_deriv=2)
assert_almost_equal(dudx, np.cos(x), decimal=5)
assert_almost_equal(d2udx2, -u, decimal=2)
def detect_coeffs(noise_amplitude=0.0):
"""Detect coefficients from analytical solution."""
u, x, t = analytical_soln(nx=500, nt=500)
# Add Gaussian noise to u
u += np.random.randn(*u.shape) * noise_amplitude
nterms = 5
ni, nj = u.shape
K = np.zeros((5, 5))
for n in range(nterms):
i = int(np.random.rand()*(ni - 1))
j = int(np.random.rand()*(nj - 1))
u_t = diff(u[:, j], t, index=i)
u_x = diff(u[i, :], x, index=j)
u_xx = diff(u[i, :], x, index=j, n_deriv=2)
uu_x = u[i, j] * u_x
K[n, 0] = u_t
K[n, 1] = -u_x
K[n, 2] = -u_xx
K[n, 3] = -uu_x
K[n, 4] = -1.0
M = null(K, eps=1e-3)
coeffs = (M.T/M[0])[0]
for letter, coeff in zip("ABCDE", coeffs):
print(letter, "=", np.round(coeff, decimals=3))
for noise_level in np.logspace(-10, -6, num=5):
print("Coefficients for noise amplitude:", noise_level)
try:
detect_coeffs(noise_amplitude=noise_level)
except ValueError:
print("FAILED")
print("")
Explanation: This method tells us that our data fits the equation
$$
u_t = 0u_x + 0.1u_{xx} + 0uu_x + 0,
$$
or
$$
u_t = \nu u_{xx},
$$
which is what we expected!
Would this method work with experimental data?
So far, we've been using analytical partial derivatives from the exact solution. However, imagine we had various temperature probes on a physical model of a heat conducting system, which were sampled in time. We can sample the analytical solution at specified points, but add some Gaussian noise to approximate a sensor in the real world. Can we still unveil the heat equation with this added noise? To compute the derivatives, we'll use finite differences, which would imply we may need to put multiple probes close to each other at a given location to resolve the spatial derivatives, but for now we will assume we can specify our spatial and temporal resolution.
End of explanation |