markdown (stringlengths 0–37k) | code (stringlengths 1–33.3k) | path (stringlengths 8–215) | repo_name (stringlengths 6–77) | license (stringclasses, 15 values)
---|---|---|---|---|
We now create a different simulation from a snapshot in the SimulationArchive halfway through: | sim2, rebx = sa[500]
sim2.t | ipython_examples/SimulationArchive.ipynb | dtamayo/reboundx | gpl-3.0 |
We now integrate our loaded simulation to the same time as above (1.e6): | sim2.integrate(1.e6) | ipython_examples/SimulationArchive.ipynb | dtamayo/reboundx | gpl-3.0 |
and see that we obtain exactly the same particle positions in the original and reloaded simulations: | sim.status()
sim2.status() | ipython_examples/SimulationArchive.ipynb | dtamayo/reboundx | gpl-3.0 |
Using interact for animation with data
A soliton is a constant-velocity wave that maintains its shape as it propagates. Solitons arise from non-linear wave equations, such as the Korteweg–de Vries equation, which has the following analytical solution:
$$
\phi(x,t) = \frac{1}{2} c \mathrm{sech}^2 \left[ \frac{\sqrt{c}}{2} \left(x - ct - a \right) \right]
$$
The constant c is the velocity and the constant a is the initial location of the soliton.
Define a soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the position x or the time t is a NumPy array, in which case it should return a NumPy array itself. | def soliton(x, t, c, a):
"""Return phi(x, t) for a soliton wave with constants c and a."""
return 0.5*c*(1/(np.cosh((c**(1/2)/2)*(x-c*t-a))**2))
assert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5])) | assignments/assignment05/InteractEx03.ipynb | jegibbs/phys202-2015-work | mit |
Compute a 2d NumPy array called phi:
It should have a dtype of float.
It should have a shape of (xpoints, tpoints).
phi[i,j] should contain the value $\phi(x[i],t[j])$. | assert phi.shape==(xpoints, tpoints)
assert phi.ndim==2
assert phi.dtype==np.dtype(float)
assert phi[0,0]==soliton(x[0],t[0],c,a) | assignments/assignment05/InteractEx03.ipynb | jegibbs/phys202-2015-work | mit |
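One way to build such an array is to broadcast x against t so that soliton is evaluated on the whole grid at once. The sketch below assumes the soliton function from the previous cell and invents example values for x, t, c, a, xpoints and tpoints, since those are defined elsewhere in the notebook:

```python
import numpy as np

# Assumed example grid and constants (the real notebook defines its own).
xpoints, tpoints = 100, 100
x = np.linspace(-10.0, 10.0, xpoints)
t = np.linspace(0.0, 10.0, tpoints)
c, a = 1.0, 0.0

# Broadcasting a column of x against a row of t gives shape (xpoints, tpoints),
# so phi[i, j] == soliton(x[i], t[j], c, a) and the dtype is float64.
phi = soliton(x[:, np.newaxis], t[np.newaxis, :], c, a)
```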
Write a plot_soliton_data(i) function that plots the soliton wave $\phi(x, t[i])$. Customize your plot to make it effective and beautiful. | def plot_soliton_data(i=0):
plot_soliton_data(0)
assert True # leave this for grading the plot_soliton_data function | assignments/assignment05/InteractEx03.ipynb | jegibbs/phys202-2015-work | mit |
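The body of plot_soliton_data is not shown in the cell above; a minimal sketch of what such a function might look like, assuming x, t and phi from the previous cells and matplotlib available, is:

```python
import matplotlib.pyplot as plt

def plot_soliton_data(i=0):
    """Plot the soliton wave phi(x, t[i]) at a fixed time index i (sketch)."""
    plt.plot(x, phi[:, i], lw=2)
    plt.xlabel('$x$')
    plt.ylabel(r'$\phi(x, t[{}])$'.format(i))
    plt.title('Soliton wave at t = {:.2f}'.format(t[i]))
    plt.grid(True)
```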
variables
ATL-COM-PHYS-2017-079 (table 8, page 46)
|variable |type |n-tuple name |description |region >= 6j |region 5j |
|---|---|---|---|---|---|
|${\Delta R^{\text{avg}}_{bb}}$ |general kinematic |dRbb_avg_Sort4 |average ${\Delta R}$ for all ${b}$-tagged jet pairs |yes |yes |
|${\Delta R^{\text{max}\,p_{T}}_{bb}}$ |general kinematic |dRbb_MaxPt_Sort4 |${\Delta R}$ between the two ${b}$-tagged jets with the largest vector sum ${p_{T}}$ |yes |- |
|${\Delta\eta^{\text{max}\,\Delta\eta}_{jj}}$ |general kinematic |dEtajj_MaxdEta |maximum ${\Delta\eta}$ between any two jets |yes |yes |
|${m^{\text{min}\,\Delta R}_{bb}}$ |general kinematic |Mbb_MindR_Sort4 |mass of the combination of the two ${b}$-tagged jets with the smallest ${\Delta R}$ |yes |- |
|${m^{\text{min}\,\Delta R}_{jj}}$ |general kinematic |Mjj_MindR |mass of the combination of any two jets with the smallest ${\Delta R}$ |- |yes |
|${N^{\text{Higgs}}_{30}}$ |general kinematic |nHiggsbb30_Sort4 |number of ${b}$-jet pairs with invariant mass within 30 GeV of the Higgs boson mass |yes |yes |
|${H^{\text{had}}_{T}}$ |general kinematic |HT_jets? |scalar sum of jet ${p_{T}}$ |- |yes |
|${\Delta R^{\text{min}\,\Delta R}_{\text{lep}-bb}}$ |general kinematic |dRlepbb_MindR_Sort4 |${\Delta R}$ between the lepton and the combination of the two ${b}$-tagged jets with the smallest ${\Delta R}$ |- |yes |
|aplanarity |general kinematic |Aplanarity_jets |${1.5\lambda_{2}}$, where ${\lambda_{2}}$ is the second eigenvalue of the momentum tensor built with all jets |yes |yes |
|${H_{1}}$ |general kinematic |H1_all |second Fox-Wolfram moment computed using all jets and the lepton |yes |yes |
|BDT |reconstruction BDT output |TTHReco_best_TTHReco |BDT output |yes |yes |
|${m_{H}}$ |reconstruction BDT output |TTHReco_best_Higgs_mass |Higgs boson mass |yes |yes |
|${m_{H,b_{\text{lep top}}}}$ |reconstruction BDT output |TTHReco_best_Higgsbleptop_mass |Higgs boson mass and ${b}$-jet from leptonic ${t}$ |yes |- |
|${\Delta R_{\text{Higgs }bb}}$ |reconstruction BDT output |TTHReco_best_bbHiggs_dR |${\Delta R}$ between ${b}$-jets from Higgs boson |yes |yes |
|${\Delta R_{H,t\bar{t}}}$ |reconstruction BDT output |TTHReco_withH_best_Higgsttbar_dR |${\Delta R}$ between Higgs boson and ${t\bar{t}}$ system |yes |yes |
|${\Delta R_{H,\text{lep top}}}$ |reconstruction BDT output |TTHReco_best_Higgsleptop_dR |${\Delta R}$ between Higgs boson and leptonic ${t}$ |yes |- |
|${\Delta R_{H,b_{\text{had top}}}}$ |reconstruction BDT output |TTHReco_best_b1Higgsbhadtop_dR |${\Delta R}$ between Higgs boson and ${b}$-jet from hadronic ${t}$ |- |yes* |
|D |likelihood calculation |LHD_Discriminant |likelihood discriminant |yes |yes |
|${\text{MEM}_{D1}}$ |matrix method | |matrix method |yes |- |
|${w^{H}_{b}}$ |${b}$-tagging |? |sum of binned ${b}$-tagging weights of jets from best Higgs candidate |yes |yes |
|${B_{j^{3}}}$ |${b}$-tagging |? |third jet binned ${b}$-tagging weight (sorted by weight) |yes |yes |
|${B_{j^{4}}}$ |${b}$-tagging |? |fourth jet binned ${b}$-tagging weight (sorted by weight) |yes |yes |
|${B_{j^{5}}}$ |${b}$-tagging |? |fifth jet binned ${b}$-tagging weight (sorted by weight) |yes |yes | | variables = [
"nElectrons",
"nMuons",
"nJets",
"nBTags_70",
"dRbb_avg_Sort4",
"dRbb_MaxPt_Sort4",
"dEtajj_MaxdEta",
"Mbb_MindR_Sort4",
"Mjj_MindR",
"nHiggsbb30_Sort4",
"HT_jets",
"dRlepbb_MindR_Sort4",
"Aplanarity_jets",
"H1_all",
"TTHReco_best_TTHReco",
"TTHReco_best_Higgs_mass",
"TTHReco_best_Higgsbleptop_mass",
"TTHReco_best_bbHiggs_dR",
"TTHReco_withH_best_Higgsttbar_dR",
"TTHReco_best_Higgsleptop_dR",
"TTHReco_best_b1Higgsbhadtop_dR",
"LHD_Discriminant"
] | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
read | filenames_ttH = ["ttH_group.phys-higgs.11468583._000005.out.root"]
filenames_ttbb = ["ttbb_group.phys-higgs.11468624._000005.out.root"]
ttH = root_pandas.read_root(filenames_ttH, "nominal_Loose", columns = variables)
ttH["target"] = 1
ttH.head()
ttbb = root_pandas.read_root(filenames_ttbb, "nominal_Loose", columns = variables)
ttbb["target"] = 0
ttbb.head()
df = pd.concat([ttH, ttbb])
df.head() | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
characteristics | rows = []
for variable in df.columns.values:
rows.append({
"name": variable,
"maximum": df[variable].max(),
"minimum": df[variable].min(),
"mean": df[variable].mean(),
"median": df[variable].median(),
"std": df[variable].std()
})
_df = pd.DataFrame(rows)[["name", "maximum", "minimum", "mean", "std", "median"]]
_df | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
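Essentially the same overview can be produced with pandas' built-in describe(); a more concise equivalent (sketch):

```python
# describe() already reports count, mean, std, min, max and quartiles per column;
# transpose and rename to match the hand-built table above.
summary = df.describe().T[["max", "min", "mean", "std", "50%"]]
summary = summary.rename(columns={"max": "maximum", "min": "minimum", "50%": "median"})
summary
```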
imputation | df["TTHReco_best_TTHReco"].replace( -9, -1, inplace = True)
df["TTHReco_best_Higgs_mass"].replace( -9, -1, inplace = True)
df["TTHReco_best_Higgsbleptop_mass"].replace( -9, -1, inplace = True)
df["TTHReco_best_bbHiggs_dR"].replace( -9, -1, inplace = True)
df["TTHReco_withH_best_Higgsttbar_dR"].replace(-9, -1, inplace = True)
df["TTHReco_best_Higgsleptop_dR"].replace( -9, -1, inplace = True)
df["TTHReco_best_b1Higgsbhadtop_dR"].replace( -9, -1, inplace = True)
df["LHD_Discriminant"].replace( -9, -1, inplace = True) | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
histograms | plt.rcParams["figure.figsize"] = (17, 14)
df.hist(); | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
correlations
correlations ${t\bar{t}H}$ | sns.heatmap(df.query("target == 1").drop("target", axis = 1).corr()); | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
correlations ${t\bar{t}b\bar{b}}$ | sns.heatmap(df.query("target == 0").drop("target", axis = 1).corr()); | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
ratio of correlations of ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ | _df = df.query("target == 1").drop("target", axis = 1).corr() / df.query("target == 0").drop("target", axis = 1).corr()
sns.heatmap(_df); | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
clustered correlations | plot = sns.clustermap(df.corr())
plt.setp(plot.ax_heatmap.get_yticklabels(), rotation = 0); | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
strongest correlations and anticorrelations for discrimination of ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ | df.corr()["target"].sort_values(ascending = False).to_frame()[1:] | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
strongest absolute correlations for discrimination of ${t\bar{t}H}$ and ${t\bar{t}b\bar{b}}$ | df.corr()["target"].abs().sort_values(ascending = False).to_frame()[1:]
_df = df.corr()["target"].abs().sort_values(ascending = False).to_frame()[1:]
_df.plot(kind = "barh", legend = "False"); | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
clustered correlations of 10 strongest absolute correlations | names = df.corr()["target"].abs().sort_values(ascending = False)[1:11].index.values
plot = sns.clustermap(df[names].corr())
plt.setp(plot.ax_heatmap.get_yticklabels(), rotation = 0); | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
rescale | variables_rescale = [variable for variable in list(df.columns) if variable != "target"]
scaler = MinMaxScaler()
df[variables_rescale] = scaler.fit_transform(df[variables_rescale])
df.head() | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
save | df.to_csv("ttHbb_data.csv", index = False) | ttHbb_variables_preparation.ipynb | wdbm/abstraction | gpl-3.0 |
Start by finding structures using online databases (or cached local results). This uses an InChI for ethane to seed the molecule collection. | mol = oc.find_structure('InChI=1S/C2H6/c1-2/h1-2H3')
mol.structure.show() | girder/notebooks/notebooks/notebooks/NWChem.ipynb | OpenChemistry/mongochemserver | bsd-3-clause |
Set up the calculation by specifying the name of the Docker image that will be used and by providing input parameters that the specific image understands | image_name = 'openchemistry/nwchem:6.6'
input_parameters = {
'theory': 'dft',
'functional': 'b3lyp',
'basis': '6-31g'
} | girder/notebooks/notebooks/notebooks/NWChem.ipynb | OpenChemistry/mongochemserver | bsd-3-clause |
Geometry Optimization Calculation
The mol.optimize() method is a specialized helper function that adds 'task': 'optimize' to the input_parameters dictionary,
and then calls the generic mol.calculate() method internally. | result = mol.optimize(image_name, input_parameters)
result.orbitals.show(mo='lumo', iso=0.005) | girder/notebooks/notebooks/notebooks/NWChem.ipynb | OpenChemistry/mongochemserver | bsd-3-clause |
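Based on the description above, the helper call is roughly equivalent to invoking the generic method directly; a sketch (the exact signature of mol.calculate() is assumed here, not taken from the documentation):

```python
# Hypothetical equivalent of mol.optimize(image_name, input_parameters):
optimize_parameters = dict(input_parameters, task='optimize')
result = mol.calculate(image_name, optimize_parameters)
```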
Single Point Energy Calculation
The mol.energy() method is a specialized helper function that adds 'task': 'energy' to the input_parameters dictionary,
and then calls the generic mol.calculate() method internally. | result = mol.energy(image_name, input_parameters)
result.orbitals.show(mo='homo', iso=0.005) | girder/notebooks/notebooks/notebooks/NWChem.ipynb | OpenChemistry/mongochemserver | bsd-3-clause |
Lecture 11. Going fast: the Barnes-Hut algorithm
Previous lecture
Discretization of the integral equations, Galerkin methods
Computation of singular integrals
Idea of the Barnes-Hut method
Today's lecture
Barnes-Hut in detail
The road to the FMM
Algebraic versions of the FMM/Fast Multipole
The discretization of the integral equation leads to dense matrices.
The main question is how to compute the matrix-by-vector product,
i.e. the summation of the form:
$$\sum_{j=1}^M A_{ij} q_j = V_i, \quad i = 1, \ldots, N.$$
The matrix $A$ is dense, i.e. none of its elements can be omitted. The complexity of the direct computation is $\mathcal{O}(N^2)$.
Can we make it faster?
The simplest case is the computation of the potentials from the system of charges
$$V_i = \sum_{j} \frac{q_j}{\Vert r_i - r_j \Vert}$$
This summation appears in:
Modelling of large systems of charges
Astronomy (where instead of charges $q_j$ we have masses, i.e. stars)
It is called <font color='red'> the N-body problem </font>.
There is no problem with memory, since the direct summation needs only two nested loops. | import numpy as np
import math
from numba import jit
N = 10000
x = np.random.randn(N, 2);
y = np.random.randn(N, 2);
charges = np.ones(N)
res = np.zeros(N)
@jit
def compute_nbody_direct(N, x, y, charges, res):
    for i in range(N):
        res[i] = 0.0
        for j in range(N):
            # distance between target x[i] and source y[j]
            dist = (x[i, 0] - y[j, 0]) ** 2 + (x[i, 1] - y[j, 1]) ** 2
            dist = math.sqrt(dist)
            res[i] += charges[j] / dist
%timeit compute_nbody_direct(N, x, y, charges, res)
| lecture-11.ipynb | oseledets/fastpde | cc0-1.0 |
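For comparison, the same $\mathcal{O}(N^2)$ sum can be written without explicit loops using NumPy broadcasting (a sketch; it materialises an $N \times N$ distance matrix, so memory, not time, becomes the limiting factor for large $N$):

```python
def compute_nbody_direct_numpy(x, y, charges):
    # Pairwise differences via broadcasting: shape (N, N, 2).
    diff = x[:, None, :] - y[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))      # (N, N) distance matrix
    return (charges[None, :] / dist).sum(axis=1)  # potential at each x[i]

res_numpy = compute_nbody_direct_numpy(x, y, charges)
```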
Question
What is the typical size of a particle system?
Millennium run
One of the most famous N-body computations is the Millennium run
More than 10 billion particles ($2000^3$)
$>$ 1 month of computations, 25 Terabytes of storage
Each "particle" represents approximately a billion solar masses of dark matter
Study how matter is distributed through the Universe (cosmology) | from IPython.display import YouTubeVideo
YouTubeVideo('UC5pDPY5Nz4') | lecture-11.ipynb | oseledets/fastpde | cc0-1.0 |
Smoothed particle hydrodynamics
The particle systems can be used to model a lot of things.
For nice examples, see the website of Ron Fedkiw | from IPython.display import YouTubeVideo
YouTubeVideo('6bdIHFTfTdU') | lecture-11.ipynb | oseledets/fastpde | cc0-1.0 |
Applications
The N-body problem arises in many different problems with long-range interactions:
- Cosmology (interacting masses)
- Electrostatics (interacting charges)
- Molecular dynamics (more complicated interactions, maybe even 3-4 body terms).
- Particle modelling (smoothed particle hydrodynamics)
Fast computation
$$
V_i = \sum_{j} \frac{q_j}{\Vert x_i - y_j \Vert}
$$
Direct computation takes $\mathcal{O}(N^2)$ operations.
How to compute it fast?
The core idea: Barnes, Hut (Nature, 1986)
Use clustering of particles!
Idea on one slide
The idea was simple:
If a charge is far from a cluster of sources, the whole cluster is seen as one big "particle".
<img src="earth-andromeda.jpeg" width = 70%>
Barnes-Hut
$$\sum_j q_j F(x, y_j) \approx Q F(x, y_C)$$
$$Q = \sum_j q_j, \quad y_C = \frac{1}{J} \sum_{j} y_j$$
To compute the interaction, it is sufficient to replace the cluster by its center of mass and its total charge (mass)!
The idea of Barnes and Hut was to split the <font color='red'> sources </font> into big blocks using the <font color='red'> cluster tree </font>
<img width=90% src='clustertree.png'>
The algorithm is recursive.
Let $\mathcal{T}$ be the tree, and $x$ is the point where we need to
compute the potential.
Set $N$ to the <font color='red'> root node </font>
If $x$ and $N$ <font color='red'> are separated </font>, then set $V(x) = Q\, F(x, y_{\mathrm{center}})$
If $x$ and $N$ are not separated, compute $V(x) = \sum_{C \in
\mathrm{sons}(N)} V(C, x)$ <font color='red'> recursion </font>
The complexity is $\mathcal{O}(\log N)$ for 1 point!
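A minimal Python sketch of this recursion (the node layout — attributes size, center, total_charge, children, points, charges — and the opening criterion size/dist < theta are assumptions for illustration; they match the quadtree sketched in the next section):

```python
import math

def potential(x, node, theta=0.5):
    """Barnes-Hut potential at point x from the sources stored in node (sketch)."""
    dx, dy = x[0] - node.center[0], x[1] - node.center[1]
    dist = math.sqrt(dx * dx + dy * dy)
    # "Separated": the node's box looks small from x -> use the monopole approximation.
    if node.size < theta * dist:
        return node.total_charge / dist
    # Leaf node: sum over the individual sources directly.
    if not node.children:
        return sum(q / math.hypot(x[0] - y[0], x[1] - y[1])
                   for q, y in zip(node.charges, node.points))
    # Otherwise recurse into the children.
    return sum(potential(x, child, theta) for child in node.children)
```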
Trees
There are many options for the tree construction.
Quadtree/Octree
KD-tree
Recursive inertial bisection
Octree
The simplest one: the **quadtree/octree**, where you split the square into 4 squares (the cube into 8 cubes) and repeat until the number of points in a box is less than a given parameter.
It leads to an unbalanced tree; adding points is simple (but can unbalance the tree further).
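A sketch of such a construction in plain Python (no attempt at efficiency; the node layout matches the evaluation sketch above, with total_charge and center left to be filled by a separate bottom-to-top pass, see the complexity notes below):

```python
class Node:
    """Quadtree node (sketch): a square box of side `size` centred at `box_center`."""
    def __init__(self, points, charges, box_center, size, max_points=8):
        self.points, self.charges = points, charges
        self.box_center, self.size = box_center, size
        self.children = []
        self.total_charge = 0.0      # filled later, bottom-to-top
        self.center = box_center     # centre of mass, filled later
        if len(points) > max_points:
            h = size / 2.0
            for sx in (-1, 1):
                for sy in (-1, 1):
                    idx = [k for k, p in enumerate(points)
                           if (p[0] >= box_center[0]) == (sx > 0)
                           and (p[1] >= box_center[1]) == (sy > 0)]
                    child_center = (box_center[0] + sx * h / 2.0,
                                    box_center[1] + sy * h / 2.0)
                    self.children.append(Node([points[k] for k in idx],
                                              [charges[k] for k in idx],
                                              child_center, h, max_points))
```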
KD-tree
Another popular choice of the tree is the KD-tree
The construction is simple as well:
Split along x-axis, then y-axis in such a way that the tree is balanced (i.e. the number of points in the left child/right child is similar).
The tree is always balanced, but biased towards the coordinate axes.
Recursive inertial bisection
Compute the center of mass and select a hyperplane such that the sum of squares of distances to it is minimal.
$$\sum_{j} \rho^2(x_j, \Pi) \rightarrow \min.$$
It often gives the best complexity, but adding/removing points can be difficult.
The scheme
You can actually code it from this description!
Construct the cluster tree
Fill the tree with charges
For any point we now can compute the potential in $\mathcal{O}(\log N)$ flops (instead of $\mathcal{O}(N)$).
Notes on the complexity
For each node of the tree, we need to compute its total mass and its center of mass. If we do this naively, with a separate loop over the particles for every node, the complexity of the tree construction will be $\mathcal{O}(N^2)$.
However, it is easy to construct it in a smarter way.
Start from the leaf nodes (which contain only a few particles) and fill them first
Bottom-to-top graph traversal: if we know the charges and centers for the children, we can cheaply compute the total charge/center of mass for the parent node
Now you can actually code this (minor things remaining are the bounding box and separation criteria).
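A sketch of that bottom-to-top pass, reusing the Node class from the quadtree sketch above (total charge and centre of mass of a parent are combined from its children instead of being recomputed from all particles):

```python
def fill_charges(node):
    """Fill total_charge and center for every node, bottom-to-top (sketch)."""
    if not node.children:                        # leaf: use the particles directly
        node.total_charge = sum(node.charges)
        if node.total_charge > 0:
            node.center = tuple(sum(q * p[d] for q, p in zip(node.charges, node.points))
                                / node.total_charge for d in (0, 1))
        return
    for child in node.children:                  # recurse first ...
        fill_charges(child)
    node.total_charge = sum(c.total_charge for c in node.children)
    if node.total_charge > 0:                    # ... then combine the children cheaply
        node.center = tuple(sum(c.total_charge * c.center[d] for c in node.children)
                            / node.total_charge for d in (0, 1))

# Usage (sketch), with x, y and charges as in the direct-summation cell above:
# root = Node([tuple(p) for p in y], list(charges), (0.0, 0.0), 12.0)
# fill_charges(root)
# V = [potential(tuple(p), root) for p in x]
```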
Problems with Barnes-Hut
What are the problems with Barnes-Hut?
Well, several
- The logarithmic term
- Low accuracy: $\varepsilon = 10^{-2}$ is ok, but if we want $\varepsilon=10^{-5}$
we have to use a much larger <font color='red'> separation criterion </font>
Solving problems with Barnes-Hut
Complexity: To avoid the logarithmic term, we need to store two trees, for the sources, and for receivers
Accuracy: instead of the <font color='red'> piecewise-constant approximation </font> inherent in the BH algorithm, use more accurate representations.
Double tree Barnes-Hut
Principal scheme of the Double-tree BH:
Construct two trees for sources & receivers
Fill the tree for sources with charges (bottom-to-top)
Compute the interaction between nodes of the trees
Fill the tree for receivers with potentials (top-to-bottom)
The original BH method has low accuracy, and is based on the expansion
$$f(x, y) \approx f(x_0, y_0)$$
What to do?
Answer: Use higher-order expansions!
$$
f(x + \delta x, y + \delta y) \approx f(x, y) + \sum_{k, l=0}^{p}
\frac{1}{k!\,l!} \left(D_x^{k} D_y^{l} f\right) \delta x^{k} \, \delta y^{l} + \ldots
$$
For the Coulomb interaction $\frac{1}{r}$ we have the multipole expansion
$$
v(R) = \frac{Q}{R} + \frac{1}{R^3} \sum_{\alpha} P_{\alpha} R_{\alpha} + \frac{1}{6R^5} \sum_{\alpha, \beta} Q_{\alpha \beta} (3R_{\alpha} R_{\beta} - \delta_{\alpha \beta}R^2) + \ldots,
$$
where $P_{\alpha}$ is the dipole moment and $Q_{\alpha \beta}$ is the quadrupole moment (this is actually nothing more than a Taylor series expansion).
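A quick numerical sanity check of the first two terms of this expansion (a sketch with made-up sources; $Q$ is the total charge and $P_{\alpha}$ the dipole moment as above):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal((100, 3)) * 0.1        # compact cluster of sources near the origin
q = rng.random(100)                            # positive charges
x = np.array([10.0, 0.0, 0.0])                 # distant evaluation point, R = |x|

direct = np.sum(q / np.linalg.norm(x - y, axis=1))

Q = q.sum()                                    # monopole (total charge)
P = (q[:, None] * y).sum(axis=0)               # dipole moment
R = np.linalg.norm(x)
monopole = Q / R
dipole = monopole + P @ x / R**3
print(direct, monopole, dipole)                # the dipole term already gets very close
```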
Fast multipole method
This combination is very powerful, and
<font color='red' size=6.0> Double tree + multipole expansion $\approx$ the Fast Multipole Method (FMM). </font>
FMM
We will talk about the exact implementation and the complexity issues in the next lecture.
Problems with FMM
FMM has problems:
- It relies on analytic expansions, which may be difficult to obtain for general integral equations
- the higher the order of the expansion, the larger the complexity.
- That is why the algebraic interpretation (or kernel-independent FMM) is of great importance.
FMM hardware
For cosmology this problem is so important that special-purpose hardware, GRAPE (GRAvity PipE), has been built for solving the N-body problem
FMM software
Side note: when you Google "FMM", you will also encounter the fast marching method (even in a scikit package).
Everyone uses their own in-house software, so a good open-source Python package is yet to be written.
This is also a perfect exercise for GPU programming (you can try to take such a project in the App Period, by the way).
Overview of today's lecture
The cluster tree
Barnes-Hut and its problems
Double tree / fast multipole method
Important difference: element evaluation is fast. In integral equations, it is slow.
Next lecture
More detailed overview of the FMM algorithm, along with complexity estimates.
Algebraic interpretation of the FMM
Application of the FMM to the solution of integral equations | from IPython.core.display import HTML
def css_styling():
styles = open("./styles/alex.css", "r").read()
return HTML(styles)
css_styling() | lecture-11.ipynb | oseledets/fastpde | cc0-1.0 |
I used 3 as the generator for this field. For a field defined with the polynomial x^8 + x^4 + x^3 + x + 1, there may be other generators (I can't remember) | generator = ff.GF2int(3)
generator | Generating the exponent and log tables.ipynb | lrq3000/unireedsolomon | mit |
We can enumerate the entire field by repeatedly multiplying by the generator. (The first element is 1 because generator^0 is 1). This becomes our exponent table. | exptable = [ff.GF2int(1), generator]
for _ in range(254): # minus 2 because the first 2 elements are hardcoded
exptable.append(exptable[-1].multiply(generator))
# Turn back to ints for a more compact print representation
print([int(x) for x in exptable]) | Generating the exponent and log tables.ipynb | lrq3000/unireedsolomon | mit |
That's now our exponent table. We can look up the nth element in this list to get generator^n | exptable[5] == generator**5
all(exptable[n] == generator**n for n in range(256))
[int(x) for x in exptable] == [int(x) for x in ff.GF2int_exptable] | Generating the exponent and log tables.ipynb | lrq3000/unireedsolomon | mit |
The log table is the inverse function of the exponent table | logtable = [None for _ in range(256)]
# Ignore the last element of the field because fields wrap back around.
# The log of 1 could be 0 (g^0=1) or it could be 255 (g^255=1)
for i, x in enumerate(exptable[:-1]):
logtable[x] = i
print([int(x) if x is not None else None for x in logtable])
[int(x) if x is not None else None for x in logtable] == [int(x) if x >= 0 else None for x in ff.GF2int_logtable] | Generating the exponent and log tables.ipynb | lrq3000/unireedsolomon | mit |
Using the Metatab Package
The second way to access a package is to use the metatab package. This method requires installing the metatab python package, but has some important advantages: it gives you direct access to package and dataset documentation. You can load any type of metatab package with the open_package() function, but for the highest performance, you should use the CSV package. Opening CSV package loads only the metadata and the resources you need, while using a ZIP or Excel packackage requires downloading the entire package first.
To find the CSV package in a package that is publiched to a CKAN repository, look for a CSV file with the description of "CSV Package Metadata in Metatab format". For the ADOD package, this file is named sandiegocounty.gov-adod-2012-sra-3.csv.
Opening the package returns a Metatab document object. If you display it in Jupyter, the output cell will display the package documentation. | import metatab
doc = metatab.open_package('http://s3.amazonaws.com/library.metatab.org/sandiegocounty.gov-adod-2012-sra-3.csv')
doc | users/eric/Metatab Package Example.ipynb | sandiegodata/age-friendly-communities | mit |
The .resource() method will return one of the resoruces. Displaying it shows the resoruce documentation. | r = doc.resource('adod-prevalence')
r | users/eric/Metatab Package Example.ipynb | sandiegodata/age-friendly-communities | mit |
Once you have a resource, use the .dataframe() method to get a Pandas dataframe. | df = r.dataframe()
df.head() | users/eric/Metatab Package Example.ipynb | sandiegodata/age-friendly-communities | mit |
24 Single-Source Shortest Paths
In a shortest-paths problem,
Given: a weighted, directed graph $G = (V, E)$, with weight function $w : E \to \mathcal{R}$.
path $p = < v_0, v_1, \dotsc, v_k >$, so
$$w(p) = \displaystyle \sum_{i=1}^k w(v_{i-1}, v_i)$$.
we define the shortest-path weight $\delta(u, v)$ from $u$ to $v$ by
\begin{equation}
\delta(u, v) = \begin{cases}
\min{ w(p) : u \overset{p}{\to} v } & \text{if $p$ exist}\
\infty & \text{otherwise}
\end{cases}
\end{equation}
variants:
+ Single-destination shortest-paths problem
+ Single-pair shortest-path problem
+ All-pairs shortest-path problem
Optimal substructure of a shortest path:
subpath of $p$ is a shortest path between two of its internal nodes if $p$ is a shortest path.
Negative-weight edges: whether a negative weight cycle exists?
Cycles:
+ negative-weight cycle: detect and remove
+ positive-weight cycle: auto remove
+ 0-weight cycle: detect and remove
Representing shortest paths:
a "shortest-paths tree": a rooted tree containing a shortest path from the source $s$ to every vertex that is reachable from $s$.
Relaxation:
modify the node's upper bound if detect a shorter path.
```
INITIALIZE-SINGLE-SOURCE(G, s)
for each vertex v in G.V
v.d = infty
v.pi = NIL
s.d = 0
RELAX(u, v, w)
if v.d > u.d + w(u, v)
v.d = u.d + w(u, v)
v.pi = u
```
Properties of shortest paths and relaxtion
+ Triangle inequality
+ Upper-bound property
+ No-path property
+ Convergence property
+ Path-relaxation property
+ Predecessor-subgraph property
24.1 The Bellman-Ford algorithm
it returns a boolean valued indicating whether or not there is a negative-weight cycle that is reachable from the source.
The algorithm relaxes edges, progressively decreasing an estimate $v.d$ on the weight of a shortest path from the source $s$ to each vertex $v \in V$ until it achieves the actual shortest-path weight $\delta(s, v)$.
The Bellman-Fold algorithm runs in time $O(VE)$. | plt.imshow(plt.imread('./res/bellman_ford.png'))
plt.imshow(plt.imread('./res/fig24_4.png')) | Introduction_to_Algorithms/24_Single-Source_Shortest_Paths/note.ipynb | facaiy/book_notes | cc0-1.0 |
24.2 Single-source shortest paths in directed acyclic graphs
By relaxing the edges of a weighted dag (directed acyclic graph) $G = (V, E)$ according to a topological sort of its vertices, we can compute shortest paths from a single source in $O(V + E)$ time. | plt.imshow(plt.imread('./res/dag.png'))
plt.imshow(plt.imread('./res/fig24_5.png')) | Introduction_to_Algorithms/24_Single-Source_Shortest_Paths/note.ipynb | facaiy/book_notes | cc0-1.0 |
interesting application: to determine critical paths in PERT chart analysis.
24.3 Dijkstra's algorithm (greedy strategy)
weighted, directed graph $G = (V, E)$ and all $w(u, v) \geq 0$.
Dijkstra's algorithm maintains a set $S$ of vertices whose final shortest-path weights from the source $s$ have already been determined.
we use a min-priority queue $Q$ of vertices, keyed by their $d$ values. | plt.imshow(plt.imread('./res/dijkstra.png'))
plt.imshow(plt.imread('./res/fig24_6.png')) | Introduction_to_Algorithms/24_Single-Source_Shortest_Paths/note.ipynb | facaiy/book_notes | cc0-1.0 |
24.4 Difference constraints and shortest paths
Linear programming
Systems of difference constraints
In a system of difference constraints, each row of the linear-programming matrix $A$ contains one 1 and one -1, and all other entries of $A$ are 0. $\to$ the form $x_j - x_i \leq b_k$. | plt.imshow(plt.imread('./res/inequ.png')) | Introduction_to_Algorithms/24_Single-Source_Shortest_Paths/note.ipynb | facaiy/book_notes | cc0-1.0 |
Constraint graphs | plt.imshow(plt.imread('./res/fig24_8.png')) | Introduction_to_Algorithms/24_Single-Source_Shortest_Paths/note.ipynb | facaiy/book_notes | cc0-1.0 |
Compute statistic
To use an algorithm optimized for spatio-temporal clustering, we
just pass the spatial adjacency matrix (instead of spatio-temporal) | print('Computing adjacency.')
adjacency = spatial_src_adjacency(src)
# Note that X needs to be a list of multi-dimensional array of shape
# samples (subjects_k) × time × space, so we permute dimensions
X1 = np.transpose(X1, [2, 1, 0])
X2 = np.transpose(X2, [2, 1, 0])
X = [X1, X2]
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation,
# and use a very low number of permutations for the same reason.
n_permutations = 50
p_threshold = 0.001
f_threshold = stats.distributions.f.ppf(1. - p_threshold / 2.,
n_subjects1 - 1, n_subjects2 - 1)
print('Clustering.')
F_obs, clusters, cluster_p_values, H0 = clu =\
spatio_temporal_cluster_test(
X, adjacency=adjacency, n_jobs=None, n_permutations=n_permutations,
threshold=f_threshold, buffer_size=None)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0] | dev/_downloads/cfbef36033f8d33f28c4fe2cfa35314a/30_cluster_ftest_spatiotemporal.ipynb | mne-tools/mne-tools.github.io | bsd-3-clause |
Log posterior function
The log posterior function is the workhorse of the analysis. I implement it as a class that stores the observation data and the priors, contains the methods to calculate the model and evaluate the log posterior probability density, and encapsulates the optimisation and MCMC sampling routines. | class LPFunction:
def __init__(self, name: str, times: ndarray = None, fluxes: ndarray = None):
self.tm = QuadraticModel(klims=(0.05, 0.25), nk=512, nz=512)
# LPF name
# --------
self.name = name
# Declare high-level objects
# --------------------------
self.ps = None # Parametrisation
self.de = None # Differential evolution optimiser
self.sampler = None # MCMC sampler
# Initialize data
# ---------------
self.times = asarray(times)
self.fluxes = asarray(fluxes)
self.tm.set_data(self.times)
# Define the parametrisation
# --------------------------
self.ps = ParameterSet([
GParameter('tc', 'zero_epoch', 'd', NP(0.0, 0.1), (-inf, inf)),
GParameter('pr', 'period', 'd', NP(1.0, 1e-5), (0, inf)),
GParameter('rho', 'stellar_density', 'g/cm^3', UP(0.1, 25.0), (0, inf)),
GParameter('b', 'impact_parameter', 'R_s', UP(0.0, 1.0), (0, 1)),
GParameter('k2', 'area_ratio', 'A_s', UP(0.05**2, 0.25**2), (0.05**2, 0.25**2)),
GParameter('q1', 'q1_coefficient', '', UP(0, 1), bounds=(0, 1)),
GParameter('q2', 'q2_coefficient', '', UP(0, 1), bounds=(0, 1)),
GParameter('loge', 'log10_error', '', UP(-4, 0), bounds=(-4, 0))])
self.ps.freeze()
def create_pv_population(self, npop=50):
return self.ps.sample_from_prior(npop)
def baseline(self, pv):
"""Multiplicative baseline"""
return 1.
def transit_model(self, pv, copy=True):
pv = atleast_2d(pv)
# Map from sampling parametrisation to the transit model parametrisation
# ----------------------------------------------------------------------
k = sqrt(pv[:, 4]) # Radius ratio
tc = pv[:, 0] # Zero epoch
p = pv[:, 1] # Orbital period
sa = as_from_rhop(pv[:, 2], p) # Scaled semi-major axis
i = i_from_ba(pv[:, 3], sa) # Orbital inclination
# Map the limb darkening
# ----------------------
ldc = zeros((pv.shape[0],2))
a, b = sqrt(pv[:,5]), 2.*pv[:,6]
ldc[:,0] = a * b
ldc[:,1] = a * (1. - b)
return squeeze(self.tm.evaluate(k, ldc, tc, p, sa, i))
def flux_model(self, pv):
return self.transit_model(pv) * self.baseline(pv)
def residuals(self, pv):
return self.fluxes - self.flux_model(pv)
def set_prior(self, pid: int, prior) -> None:
self.ps[pid].prior = prior
def lnprior(self, pv):
return self.ps.lnprior(pv)
def lnlikelihood(self, pv):
flux_m = self.flux_model(pv)
wn = 10**(atleast_2d(pv)[:, 7])
return lnlike_normal_v(self.fluxes, flux_m, wn)
def lnposterior(self, pv):
lnp = self.lnprior(pv) + self.lnlikelihood(pv)
return where(isfinite(lnp), lnp, -inf)
def __call__(self, pv):
return self.lnposterior(pv)
def optimize(self, niter=200, npop=50, population=None, label='Global optimisation', leave=False):
if self.de is None:
self.de = DiffEvol(self.lnposterior, clip(self.ps.bounds, -1, 1), npop, maximize=True, vectorize=True)
if population is None:
self.de._population[:, :] = self.create_pv_population(npop)
else:
self.de._population[:,:] = population
for _ in tqdm(self.de(niter), total=niter, desc=label, leave=leave):
pass
def sample(self, niter=500, thin=5, label='MCMC sampling', reset=True, leave=True):
if self.sampler is None:
self.sampler = EnsembleSampler(self.de.n_pop, self.de.n_par, self.lnposterior, vectorize=True)
pop0 = self.de.population
else:
pop0 = self.sampler.chain[:,-1,:].copy()
if reset:
self.sampler.reset()
for _ in tqdm(self.sampler.sample(pop0, iterations=niter, thin=thin), total=niter, desc=label, leave=False):
pass
def posterior_samples(self, burn: int=0, thin: int=1):
fc = self.sampler.chain[:, burn::thin, :].reshape([-1, self.de.n_par])
return pd.DataFrame(fc, columns=self.ps.names)
def plot_mcmc_chains(self, pid: int=0, alpha: float=0.1, thin: int=1, ax=None):
fig, ax = (None, ax) if ax is not None else subplots()
ax.plot(self.sampler.chain[:, ::thin, pid].T, 'k', alpha=alpha)
fig.tight_layout()
return fig
def plot_light_curve(self, model: str = 'de', figsize: tuple = (13, 4)):
fig, ax = subplots(figsize=figsize, constrained_layout=True)
cp = sb.color_palette()
if model == 'de':
pv = self.de.minimum_location
err = 10**pv[7]
elif model == 'mc':
fc = array(self.posterior_samples())
pv = permutation(fc)[:300]
err = 10**median(pv[:, 7], 0)
ax.errorbar(self.times, self.fluxes, err, fmt='.', c=cp[4], alpha=0.75)
if model == 'de':
ax.plot(self.times, self.flux_model(pv), c=cp[0])
if model == 'mc':
flux_pr = self.flux_model(fc[permutation(fc.shape[0])[:1000]])
flux_pc = array(percentile(flux_pr, [50, 0.15,99.85, 2.5,97.5, 16,84], 0))
[ax.fill_between(self.times, *flux_pc[i:i+2,:], alpha=0.2,facecolor=cp[0]) for i in range(1,6,2)]
ax.plot(self.times, flux_pc[0], c=cp[0])
setp(ax, xlim=self.times[[0,-1]], xlabel='Time', ylabel='Normalised flux')
return fig, axs
| notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
Priors
The priors are contained in a ParameterSet object from pytransit.param.parameter. ParameterSet is a utility class containing a function for calculating the joint prior, etc. We're using only two basic priors: a normal prior NP, for which $x \sim N(\mu,\sigma)$, a uniform prior UP, for which $x \sim U(a,b)$.
We could use an informative prior on the planet-star area ratio (squared radius ratio) that we base on the observed NIR transit depth (see below). This is justified since the limb darkening, which affects the observed transit depth, is sufficiently weak in NIR. We would either need to use significantly wider informative prior, or an uninformative one, if we didn't have NIR data.
Model
The model has two components: a multiplicative constant baseline, and a transit shape modelled using the quadratic Mandel & Agol transit model implemented in PyTransit. The sampling parameterisation is different than the parameterisation used by the transit model, so we need to map the parameters from the sampling space to the model space. Also, we're keeping things simple and assuming a circular orbit. Eccentric orbits will be considered in later tutorials.
Limb darkening
The limb darkening uses the parameterisation by Kipping (2013, MNRAS, 435(3), 2152–2160), where the quadratic limb darkening coefficients $u$ and $v$ are mapped from sampling parameters $q_1$ and $q_2$ as
$$
u = 2\sqrt{q_1}q_2,
$$
$$
v = \sqrt{q_1}(1-2q_2).
$$
This parameterisation allows us to use uniform priors from 0 to 1 to cover the whole physically sensible $(u,v)$-space.
Log likelihood
The log likelihood calculation is carried out by the ll_normal_es function that evaluates the normal log likelihood given a single error value.
Read in the data
First we need to read in the (mock) observation data stored in obs_data.fits. The data corresponds to a single transit observed simultaneously in eight passbands (filters). The photometry is saved in extension 1 as a binary table, and we want to read the mid-exposure times and flux values corresponding to different passbands. The time is stored in the time column, and fluxes are stored in the f_wn_* columns, where * is the filter name. | dfile = Path('data').joinpath('obs_data.fits')
data = pf.getdata(dfile, ext=1)
flux_keys = [n for n in data.names if 'f_wn' in n]
filter_names = [k.split('_')[-1] for k in flux_keys]
time = data['time'].astype('d')
fluxes = [data[k].astype('d') for k in flux_keys]
print ('Filter names: ' + ', '.join(filter_names)) | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
First, let's have a quick look at our data, and plot the blue- and redmost passbands. | with sb.axes_style('white'):
fig, axs = subplots(1,2, figsize=(13,5), sharey=True)
axs[0].plot(time,fluxes[0], drawstyle='steps-mid', c=cp[0])
axs[1].plot(time,fluxes[-1], drawstyle='steps-mid', c=cp[2])
setp(axs, xlim=time[[0,-1]])
fig.tight_layout() | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
Here we see what we'd expect to see. The stronger limb darkening in blue makes the bluemost transit round, while we can spot the end of ingress and the beginning of egress directly by eye from the redmost light curve. Also, the transit is deeper in u' than in Ks, which tells that the impact parameter b is smallish (the transit would be deeper in red than in blue for large b).
Parameter estimation
First, we create an instance of the log posterior function with the redmost light curve data.
Next, we run the DE optimiser for de_iter iterations to clump the parameter vector population close to the global posterior maximum, use the DE population to initialise the emcee sampler, and run the sampler for mc_iter iterations to obtain a posterior sample. | npop, de_iter, mc_reps, mc_iter, thin = 100, 200, 3, 500, 10
lpf = LPFunction('Ks', time, fluxes[-1])
lpf.optimize(de_iter, npop)
lpf.plot_light_curve();
for i in range(mc_reps):
lpf.sample(mc_iter, thin=thin, reset=True, label='MCMC sampling')
lpf.plot_light_curve('mc'); | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
Analysis: overview
The MCMC chains are now stored in lpf.sampler.chain. Let's first have a look into how the chain populations evolved to see if we have any problems with our setup, whether we have converged to sample the true posterior distribution, and, if so, what was the burn-in time. | with sb.axes_style('white'):
fig, axs = subplots(2,4, figsize=(13,5), sharex=True)
ls, lc = ['-','--','--'], ['k', '0.5', '0.5']
percs = [percentile(lpf.sampler.chain[:,:,i], [50,16,84], 0) for i in range(8)]
[axs.flat[i].plot(lpf.sampler.chain[:,:,i].T, 'k', alpha=0.01) for i in range(8)]
[[axs.flat[i].plot(percs[i][j], c=lc[j], ls=ls[j]) for j in range(3)] for i in range(8)]
setp(axs, yticks=[], xlim=[0,mc_iter//10])
fig.tight_layout() | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
Ok, everything looks good. The 16th, 50th and 84th percentiles of the parameter vector population are stable and don't show any significant long-term trends. Now we can flatten the individual chains into one long chain fc and calculate the median parameter vector. | fc = lpf.sampler.chain.reshape([-1,lpf.sampler.chain.shape[-1]])
mp = median(fc, 0) | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
Let's also plot the model and the data to see if this all makes sense. To do this, we calculate the conditional distribution of flux using the posterior samples (here, we're using a random subset of samples, although this isn't really necessary), and plot the distribution median and it's median-centred 68%, 95%, and 99.7% central posterior intervals (corresponding approximately to 1, 2, and 3$\sigma$ intervals if the distribution is normal). | flux_pr = lpf.flux_model(fc[permutation(fc.shape[0])[:1000]])
flux_pc = array(percentile(flux_pr, [50, 0.15,99.85, 2.5,97.5, 16,84], 0))
with sb.axes_style('white'):
zx1,zx2,zy1,zy2 = 0.958,0.98, 0.9892, 0.992
fig, ax = subplots(1,1, figsize=(13,4))
cp = sb.color_palette()
ax.errorbar(lpf.times, lpf.fluxes, 10**mp[7], fmt='.', c=cp[4], alpha=0.75)
[ax.fill_between(lpf.times,*flux_pc[i:i+2,:],alpha=0.2,facecolor=cp[0]) for i in range(1,6,2)]
ax.plot(lpf.times, flux_pc[0], c=cp[0])
setp(ax, xlim=lpf.times[[0,-1]], xlabel='Time', ylabel='Normalised flux')
fig.tight_layout()
az = fig.add_axes([0.075,0.18,0.20,0.46])
ax.add_patch(Rectangle((zx1,zy1),zx2-zx1,zy2-zy1,fill=False,edgecolor='k',lw=1,ls='dashed'))
[az.fill_between(lpf.times,*flux_pc[i:i+2,:],alpha=0.2,facecolor=cp[0]) for i in range(1,6,2)]
setp(az, xlim=(zx1,zx2), ylim=(zy1,zy2), yticks=[], xticks=[])
az.plot(lpf.times, flux_pc[0], c=cp[0]) | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
We could (should) also plot the residuals, but I've left them out from the plot for clarity. The plot looks fine, and we can continue to have a look at the parameter estimates.
Analysis
We start the analysis by making a Pandas data frame df, using the df.describe to gen an overview of the estimates, and plotting the posteriors for the most interesting parameters as violin plots. | pd.set_option('display.precision',4)
df = pd.DataFrame(data=fc.copy(), columns=lpf.ps.names)
df['k'] = sqrt(df.k2)
df['u'] = 2*sqrt(df.q1)*df.q2
df['v'] = sqrt(df.q1)*(1-2*df.q2)
df = df.drop('k2', axis=1)
df.describe()
with sb.axes_style('white'):
fig, axs = subplots(2,3, figsize=(13,5))
pars = 'tc rho b k u v'.split()
[sb.violinplot(y=df[p], inner='quartile', ax=axs.flat[i]) for i,p in enumerate(pars)]
[axs.flat[i].text(0.05,0.9, p, transform=axs.flat[i].transAxes) for i,p in enumerate(pars)]
setp(axs, xticks=[], ylabel='')
fig.tight_layout() | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
While we're at it, let's plot some correlation plots. The limb darkening coefficients are correlated, and we'd also expect to see a correlation between the impact parameter and radius ratio. | corner(df[['k', 'rho', 'b', 'q1', 'q2']]); | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
Calculating the parameter estimates for all the filters
Ok, now, let's do the parameter estimation for all the filters. We wouldn't be doing separate per-filter parameter estimation in real life, since it's much better use of the data to do a simultaneous joint modelling of all the data together (this is something that will be shown in a later tutorial). This will take some time... | chains = []
npop, de_iter, mc_iter, mc_burn, thin = 100, 200, 1500, 1000, 10
for flux, pb in zip(fluxes, filter_names):
lpf = LPFunction(pb, time, flux)
lpf.optimize(de_iter, npop)
lpf.sample(mc_burn, thin=thin)
lpf.sample(mc_iter, thin=thin, reset=True)
chains.append(lpf.sampler.chain.reshape([-1,lpf.sampler.chain.shape[-1]]))
chains = array(chains)
ids = [list(repeat(filter_names,chains.shape[1])),8*list(range(chains.shape[1]))]
dft = pd.DataFrame(data = concatenate([chains[i,:,:] for i in range(chains.shape[0])]),
index=ids, columns=lpf.ps.names)
dft['es'] = 10**df.loge * 1e6
dft['k'] = sqrt(dft.k2)
dft['u'] = 2*sqrt(dft.q1)*dft.q2
dft['v'] = sqrt(dft.q1)*(1-2*dft.q2)
dft = dft.drop('k2', axis=1) | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
The dataframe creation can probably be done in a nicer way, but we don't need to bother with that. The results are now in a multi-index dataframe, from where we can easily get the per-filter point estimates. | dft.loc['u'].describe()
with sb.axes_style('white'):
fig, axs = subplots(2,3, figsize=(13,6), sharex=True)
pars = 'tc rho u b k v'.split()
for i,p in enumerate(pars):
sb.violinplot(data=dft[p].unstack().T, inner='quartile', scale='width',
ax=axs.flat[i], order=filter_names)
axs.flat[i].text(0.95,0.9, p, transform=axs.flat[i].transAxes, ha='right')
fig.tight_layout() | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
As it is, the posterior distributions for different filters agree well with each other. However, the uncertainty in the radius ratio estimate decreases towards redder wavelengths. This is due to the reduced limb darkening, which allows us to estimate the true geometric radius ratio more accurately.
Finally, let's print the parameter estimates for each filter. We'll print the posterior medians with uncertainty estimates based on the central 68% posterior intervals. This matches the posterior mean and its 1-$\sigma$ uncertainty if the posterior is normal (which isn't really the case for many of the posteriors here). In real life, you'd want to report separate + and - uncertainties for the asymmetric posteriors, etc. | def ms(df,p,f):
p = array(percentile(df[p][f], [50,16,84]))
return p[0], abs(p[1:]-p[0]).mean()
def create_row(df,f,pars):
return ('<tr><td>{:}</td>'.format(f)+
''.join(['<td>{:5.4f} ± {:5.4f}</td>'.format(*ms(dft,p,f)) for p in pars])+
'</tr>')
def create_table(df):
pars = 'tc rho b k u v'.split()
return ('<table style="width:100%"><th>Filter</th>'+
''.join(['<th>{:}</th>'.format(p) for p in pars])+
''.join([create_row(df,f,pars) for f in filter_names])+
'</table>')
display(HTML(create_table(dft))) | notebooks/01_broadband_parameter_estimation.ipynb | hpparvi/PyTransit | gpl-2.0 |
2-D Gaussian Shells
To demonstrate more of the functionality afforded by our different sampling/bounding options we will demonstrate how these various features work using a set of 2-D Gaussian shells with a uniform prior over $[-6, 6]$. | # defining constants
r = 2. # radius
w = 0.1 # width
c1 = np.array([-3.5, 0.]) # center of shell 1
c2 = np.array([3.5, 0.]) # center of shell 2
const = math.log(1. / math.sqrt(2. * math.pi * w**2)) # normalization constant
# log-likelihood of a single shell
def logcirc(theta, c):
d = np.sqrt(np.sum((theta - c)**2, axis=-1)) # |theta - c|
return const - (d - r)**2 / (2. * w**2)
# log-likelihood of two shells
def loglike(theta):
return np.logaddexp(logcirc(theta, c1), logcirc(theta, c2))
# our prior transform
def prior_transform(x):
return 12. * x - 6.
# compute likelihood surface over a 2-D grid
xx, yy = np.meshgrid(np.linspace(-6., 6., 200), np.linspace(-6., 6., 200))
L = np.exp(loglike(np.dstack((xx, yy))))
# plot result
fig = plt.figure(figsize=(6,5))
plt.scatter(xx, yy, c=L, s=0.5)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.colorbar(label=r'$\mathcal{L}$'); | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
Default Run
Let's first run with just the default set of dynesty options. | # run with all defaults
sampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim=2, rstate=rstate)
sampler.run_nested()
res = sampler.results
from dynesty import plotting as dyplot
dyplot.cornerplot(sampler.results, span=([-6, 6], [-6, 6]), fig=plt.subplots(2, 2, figsize=(10, 10))); | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
Bounding Options
Let's test out the bounding options available in dynesty (with uniform sampling) on these 2-D shells. To illustrate their baseline effectiveness, we will also disable the initial delay before our first update. | # bounding methods
bounds = ['none', 'single', 'multi', 'balls', 'cubes']
# run over each method and collect our results
bounds_res = []
for b in bounds:
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=2,
bound=b, sample='unif', nlive=500,
first_update={'min_ncall': 0.,
'min_eff': 100.}, rstate=rstate)
sys.stderr.flush()
sys.stderr.write('{}:\n'.format(b))
sys.stderr.flush()
t0 = time.time()
sampler.run_nested(dlogz=0.05)
t1 = time.time()
res = sampler.results
dtime = t1 - t0
sys.stderr.flush()
sys.stderr.write('\ntime: {0}s\n\n'.format(dtime))
bounds_res.append(sampler.results) | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
We can see the amount of overhead associated with 'balls' and 'cubes' is non-trivial in this case. This mainly comes from sampling from our bounding distributions, since accepting or rejecting a point requires counting all neighbors within some radius $r$, leading to frequent nearest-neighbor searches.
Runtime aside, we see that each method runs for a similar number of iterations and give similar logz values (with comparable errors). They thus appear to be unbiased both with respect to each other and with respect to the analytic solution ($\ln \mathcal{Z} = -1.75$).
To get a sense of what each of our bounds looks like, we can use some of dynesty's built-in plotting functionality. First, let's take a look at the case where we had no bounds ('none'). | from dynesty import plotting as dyplot
# initialize figure
fig, axes = plt.subplots(1, 1, figsize=(6, 6))
# plot proposals in corner format for 'none'
fg, ax = dyplot.cornerbound(bounds_res[0], it=2000, prior_transform=prior_transform,
show_live=True, fig=(fig, axes))
ax[0, 0].set_title('No Bound', fontsize=26)
ax[0, 0].set_xlim([-6.5, 6.5])
ax[0, 0].set_ylim([-6.5, 6.5]); | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
Now let's examine the single and multi-ellipsoidal cases. | # initialize figure
fig, axes = plt.subplots(1, 3, figsize=(18, 6))
axes = axes.reshape((1, 3))
[a.set_frame_on(False) for a in axes[:, 1]]
[a.set_xticks([]) for a in axes[:, 1]]
[a.set_yticks([]) for a in axes[:, 1]]
# plot proposals in corner format for 'single'
fg, ax = dyplot.cornerbound(bounds_res[1], it=2000, prior_transform=prior_transform,
show_live=True, fig=(fig, axes[:, 0]))
ax[0, 0].set_title('Single', fontsize=26)
ax[0, 0].set_xlim([-6.5, 6.5])
ax[0, 0].set_ylim([-6.5, 6.5])
# plot proposals in corner format for 'multi'
fg, ax = dyplot.cornerbound(bounds_res[2], it=2000, prior_transform=prior_transform,
show_live=True, fig=(fig, axes[:, 2]))
ax[0, 0].set_title('Multi', fontsize=26)
ax[0, 0].set_xlim([-6.5, 6.5])
ax[0, 0].set_ylim([-6.5, 6.5]); | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
Finally, let's take a look at our overlapping set of balls and cubes. | # initialize figure
fig, axes = plt.subplots(1, 3, figsize=(18, 6))
axes = axes.reshape((1, 3))
[a.set_frame_on(False) for a in axes[:, 1]]
[a.set_xticks([]) for a in axes[:, 1]]
[a.set_yticks([]) for a in axes[:, 1]]
# plot proposals in corner format for 'balls'
fg, ax = dyplot.cornerbound(bounds_res[3], it=1500, prior_transform=prior_transform,
show_live=True, fig=(fig, axes[:, 0]))
ax[0, 0].set_title('Balls', fontsize=26)
ax[0, 0].set_xlim([-6.5, 6.5])
ax[0, 0].set_ylim([-6.5, 6.5])
# plot proposals in corner format for 'cubes'
fg, ax = dyplot.cornerbound(bounds_res[4], it=1500, prior_transform=prior_transform,
show_live=True, fig=(fig, axes[:, 2]))
ax[0, 0].set_title('Cubes', fontsize=26)
ax[0, 0].set_xlim([-6.5, 6.5])
ax[0, 0].set_ylim([-6.5, 6.5]); | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
Bounding Objects
By default, the nested samplers in dynesty save all bounding distributions used throughout the course of a run, which can be accessed within the results dictionary. More information on these distributions can be found in bounding.py. | # the proposals associated with our 'multi' bounds
bounds_res[2].bound | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
Each bounding object has a host of additional functionality that the user can experiment with. For instance, the volume contained by the union of ellipsoids within MultiEllipsoid can be estimated using Monte Carlo integration (but otherwise are not computed by default). These volume estimates, combined with what fraction of our samples overlap with the unit cube (since our bounding distributions can exceed our prior bounds), can give us an idea of how effectively our multi-ellipsoid bounds are shrinking over time compared with the single-ellipsoid case. | # compute effective 'single' volumes
single_logvols = [0.] # unit cube
for bound in bounds_res[1].bound[1:]:
logvol = bound.logvol # volume
funit = bound.unitcube_overlap(rstate=rstate) # fractional overlap with unit cube
single_logvols.append(logvol +np.log(funit))
single_logvols = np.array(single_logvols)
# compute effective 'multi' volumes
multi_logvols = [0.] # unit cube
for bound in bounds_res[2].bound[1:]: # skip unit cube
logvol, funit = bound.monte_carlo_logvol(rstate=rstate, return_overlap=True)
multi_logvols.append(logvol +np.log( funit)) # numerical estimate via Monte Carlo methods
multi_logvols = np.array(multi_logvols)
# plot results as a function of ln(volume)
plt.figure(figsize=(12,6))
plt.xlabel(r'$-\ln X_i$')
plt.ylabel(r'$\ln V_i$')
# 'single'
res = bounds_res[1]
x = -res.logvol # ln(prior volume)
it = res.bound_iter # proposal idx at given iteration
y = single_logvols[it] # corresponding ln(bounding volume)
plt.plot(x, y, lw=3, label='single')
# 'multi'
res = bounds_res[2]
x, it = -res.logvol, res.bound_iter
y = multi_logvols[it]
plt.plot(x, y, lw=3, label='multi')
plt.legend(loc='best', fontsize=24); | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
We see that in the beginning, only a single ellipsoid is used. After some bounding updates have been made, there is enough of an incentive to split the proposal into several ellipsoids. Although the initial ellipsoid decompositions can be somewhat unstable (i.e. bootstrapping can give relatively large volume expansion factors), over time this process leads to a significant decrease in effective overall volume.
Sampling Options
Let's test out the sampling options available in dynesty (with 'multi' bounding) on our 2-D shells defined above. | # bounding methods
sampling = ['unif', 'rwalk', 'slice', 'rslice', 'hslice']
# run over each method and collect our results
sampling_res = []
for s in sampling:
sampler = dynesty.NestedSampler(loglike, prior_transform, ndim=2,
bound='multi', sample=s, nlive=1000,
rstate=rstate)
sys.stderr.flush()
sys.stderr.write('{}:\n'.format(s))
sys.stderr.flush()
t0 = time.time()
sampler.run_nested(dlogz=0.05)
t1 = time.time()
res = sampler.results
dtime = t1 - t0
sys.stderr.flush()
sys.stderr.write('\ntime: {0}s\n\n'.format(dtime))
sampling_res.append(sampler.results) | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
As expected, uniform sampling in 2-D is substantially more efficient than other more complex alternatives (especially 'hslice', which is computing numerical gradients!). Regardless of runtime, however, we see that each method runs for a similar number of iterations and gives similar logz values (with comparable errors). They thus appear to be unbiased both with respect to each other and with respect to the analytic solution ($\ln\mathcal{Z} = −1.75$).
Bootstrapping
One of the largest overheads associated with nested sampling is the time needed to propose new bounding distributions. To avoid bounding distributions that fail to properly encompass the remaining likelihood, dynesty automatically expands the volume of all bounding distributions by an enlargement factor (enlarge). By default, this factor is set to a constant value of 1.25. However, it can also be determined in real time using bootstrapping (over the set of live points) following the scheme outlined in Buchner (2014).
Bootstrapping these expansion factors can help to ensure accurate evidence estimation when the proposals rely heavily on the size of an object rather than its overall shape, such as when proposing new points uniformly within their boundaries. In theory, it also helps to prevent mode "death": if a secondary mode occasionally disappears when bootstrapping, the existing bounds would be expanded to encompass it. In practice, however, most modes are widely separated, leading to enormous expansion factors whenever any possible instance of mode death may occur.
Bootstrapping thus imposes a de facto floor on the number of acceptable live points to avoid mode death for any given problem, which can often be quite large for many problems. While these numbers are often justified, they can drastically reduce the raw sampling efficiency until such a target threshold of live points is reached.
We showcase this behavior below by illustrating the performance of our NestedSampler on several N-D Gaussian shells with and without bootstrapping. | # setup for running tests over gaussian shells in arbitrary dimensions
def run(ndim, bootstrap, bound, method, nlive):
"""Convenience function for running in any dimension."""
c1 = np.zeros(ndim)
c1[0] = -3.5
c2 = np.zeros(ndim)
c2[0] = 3.5
f = lambda theta: np.logaddexp(logcirc(theta, c1), logcirc(theta, c2))
sampler = dynesty.NestedSampler(f, prior_transform, ndim,
bound=bound, sample=method, nlive=nlive,
bootstrap=bootstrap,
first_update={'min_ncall': 0.,
'min_eff': 100.},
rstate=rstate)
sampler.run_nested(dlogz=0.5)
return sampler.results
# analytic ln(evidence) values
ndims = [2, 5, 10]
analytic_logz = {2: -1.75,
5: -5.67,
10: -14.59}
# results with bootstrapping
results = []
times = []
for ndim in ndims:
t0 = time.time()
sys.stderr.flush()
sys.stderr.write('{} dimensions:\n'.format(ndim))
sys.stderr.flush()
res = run(ndim, 20, 'multi', 'unif', 2000)
sys.stderr.flush()
curdt = time.time() - t0
times.append(curdt)
sys.stderr.write('\ntime: {0}s\n\n'.format(curdt))
results.append(res)
# results without bootstrapping
results2 = []
times2 = []
for ndim in ndims:
t0 = time.time()
sys.stderr.flush()
sys.stderr.write('{} dimensions:\n'.format(ndim))
sys.stderr.flush()
res = run(ndim, 0, 'multi', 'unif', 2000)
sys.stderr.flush()
curdt = time.time() - t0
times2.append(curdt)
sys.stderr.write('\ntime: {0}s\n\n'.format(curdt))
results2.append(res)
print('With bootstrapping:')
print("D analytic logz logzerr nlike eff(%) time")
for ndim, curt, res in zip(ndims, times, results):
print("{:2d} {:6.2f} {:6.2f} {:4.2f} {:6d} {:5.2f} {:6.2f}"
.format(ndim, analytic_logz[ndim], res.logz[-1], res.logzerr[-1],
sum(res.ncall), res.eff, curt))
print('\n')
print('Without bootstrapping:')
print("D analytic logz logzerr nlike eff(%) time")
for ndim, curt, res in zip(ndims, times2, results2):
print("{:2d} {:6.2f} {:6.2f} {:4.2f} {:6d} {:5.2f} {:6.2f}"
.format(ndim, analytic_logz[ndim], res.logz[-1], res.logzerr[-1],
sum(res.ncall), res.eff, curt)) | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
While our results are comparable between both cases, in higher dimensions multi-ellipsoid bounding distributions can sometimes be over-constrained, leading to biased results. Other sampling methods mitigate this problem by sampling conditioned on the ellipsoid axes, so the proposals depend only on the ellipsoid shapes, not their sizes. 'rslice' is demonstrated below. | # adding on slice sampling
results3 = []
times3 = []
for ndim in ndims:
t0 = time.time()
sys.stderr.flush()
sys.stderr.write('{} dimensions:\n'.format(ndim))
sys.stderr.flush()
res = run(ndim, 0, 'multi', 'rslice', 2000)
sys.stderr.flush()
curdt = time.time() - t0
times3.append(curdt)
sys.stderr.write('\ntime: {0}s\n\n'.format(curdt))
results3.append(res)
print('Random Slice sampling:')
print("D analytic logz logzerr nlike eff(%) time")
for ndim, curt, res in zip(ndims, times3, results3):
print("{:2d} {:6.2f} {:6.2f} {:4.2f} {:8d} {:5.2f} {:6.2f}"
.format(ndim, analytic_logz[ndim], res.logz[-1], res.logzerr[-1],
sum(res.ncall), res.eff, curt)) | demos/Examples -- Gaussian Shells.ipynb | joshspeagle/dynesty | mit |
Refining the dataset
In this section, we add some additional time-based information to the DataFrame to accomplish our tasks.
Adding weekdays
First, we add the information about the weekdays based on the weekday_name information of the timestamp_local column. Because we want to preserve the order of the weekdays, we convert the weekday entries to a Categorical data type, too. The order of the weekdays is taken from the calendar module.
Note: We can do this so easily because we have such a large amount of data where every weekday occurs. If we can't be sure to have a continuous sequence of weekdays, we have to use something like the pd.Grouper method to fill in missing weekdays. | import calendar
git_authors['weekday'] = git_authors["timestamp_local"].dt.weekday_name
git_authors['weekday'] = pd.Categorical(
git_authors['weekday'],
categories=calendar.day_name,
ordered=True)
git_authors.head() | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
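As a side note on the pd.Grouper alternative mentioned above, a hypothetical sketch for counting commits per calendar day could look like this; grouping by a daily Grouper keeps empty days in the result, so missing weekdays would not silently drop out:

# hypothetical sketch: commits per calendar day, including days with zero commits
commits_per_day = git_authors.groupby(
    pd.Grouper(key='timestamp_local', freq='D')).size()
commits_per_day.head()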
Adding working hours
For the working hour analysis, we extract the hour information from the timestamp_local column.
Note: Again, we assume that every hour is in the dataset. | git_authors['hour'] = git_authors['timestamp_local'].dt.hour
git_authors.head() | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
Analyzing the data
With the prepared git_authors DataFrame, we are now able to deliver insights into the past years of development.
Developers' timezones
First, we want to know where the developers roughly live. For this, we plot the values of the timezone column as a pie chart. | %matplotlib inline
timezones = git_authors['timezone'].value_counts()
timezones.plot(
kind='pie',
figsize=(7,7),
title="Developers' timezones",
label="") | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
Result
The majority of the developers' commits come from the time zones +0100, +0200 and -0700. With most commits coming probably from the West Coast of the USA, this might just be an indicator that Linus Torvalds lives there ;-) . But there are also many commits from developers within Western Europe.
Weekdays with the most commits
Next, we want to know on which days the developers are working during the week. We count by the weekdays but avoid sorting the results to keep the order along with our categories. We plot the result as a standard bar chart. | ax = git_authors['weekday'].\
value_counts(sort=False).\
plot(
kind='bar',
title="Commits per weekday")
ax.set_xlabel('weekday')
ax.set_ylabel('# commits') | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
Result
Most of the commits occur during normal working days with a slight peak on Wednesday. There are relatively few commits happening on weekends.
Working behavior of the main contributor
It would be very interesting and easy to see when Linus Torvalds (the main contributor to Linux) is working. But we won't do that, because the yet unwritten codex of Software Analytics tells us that it's not OK to analyze a single person's behavior – especially when such an analysis is based on an uncleaned dataset like the one we have here.
Usual working hours
To find out about the working habits of the contributors, we group the commits by hour and count the entries (in this case we choose author) to see if there are any irregularities. Again, we plot the results with a standard bar chart. | ax = git_authors\
.groupby(['hour'])['author']\
.count().plot(kind='bar')
ax.set_title("Distribution of working hours")
ax.yaxis.set_label_text("# commits")
ax.xaxis.set_label_text("hour") | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
Result
The distribution of the working hours is interesting:
- First, we can clearly see that there is a dent around 12:00. So this might be an indicator that developers have lunch at regular times (which is a good thing IMHO).
- Another not so typical result is the slight rise after 20:00. This could be interpreted as the development activity of free-time developers that code for Linux after their day-time job.
- Nevertheless, most of the developers seem to get a decent amount of sleep indicated by low commit activity from 1:00 to 7:00.
Signs of overtime
At last, we have a look at possible overtime periods by creating a simple model. We first group all commits on a weekly basis per author. As grouping function, we choose max() to get the latest hour at which each author committed in each week. | latest_hour_per_week = git_authors.groupby(
[
pd.Grouper( key='timestamp_local', freq='1w'),
'author'
]
)[['hour']].max()
latest_hour_per_week.head() | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
Next, we want to know if there were any stressful time periods that forced the developers to work overtime over a longer period of time. We calculate the mean of all late stays of all authors for each week. | mean_latest_hours_per_week = \
latest_hour_per_week \
.reset_index().groupby('timestamp_local').mean()
mean_latest_hours_per_week.head() | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
We also create a trend line that shows how the contributors have been working over the span of the past years. We use the polyfit function from numpy for this, which needs a numeric index to calculate the polynomial coefficients. We fit a third-degree polynomial to the hours of the mean_latest_hours_per_week DataFrame to get the coefficients. For visualization, we build a polynomial function from these coefficients with poly1d and evaluate it for all weeks encoded in numeric_index. We store the result in the mean_latest_hours_per_week DataFrame. | import numpy as np
numeric_index = range(0, len(mean_latest_hours_per_week))
coefficients = np.polyfit(numeric_index, mean_latest_hours_per_week.hour, 3)
polynomial = np.poly1d(coefficients)
ys = polynomial(numeric_index)
mean_latest_hours_per_week['trend'] = ys
mean_latest_hours_per_week.head() | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
At last, we plot the hour results of the mean_latest_hours_per_week DataFrame as well as the trend data in one line plot. | ax = mean_latest_hours_per_week[['hour', 'trend']].plot(
figsize=(10, 6),
color=['grey','blue'],
title="Late hours per weeks")
ax.set_xlabel("time")
ax.set_ylabel("hour") | notebooks/Developers' Habits (Linux Edition).ipynb | feststelltaste/software-analytics | gpl-3.0 |
FLE
In this script the CONDENSATION is done for rightward and leftward motion of a dot stimulus, at different levels of noise, and also for the flashing stimuli needed for the simulation of flash-initiated and flash-terminated FLEs.
The aim is to generate (Berry et al. 99)'s figure 2: shifting RF position in the direction of motion.
Initialization of notebook | %%writefile experiment_SI_controls.py
"""
A bunch of control runs
"""
import MotionParticlesFLE as mp
gen_dot = mp.generate_dot
import numpy as np
import os
from default_param import *
image = {}
experiment = 'SI'
N_scan = 5
base = 10.
#mp.N_trials = 4
for stimulus_tag, im_arg in zip(stim_labels, stim_args):
#for stimulus_tag, im_arg in zip(stim_labels[1], stim_args[1]):
#for D_x, D_V, label in zip([mp.D_x, PBP_D_x], [mp.D_V, PBP_D_V], ['MBP', 'PBP']):
for D_x, D_V, label in zip([mp.D_x], [mp.D_V], ['MBP']):
im_arg.update(D_V=D_V, D_x=D_x)
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
D_x=im_arg['D_x']*np.logspace(-2, 2, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
D_V=im_arg['D_V']*np.logspace(-2, 2, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
sigma_motion=mp.sigma_motion*np.logspace(-1., 1., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
K_motion=mp.K_motion*np.logspace(-1., 1., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
dot_size=im_arg['dot_size']*np.logspace(-1., 1., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
sigma_I=mp.sigma_I*np.logspace(-1, 1, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
im_noise=mp.im_noise*np.logspace(-1, 1, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
sigma_noise=mp.sigma_noise*np.logspace(-1, 1, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
p_epsilon=mp.p_epsilon*np.logspace(-1, 1, N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
v_init=mp.v_init*np.logspace(-1., 1., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
v_prior=np.logspace(-.3, 5., N_scan, base=base))
_ = mp.figure_image_variable(
os.path.join(mp.figpath, experiment + '-' + stimulus_tag + '-' + label),
N_X, N_Y, N_frame, gen_dot, order=None, do_figure=do_figure, do_video=do_video, N_quant_X=N_quant_X, N_quant_Y=N_quant_Y,
fixed_args=im_arg,
resample=np.linspace(0.1, 1., N_scan, endpoint=True))
%run experiment_SI_controls.py | notebooks/SI_controls.ipynb | laurentperrinet/Khoei_2017_PLoSCB | mit |
TODO : show results with a widget | !git commit -m' SI controls ' ../notebooks/SI_controls* ../scripts/experiment_SI_controls* | notebooks/SI_controls.ipynb | laurentperrinet/Khoei_2017_PLoSCB | mit |
Multi-layered perceptron (feed forward network)
Each hidden layer is formed by neurons called perceptrons
A perceptron is a binary linear classifier
inputs: a flat array $x_i$
one output per neuron j: $y_j$
a transformation of input into output (activation function):
linear separator
sigmoid function
$z_j= \sum_i {w_{ij} x_i} + b_j$
$y_j = f(z_j) = \frac{1}{1 + e^{-z_j}}$ | from IPython.display import Image
Image(url= "../img/perceptron.png", width=400, height=400) | day3/DL1_FFN.ipynb | grokkaine/biopycourse | cc0-1.0 |
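To make the formulas above concrete, here is a minimal numpy sketch of a single sigmoid perceptron (the input, weight and bias values are made-up toy numbers):

import numpy as np

x = np.array([0.5, -1.0, 2.0])   # inputs x_i
w = np.array([0.1, 0.4, -0.3])   # weights w_ij for one neuron j
b = 0.2                          # bias b_j

z = np.dot(w, x) + b             # z_j = sum_i w_ij * x_i + b_j
y = 1.0 / (1.0 + np.exp(-z))     # y_j = sigmoid(z_j)
print(z, y)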
input layer: sequential (flattened) image
hidden layers: perceptrons
output layer: softmax | from IPython.display import Image
Image(url= "../img/ffn.png", width=400, height=400)
from tensorflow.keras import models
from tensorflow.keras import layers
# defining the NN structure
network = models.Sequential()
network.add(layers.Dense(512, activation='sigmoid', input_shape=(28 * 28,)))
network.add(layers.Dense(512, activation='sigmoid', input_shape=(512,)))
network.add(layers.Dense(10, activation='softmax'))
network.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
network.summary() | day3/DL1_FFN.ipynb | grokkaine/biopycourse | cc0-1.0 |
Learning process
NNs are supervised learning structures!
- forward propagation: all training data is fed to the network and y is predicted
- estimate the loss: difference between prediction and label
- backpropagation: the loss information is propagated backwards layer by layer, and the neuron weights are adjusted
- global optimization: the parameters (weights and biases) must be adjusted in such a way that the loss function presented above is minimized. | from IPython.display import Image
Image(url= "../img/NN_learning.png", width=400, height=400) | day3/DL1_FFN.ipynb | grokkaine/biopycourse | cc0-1.0 |
Gradient descent (main optimization technique)
The weights are adjusted in small increments with the help of the derivative (or gradient) of the loss function, which tells us in which direction to “descend” towards the minimum. Most optimizers are based on gradient descent, an algorithm that is very efficient on GPUs today but can get stuck in local optima.
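As an illustration of the idea (not of what Keras does internally), here is a self-contained sketch of gradient descent on a toy one-parameter loss L(w) = (w - 3)^2:

w = 0.0
learning_rate = 0.1
for step in range(50):
    grad = 2 * (w - 3)         # dL/dw
    w -= learning_rate * grad  # move against the gradient
print(w)                       # converges towards the minimum at w = 3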
Epochs and batches. The optimization is generally done in batches of data over successive passes (epochs) through the whole dataset: "epochs" are complete runs through the dataset, and batches are used because the whole dataset is hard to pass through the network at once.
- 469: number of batches per epoch
- 128 * 469 ~= 60000 images (number of training samples) | network.fit(train_images, train_labels, epochs=5, batch_size=128)
test_loss, test_acc = network.evaluate(test_images, test_labels)
print(test_loss, test_acc) | day3/DL1_FFN.ipynb | grokkaine/biopycourse | cc0-1.0 |
Observations:
- slightly smaller accuracy on the test data compared to training data (model overfits on the training data)
Questions:
- Why do we need several epochs?
- What is the main computer limitation when it comes to batches?
- How many epochs are needed, and what is the danger associated with using too many or too few?
Reading:
- https://medium.com/onfido-tech/machine-learning-101-be2e0a86c96a
Run a prediction: | import matplotlib.pyplot as plt
import numpy as np
prediction=network.predict(test_images[0:9])
y_true_cls = np.argmax(test_labels[0:9], axis=1)
y_pred_cls = np.argmax(prediction, axis=1)
fig, axes = plt.subplots(3, 3, figsize=(8,8))
fig.subplots_adjust(hspace=0.5, wspace=0.5)
for i, ax in enumerate(axes.flat):
ax.imshow(test_images[i].reshape(28,28), cmap = 'BuGn')
xlabel = "True: {0}, Pred: {1}".format(y_true_cls[i], y_pred_cls[i])
ax.set_xlabel(xlabel)
ax.set_xticks([])
ax.set_yticks([])
plt.show() | day3/DL1_FFN.ipynb | grokkaine/biopycourse | cc0-1.0 |
Historical essentials
Deep learning, from an algorithmic perspective, is the application of advanced multi-layered filters to learn hidden features in data representation.
Many of the methods used today in DL, such as most neural network types (and not only those), went through a roughly 20-year-long pause because the computing machines available at the time were too slow to produce the desired results.
Several things precipitated their return in 2010:
- Graphical processors. A GPU has thousands of cores specialized in performing linear operations in parallel. This provided the infrastructure on which "deep" algorithms perform best.
- The maturity of cloud computing. This enables third parties to use DL methodologies at scale, and with small operating costs.
- Big data. Most AI models need to be trained on a lot of data, so AI needs a sufficient level of data availability. The massive accumulation of data (not only in biology) is a very recent phenomenon.
Book recommendation:
- http://www.deeplearningbook.org/ (free to read)
Text classification
The purpose is to categorize films as good or bad based on their reviews. The data is vectorized into binary form.
layer activation
What happens during layer activation? Basically, a set of tensor operations is performed. A simplistic way to understand this is as operations done on arrays of matrices, where the atomic operation would be:
output = relu(dot(W, input) + b)
, where the weight matrix W has shape (input_dim (10000), 16) and b is a bias term. In linear algebra terms, this projects the input data onto a 16-dimensional space. More dimensions mean more features, more potential for confusion and more computing cost, BUT also more complex representations.
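A minimal numpy sketch of this single-layer transformation (with random toy values, not the trained Keras weights):

import numpy as np

batch = np.random.rand(2, 10000)        # two vectorized reviews
W = np.random.rand(10000, 16) * 0.01    # weight matrix of shape (input_dim, 16)
b = np.zeros(16)                        # bias vector

output = np.maximum(0, np.dot(batch, W) + b)  # relu(dot(input, W) + b)
print(output.shape)                           # (2, 16): projected onto 16 dimensions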
Task:
- Perform sentiment analysis using the code below!
- Plot the accuracy vs loss in both the training and validation data, on the history.history dictionary. Use more epochs. What do you notice? How many epochs do you think you need? What if you monitor for 100000 epochs?
- We were using 2 hidden layers. Try to use 1 or 3 hidden layers and see how it affects validation and test accuracy.
- Adjust the learning rate.
- Try to use layers with more hidden units or less hidden units: 32 units, 64 units...
- Try to use the mse loss function instead of binary_crossentropy.
- Try to use the tanh activation (an activation that was popular in the early days of neural networks) instead of relu. | import numpy as np
from keras.datasets import imdb
from keras import models
from keras import layers
from keras import optimizers
from keras import losses
from keras import metrics
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
print(max([max(sequence) for sequence in train_data]))
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')
model = models.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizers.RMSprop(lr=0.001),
loss=losses.binary_crossentropy,
metrics=[metrics.binary_accuracy])
x_val = x_train[:10000]
partial_x_train = x_train[10000:]
y_val = y_train[:10000]
partial_y_train = y_train[10000:]
history = model.fit(partial_x_train,
partial_y_train,
epochs=5,
batch_size=512,
validation_data=(x_val, y_val))
p = model.predict(x_test)
print(history.history) | day3/DL1_FFN.ipynb | grokkaine/biopycourse | cc0-1.0 |
Let us set up the problem, discretization and solver details. The number of divisions along each dimension is given as a power-of-two function of the number of levels. In principle this is not required, but having it makes the inter-grid transfers easy.
The coarsest problem is going to have a 2-by-2 grid. | #input
max_cycles = 30
nlevels = 6
NX = 2*2**(nlevels-1)
NY = 2*2**(nlevels-1)
tol = 1e-15
#the grid has one layer of ghost cells
uann=np.zeros([NX+2,NY+2])#analytical solution
u =np.zeros([NX+2,NY+2])#approximation
f =np.zeros([NX+2,NY+2])#RHS
#calculate the RHS and exact solution
DX=1.0/NX
DY=1.0/NY
xc=np.linspace(0.5*DX,1-0.5*DX,NX)
yc=np.linspace(0.5*DY,1-0.5*DY,NY)
XX,YY=np.meshgrid(xc,yc,indexing='ij')
uann[1:NX+1,1:NY+1]=Uann(XX,YY)
f[1:NX+1,1:NY+1] =source(XX,YY) | .ipynb_checkpoints/Making_a_Preconditioner-vectorized-checkpoint.ipynb | AbhilashReddyM/GeometricMultigrid | mit |
Let's look at what happens with and without the preconditioner. | A = Laplace(NX,NY)
#Exact solution and RHS
uex=np.random.rand(NX*NY,1)
b=A*uex
#Multigrid Preconditioner
M=MGVP(NX,NY,nlevels)
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(bicgstab,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error))) | .ipynb_checkpoints/Making_a_Preconditioner-vectorized-checkpoint.ipynb | AbhilashReddyM/GeometricMultigrid | mit |
Without the preconditioner, ~150 iterations were needed, whereas with the V-cycle preconditioner the solution was obtained in far fewer iterations. Let's try with CG: | u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500)
print('Without preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error)))
u,info,iters=solve_sparse(cg,A,b,tol=1e-10,maxiter=500,M=M)
print('With preconditioning. status:',info,', Iters: ',iters)
error=uex-u
print('error :',np.max(np.abs(error))) | .ipynb_checkpoints/Making_a_Preconditioner-vectorized-checkpoint.ipynb | AbhilashReddyM/GeometricMultigrid | mit |
Load raw data | df = pd.read_stata('/gh/data/hcmst/1.dta')
# df2 = pd.read_stata('/gh/data/hcmst/2.dta')
# df3 = pd.read_stata('/gh/data/hcmst/3.dta')
# df = df1.merge(df2, on='caseid_new')
# df = df.merge(df3, on='caseid_new')
df.head(2) | hcmst/process_raw_data.ipynb | srcole/qwm | mit |
Select and rename columns | rename_cols_dict = {'ppage': 'age', 'ppeducat': 'education',
'ppethm': 'race', 'ppgender': 'sex',
'pphouseholdsize': 'household_size', 'pphouse': 'house_type',
'hhinc': 'income', 'ppmarit': 'marital_status',
'ppmsacat': 'in_metro', 'ppreg4': 'usa_region',
'pprent': 'house_payment', 'children_in_hh': 'N_child',
'ppwork': 'work', 'ppnet': 'has_internet',
'papglb_friend': 'has_gay_friendsfam', 'pppartyid3': 'politics',
'papreligion': 'religion', 'qflag': 'in_relationship',
'q9': 'partner_age', 'duration': 'N_minutes_survey',
'glbstatus': 'is_lgb', 's1': 'is_married',
'partner_race': 'partner_race', 'q7b': 'partner_religion',
'q10': 'partner_education', 'US_raised': 'USA_raised',
'q17a': 'N_marriages', 'q17b': 'N_marriages2', 'coresident': 'cohabit',
'q21a': 'age_first_met', 'q21b': 'age_relationship_begin',
'q21d': 'age_married', 'q23': 'relative_income',
'q25': 'same_high_school', 'q26': 'same_college',
'q27': 'same_hometown', 'age_difference': 'age_difference',
'q34':'relationship_quality',
'q24_met_online': 'met_online', 'met_through_friends': 'met_friends',
'met_through_family': 'met_family', 'met_through_as_coworkers': 'met_work'}
df = df[list(rename_cols_dict.keys())]
df.rename(columns=rename_cols_dict, inplace=True)
# Process number of marriages
df['N_marriages'] = df['N_marriages'].astype(str).replace({'nan':''}) + df['N_marriages2'].astype(str).replace({'nan':''})
df.drop('N_marriages2', axis=1, inplace=True)
df['N_marriages'] = df['N_marriages'].replace({'':np.nan, 'once (this is my first marriage)': 'once', 'refused':np.nan})
df['N_marriages'] = df['N_marriages'].astype('category')
# Clean entries to make simpler
df['in_metro'] = df['in_metro']=='metro'
df['relationship_excellent'] = df['relationship_quality'] == 'excellent'
df['house_payment'].replace({'owned or being bought by you or someone in your household': 'owned',
'rented for cash': 'rent',
'occupied without payment of cash rent': 'free'}, inplace=True)
df['race'].replace({'white, non-hispanic': 'white',
'2+ races, non-hispanic': 'other, non-hispanic',
'black, non-hispanic': 'black'}, inplace=True)
df['house_type'].replace({'a one-family house detached from any other house': 'house',
'a building with 2 or more apartments': 'apartment',
'a one-family house attached to one or more houses': 'house',
'a mobile home': 'mobile',
'boat, rv, van, etc.': 'mobile'}, inplace=True)
df['is_not_working'] = df['work'].str.contains('not working')
df['has_internet'] = df['has_internet'] == 'yes'
df['has_gay_friends'] = np.logical_or(df['has_gay_friendsfam']=='yes, friends', df['has_gay_friendsfam']=='yes, both')
df['has_gay_family'] = np.logical_or(df['has_gay_friendsfam']=='yes, relatives', df['has_gay_friendsfam']=='yes, both')
df['religion_is_christian'] = df['religion'].isin(['protestant (e.g., methodist, lutheran, presbyterian, episcopal)',
'catholic', 'baptist-any denomination', 'other christian', 'pentecostal', 'mormon', 'eastern orthodox'])
df['religion_is_none'] = df['religion'].isin(['none'])
df['in_relationship'] = df['in_relationship']=='partnered'
df['is_lgb'] = df['is_lgb']=='glb'
df['is_married'] = df['is_married']=='yes, i am married'
df['partner_race'].replace({'NH white': 'white', ' NH black': 'black',
' NH Asian Pac Islander':'other', ' NH Other': 'other', ' NH Amer Indian': 'other'}, inplace=True)
df['partner_religion_is_christian'] = df['partner_religion'].isin(['protestant (e.g., methodist, lutheran, presbyterian, episcopal)',
'catholic', 'baptist-any denomination', 'other christian', 'pentecostal', 'mormon', 'eastern orthodox'])
df['partner_religion_is_none'] = df['partner_religion'].isin(['none'])
df['partner_education'] = df['partner_education'].map({'hs graduate or ged': 'high school',
'some college, no degree': 'some college',
"associate degree": "some college",
"bachelor's degree": "bachelor's degree or higher",
"master's degree": "bachelor's degree or higher",
"professional or doctorate degree": "bachelor's degree or higher"})
df['partner_education'].fillna('less than high school', inplace=True)
df['USA_raised'] = df['USA_raised']=='raised in US'
df['N_marriages'] = df['N_marriages'].map({'never married': '0', 'once': '1', 'twice': '2', 'three times': '3+', 'four or more times':'3+'})
df['relative_income'].replace({'i earned more': 'more', 'partner earned more': 'less',
'we earned about the same amount': 'same', 'refused': np.nan}, inplace=True)
df['same_high_school'] = df['same_high_school']=='same high school'
df['same_college'] = df['same_college']=='attended same college or university'
df['same_hometown'] = df['same_hometown']=='yes'
df['cohabit'] = df['cohabit']=='yes'
df['met_online'] = df['met_online']=='met online'
df['met_friends'] = df['met_friends']=='meet through friends'
df['met_family'] = df['met_family']=='met through family'
df['met_work'] = df['met_family']==1
df['age'] = df['age'].astype(int)
for c in df.columns:
    if df[c].dtype == object:
df[c] = df[c].astype('category')
df.head()
df.to_csv('/gh/data/hcmst/1_cleaned.csv') | hcmst/process_raw_data.ipynb | srcole/qwm | mit |
Distributions | for c in df.columns:
print(df[c].value_counts())
# Countplot if categorical; distplot if numeric
from pandas.api.types import is_numeric_dtype
plt.figure(figsize=(40,40))
for i, c in enumerate(df.columns):
plt.subplot(7,7,i+1)
if is_numeric_dtype(df[c]):
sns.distplot(df[c].dropna(), kde=False)
else:
sns.countplot(y=c, data=df)
plt.savefig('temp.png')
sns.barplot(x='income', y='race', data=df) | hcmst/process_raw_data.ipynb | srcole/qwm | mit |
Since Series and DataFrame are used frequently, they should be imported directly by name.
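For example, a typical set of imports looks like this (assuming pandas is installed; the rest of this notebook assumes these names are available):

import pandas as pd
from pandas import Series, DataFrame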
Pandas Data Structures
Series
A Series is basically a one-dimensional array with indices.
You create a simplest Series like this: | ps = Series([4,2,1,3])
print ps | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
Get the values and indices like this: | print ps.values
print ps.index
ps[0] | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
To use a custom index, do this: | ps2 = Series([4, 7, -1, 8], ['a','b','c','d'])
ps2 | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
Often, you want to create a Series from a Python dict: | ps3 = Series({'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000})
ps3 | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
DataFrame
A DataFrame represents a tabular structure. It can be thought of as a dict of Series.
A DataFrame can be constructed from a dict of equal-length lists | data = {'state': ['Ohio', 'Ohio', 'Ohio', 'Nevada', 'Nevada'], 'year': [2000, 2001, 2002, 2001, 2002],
'pop': [1.5, 1.7, 3.6, 2.4, 2.9]}
df = DataFrame(data)
df | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
You can specify a sequence of columns like so: | DataFrame(data, columns=['year', 'state', 'pop']) | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
In addition to index and values, DataFrame has columns | print df.index
print
print df.values
print
print df.columns | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
You can get a specific column like this: | df['state'] | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
Rows can be retrieved using the ix method: | df.ix[0] | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
Another common form of data to create DataFrame is a nested dict of dicts OR nested dict of Series: | pop = {'Nevada': {2001: 2.4, 2002: 2.9}, 'Ohio': {2000: 1.5, 2001: 1.7, 2002: 3.6}}
df2 = DataFrame(pop)
df2 | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
You can pass explicit index when creating DataFrame: | df3=DataFrame(pop, index=[2001, 2002, 2003])
df3 | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
If a DataFrame’s index and columns have their name attributes set, these will also be displayed: | df2.index.name = 'year'
df2.columns.name = 'state'
df2 | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |
The 3rd common data input structures is a list of dicts or Series: | films = [{'star': 9.3, 'title': 'The Shawshank Redemption', 'content_rating': 'R'},
{'star': 9.2, 'title': 'The Godfather', 'content_rating': 'R'},
{'star': 9.1, 'title': 'The Godfather: Part II', 'content_rating': 'R'}
]
df3 = DataFrame(films)
df3 | pandas/Learn-Pandas-Completely.ipynb | minhhh/charts | mit |